
Conducting and Using Evaluative Site Visits

Randi K. Nelson
Denise L. Roseland


New Directions for Evaluation

Sponsored by the American Evaluation Association


Leslie A. Fierro
Claremont Graduate University
Todd M. Franke
University of California Los Angeles


Anne Vo
University of Southern California
Bianca Montrosse-Moorhead
University of Connecticut
Christina Christie
University of California Los Angeles
Cynthia Phillips
National Science Foundation*
Debra Joy Perez
Gordon and Betty Moore Foundation
Debra Rog
Westat and the Rockville Institute
Gail Barrington
Barrington Research Group, Inc.
Isabelle Bourgeois
École Nationale d'Administration Publique (ÉNAP)
James McDavid
University of Victoria
John Gargani
Gargani + Company
Katrina Bledsoe
EDC and Bledsoe Consulting
Leah Neubauer
Northwestern University
Leslie Cooksy
Sierra Health Foundation
Lisa Dillman
Education Northwest
Maurice Samuels
MacArthur Foundation
Melvin Mark
Pennsylvania State University
Michael Morris
University of New Haven
Miri Levin-Rozalis
Ben Gurion University of the Negev and Davidson Institute at the Weizmann Institute of Science
Nick L. Smith
Syracuse University
Robin Lin Miller
Michigan State University
Rodney Hopson
Duquesne University
Stephanie Shipman
U.S. Government Accountability Office
Stewart Donaldson
Claremont Graduate University
Thomas Archibald

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes works on all aspects of evaluation, with an emphasis on presenting timely and thoughtful reflections on leading-edge issues of evaluation theory, practice, methods, the profession, and the organizational, cultural, and societal context within which evaluation occurs. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by one or more guest editors.

The co-editors-in-chief are seeking proposals for journal issues from around the globe about topics new to the journal (although topics discussed in the past can be revisited). A diversity of perspectives and creative bridges between evaluation and other disciplines, as well as chapters reporting original empirical research on evaluation, are encouraged. A wide range of topics and substantive domains are appropriate for publication, including evaluative endeavors other than program evaluation; however, the proposed topic must be of interest to a broad evaluation audience.

Journal issues may take any of several forms. Typically they are presented as a series of related chapters, but they might also be presented as a debate; an account, with critique and commentary, of an exemplary evaluation; a feature-length article followed by brief critical commentaries; or perhaps another form proposed by guest editors.

Submitted proposals must follow the format found on the Association's website. Proposals are sent to members of the journal's Editorial Advisory Board and to relevant substantive experts for single-blind peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. The journal does not consider or publish unsolicited single manuscripts.

Before submitting proposals, all parties are asked to contact the co-editors-in-chief, who are committed to working constructively with potential guest editors to help them develop acceptable proposals. For additional information about the journal, see the “Statement of the Co-Editors-in-Chief” in the Fall 2017 issue (Issue 155).




Leslie A. Fierro, Co-Editor-in-Chief
Assistant Clinical Professor of Evaluation
Claremont Graduate University
Division of Organizational and Behavioral Sciences



Todd M. Franke, Co-Editor-in-Chief
Department of Social Welfare

Editors’ Notes

As the title suggests, this volume of New Directions for Evaluation (NDE) is focused on the use of site visits for evaluation. Although site visits are not a new methodology or method of data collection, the evaluation literature contains very little that provides guidance to professional practice outside the context of accreditation. Evaluation journals and evaluation association conferences frequently include articles and presentations highlighting evaluations that collect data through site visits. However, they do not typically address the wider implications of rigor, ethics, and quality of site visits. Government agencies, philanthropic organizations, and some professional associations have published guidelines for conducting site visits. These typically include site-visit plans, questionnaires, and report templates with a focus on site visits in specific organizational and programmatic contexts. They are usually intended for internal use within the organization and therefore are of limited utility to the broader evaluation community. In contrast, the recommendations and discussions in this volume can serve as a guide to a wider range of evaluation constituents who commission, plan, conduct, and use site visits for purposes including, but not limited to, accreditation. This volume provides an accessible compilation of recommendations based on the experiences of its authors who are evaluation commissioners, practitioners, theorists, and methodologists.

The Guiding Literature on Site Visits in Evaluation

Articles published in peer-reviewed journals on the subject of site visits as a method or methodology in evaluation include two articles by Frances Lawrenz and a book chapter and journal article by Michael Quinn Patton, both of whom are contributors to this NDE volume. Lawrenz's most recent article compared two approaches to conducting site visits for evaluating research centers (Lawrenz, Thao, & Johnson, 2012). An earlier article co-authored by Lawrenz presented evidence for describing evaluative site visits as both a method and a methodology (Lawrenz, Keiser, & Lavoie, 2003). Lawrenz's articles created a strong foundation to support further exploration and consideration of what it means to conduct high-quality site visits. Other peer-reviewed journal articles the authors found in preparing the NDE volume proposal include discussions and research on limitations of site visits due to providing advance notice of the site visit (Palackal & Shrum, 2011); comparison of the quality of data collected using site visits, photographs, and written descriptions of ecological impacts of wilderness campsites (Shelby & Harris, 1985); the acceptability of the process and results of a site-visit protocol for faculty development and identifying infrastructure needs at community-based health-care teaching sites (Malik, Bordman, Regehr, & Freeman, 2007); and a review of tools for direct observation to assess the quality of after-school and youth development programs (Yohalem & Wilson-Ahlstrom, 2009). This volume builds on these initial studies and addresses and elaborates the concerns Patton raised about the quality and appropriate use of site visits (Patton, 2015) and a proposed set of standards for site visits (Patton, 2014).

Advancing the Conversation About Professional Standards for Site Visits

The volume provides an overview of site visits as a methodology and recommends strategies to guide the professional practice of evaluators who conduct site visits and those who commission them. The volume is based on research and the experience of the authors, who are evaluation theorists and evaluation practitioners. The authors provide wide-ranging perspectives on planning and conducting site visits in diverse substantive areas and geographical contexts, and for different purposes.

In this issue, we define an evaluative site visit as one that involves people with “specific expertise and preparation” who visit a site for “a limited period of time and gather information about an evaluation object … to prepare testimony addressing the purpose of the site visit” (Lawrenz, Keiser, & Lavoie, 2003). This definition includes, but is not limited to, site visits conducted for accreditation purposes. In Chapter 1, Melissa Chapman Haynes, Nora Murphy, and Michael Quinn Patton offer a guiding typology for site visits. In Chapter 2, practitioners Corey Newhouse, Denise Roseland, and Professor Frances Lawrenz discuss the trade-offs and planning considerations professional evaluators may face when deciding to use evaluative site visits. Stan Capela, well known for his extensive engagement in accreditation site visits, and Joe Frisino offer a recipe for conducting accreditation site visits in Chapter 3. In Chapter 4, Donna Podems, who has extensive international site-visit experience, offers a cautionary tale about site visits in international contexts. Randi Nelson conducted original research for Chapter 5, interviewing key informants to get their perspectives on site visits. In Chapter 6, Melissa Chapman Haynes and Ashley Johnson offer advice for training site visitors. Finally, Michael Quinn Patton reflects on the volume and proposes a revision to the standards he offered in 2015, which are paraphrased in Table 1 (Patton, 2015). This volume provides specific, experience-based recommendations for improving the quality and utility of site visits for stakeholders at multiple levels. The authors hope it will spur further conversations and research into improving the theory, practice, and use of evaluative site visits.

Table 1 A Condensed Version of Patton's 2015 Draft Standards for Site Visits

1. Competence Ensure that site-visit team members have skills and experience in qualitative observation and interviewing. Availability and subject matter expertise do not suffice.
2. Knowledge For an evaluative site visit, ensure at least one team member, preferably the team leader, has evaluation knowledge and credentials.
3. Preparation Site visitors should know something about the site being visited based on background materials, briefings, and/or prior experience.
4. Site participation People at sites should be engaged in planning and preparation for the site visit to minimize disruption to program activities and services.
5. Do no harm Site-visit stakes can be high, with risks for people and programs. Good intentions, naiveté, and general cluelessness are not excuses. Be alert to what can go wrong and commit as a team to do no harm.
6. Credible fieldwork People at the site should be involved and informed, but they should not control the information collection in ways that undermine, significantly limit, or corrupt the inquiry. The evaluators should determine the activities observed and people interviewed, and arrange confidential interviews to enhance data quality.
7. Neutrality An evaluator conducting fieldwork should not have a preformed position on the intervention or the intervention model.
8. Debriefing and feedback Before departing from the field, key people at the site should be debriefed on highlights of findings and a timeline of when (or if) they will receive an oral or written report of findings.
9. Site review Those at the site should have an opportunity to respond in a timely way to site visitors’ reports, to correct errors and provide an alternative perspective on findings and judgments. Triangulation and a balance of perspectives should be the rule.
10. Follow-up The agency commissioning the site visit should do some minimal follow-up to assess the quality of the site visit from the perspective of the locals on site.


  1. Lawrenz, F., Keiser, N., & Lavoie, B. (2003). Evaluative site visits: A methodological review. American Journal of Evaluation, 24, 341–352.
  2. Lawrenz, F., Thao, M., & Johnson, K. (2012). Expert reviews of research centers: The site visit process. Evaluation and Program Planning, 35, 390–397.
  3. Malik, R., Bordman, R., Regehr, G., & Freeman, R. (2007). Continuous quality improvement and community-based faculty development through an innovative site visit program at one institution. Academic Medicine, 82, 465–468.
  4. Palackal, A., & Shrum, W. (2011). Patterns of visitation: Site visits and evaluation in developing areas. Sociological Bulletin, 60, 327–345.
  5. Patton, M. Q. (2014). Qualitative research and evaluation methods: Integrating theory and practice (4th ed.). Thousand Oaks, CA: Sage.
  6. Patton, M. Q. (2015). Evaluation in the field: The need for site visit standards. American Journal of Evaluation, 36, 444–460.
  7. Shelby, B., & Harris, R. (1985). Comparing methods for determining visitor evaluations of ecological impacts: Site visits, photographs, and written descriptions. Journal of Leisure Research, 17, 57–67.
  8. Yohalem, N., & Wilson-Ahlstrom, A., with Fischer, S., & Shinn, M. (2009). Measuring youth program quality: A guide to assessment tools (2nd ed.). Washington, DC: The Forum for Youth Investment.

Randi K. Nelson

Denise L. Roseland