Conduct a program evaluation within the Office of Alumni Affairs


Assignment:

As you continue in your role as an evaluator conducting a program evaluation within the Office of Alumni Affairs, note that the focus for your evaluation is now defined as this program goal or activity: improvement of the overall pledge rate.

In your initial post, discuss what you would list as the key data sources and the kinds of data needed to analyze this goal or activity. Justify why these data would be needed.

Module Overview

"Data gathering focuses on collecting information that conveys a . . . picture of the . . . program and can be seen as credible by stakeholders. Data gathering includes consideration about what indicators, data sources and methods to use, the quality and quantity of the information . . . and the context in which the data gathering occurs."

-Instructional Assessment Resources, 2011

Sources of evidence in an evaluation may be people, documents, or interactive or unobtrusive observations. More than one source may be used to gather evidence. In fact, selecting multiple sources provides an opportunity to include different perspectives about the program and enhances the credibility of the program evaluation. For example, an inside perspective may be reflected by internal documents and comments from staff or department managers, whereas clients and those who do not directly support the program may provide different but equally relevant perspectives. Mixing these and other perspectives can provide a more comprehensive view of the program being evaluated.

Much has been written and is available regarding data collection methods. "It is important to recognize that while there are similarities between research and [program] evaluation, there are as many differences" (Glenaffric Ltd., 2007, p. 13). Some data are already available, and you need only find and analyze them; other data you may need to collect specifically for the evaluation. Several interactive and unobtrusive data collection methods are among your choices, including qualitative and quantitative document reviews, focus groups, interviews, observations, questionnaires, surveys, and sampling. Ultimately, the choice of methods should be influenced by a number of factors, including the availability and experience of evaluation staff to gather data and analyze results; time; availability of and access to data; and the type of program and its context (Glenaffric Ltd., 2007, p. 13).
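For a goal like improving the overall pledge rate, much of the quantitative evidence reduces to simple ratios computed from records the office likely already keeps. The sketch below illustrates the kind of analysis involved; the field names and figures are hypothetical assumptions for illustration, not an actual Alumni Affairs data schema.

```python
# Minimal sketch: computing the overall pledge rate from alumni giving
# records and its year-over-year change. All record values below are
# invented for illustration.

records = [
    {"year": 2019, "solicited": 5000, "pledged": 850},
    {"year": 2020, "solicited": 5200, "pledged": 988},
]

def pledge_rate(solicited, pledged):
    """Pledge rate = pledges received / alumni solicited."""
    return pledged / solicited

# Rate per year, keyed by year
rates = {r["year"]: pledge_rate(r["solicited"], r["pledged"]) for r in records}

# Year-over-year change in the rate (the "improvement" being evaluated)
change = rates[2020] - rates[2019]

print(f"2019 rate: {rates[2019]:.1%}")  # 17.0%
print(f"2020 rate: {rates[2020]:.1%}")  # 19.0%
print(f"Change: {change:+.1%}")         # +2.0%
```

A calculation like this only becomes a judgment once stakeholders agree on a standard, for example, whether a two-point gain meets the threshold they set for "improvement."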

The process of justifying data sources and subsequent conclusions recognizes that evidence in an evaluation does not necessarily speak for itself. Evidence must be carefully considered from a number of different stakeholders' perspectives to reach conclusions that are well-substantiated and justified. Conclusions become justified when they are linked to the evidence gathered and judged against agreed-upon values set by the stakeholders. Stakeholders must agree that conclusions are justified in order to use the evaluation results with confidence (Milstein & Wetterhall, 2014).

The principal elements involved in justifying conclusions based on evidence are (adapted from Milstein & Wetterhall, 2014):

• Standards reflect the values held by stakeholders about the program. They provide the basis to make program judgments. The use of explicit standards for judgment is fundamental to sound evaluation.

• Analysis and synthesis are methods to discover and summarize an evaluation's findings. They are designed to detect patterns in evidence, either by isolating important findings (analysis) or by combining different sources of information to reach a larger understanding (synthesis).

• Interpretation is the effort of determining what the findings mean. Uncovering facts about a program's performance is not enough to draw conclusions; the facts must be interpreted to understand their practical significance. In short, interpretations draw on information and perspectives that stakeholders bring to the evaluation. They can be strengthened through active participation or interaction with the data and preliminary explanations of what happened.

• Judgments are statements about the merit, worth, or significance of the program. They are formed by comparing the findings and their interpretations against one or more selected standards. Because multiple standards can be applied to a given program, stakeholders may reach different or even conflicting judgments. For instance, a program that increases its outreach by 10% from the previous year may be judged positively by program managers, based on standards of improved performance over time. Community members, however, may feel that despite improvements, a minimum threshold of access to services has still not been reached. Their judgment, based on standards of social equity, would therefore be negative. Conflicting claims about a program's quality, value, or importance often indicate that stakeholders are using different standards or values in making judgments. This type of disagreement can be a catalyst to clarify values and to negotiate the appropriate basis (or bases) on which the program should be judged.

References

Glenaffric Ltd. (2007). Six steps to effective evaluation: Analyze results.

Instructional Assessment Resources. (2011). Evaluate programs: Program evaluation process. The University of Texas at Austin.

Milstein, B., & Wetterhall, S. (2014). A framework for program evaluation: A gateway to tools. Community Tool Box.
