Evaluation

In common usage, evaluation is a systematic determination and assessment of a subject's merit, worth and significance, using criteria governed by a set of standards. It can assist an organization, program, design, project or any other intervention or initiative to assess any aim, realizable concept/proposal, or any alternative, to help in decision-making; or to ascertain the degree of achievement or value in regard to the aim, objectives and results of any such action that has been completed.[1]

The primary purpose of evaluation, in addition to gaining insight into prior or existing initiatives, is to enable reflection and assist in the identification of future change.[2] Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises, including the arts, criminal justice, foundations, non-profit organizations, government, health care, and other human services. It is typically longer-term in focus and is carried out at the end of a period of activity.

Definition


Evaluation is the structured interpretation and giving of meaning to predicted or actual impacts of proposals or results. It looks at original objectives, and at what is either predicted or what was accomplished and how it was accomplished. So evaluation can be formative, that is taking place during the development of a concept or proposal, project or organization, with the intention of improving the value or effectiveness of the proposal, project, or organization. It can also be summative, drawing lessons from a completed action or project or an organization at a later point in time or circumstance.[3]

Evaluation is inherently a theoretically informed approach (whether explicitly or not), and consequently any particular definition of evaluation would have been tailored to its context – the theory, needs, purpose, and methodology of the evaluation process itself. Having said this, evaluation has been defined as:

  • A systematic, rigorous, and meticulous application of scientific methods to assess the design, implementation, improvement, or outcomes of a program. It is a resource-intensive process, frequently requiring resources such as evaluative expertise, labor, time, and a sizable budget[4]
  • "The critical assessment, in as objective a manner as possible, of the degree to which a service or its component parts fulfills stated goals" (St Leger and Wordsworth-Bell).[5][failed verification] The focus of this definition is on attaining objective knowledge, and scientifically or quantitatively measuring predetermined and external concepts.
  • "A study designed to assist some audience to assess an object's merit and worth" (Stufflebeam).[5][failed verification] In this definition the focus is on facts as well as value laden judgments of the programs outcomes and worth.

Purpose


The main purpose of a program evaluation can be to "determine the quality of a program by formulating a judgment" (Marthe Hurteau, Sylvain Houle, Stéphanie Mongiat, 2009).[6] An alternative view is that "projects, evaluators, and other stakeholders (including funders) will all have potentially different ideas about how best to evaluate a project since each may have a different definition of 'merit'. The core of the problem is thus about defining what is of value."[5] From this perspective, evaluation "is a contested term", as "evaluators" use the term evaluation to describe an assessment, or investigation of a program whilst others simply understand evaluation as being synonymous with applied research.

Two functions can be distinguished according to the purpose of the evaluation. Formative evaluations provide information for improving a product or process. Summative evaluations provide information on short-term effectiveness or long-term impact, to inform decisions about adopting a product or process.[7]

Not all evaluations serve the same purpose: some evaluations serve a monitoring function rather than focusing solely on measurable program outcomes or evaluation findings, and a full list of types of evaluations would be difficult to compile.[5] This is because evaluation is not part of a unified theoretical framework,[8] drawing instead on a number of disciplines, including management and organizational theory, policy analysis, education, sociology, social anthropology, and social change.[9]

Discussion


Strict adherence to a set of methodological assumptions may make the field of evaluation more acceptable to a mainstream audience, but this adherence works towards preventing evaluators from developing new strategies for dealing with the myriad problems that programs face.[9] It is claimed that only a minority of evaluation reports are used by the evaluand (client) (Data, 2006).[6] One justification of this is that "when evaluation findings are challenged or utilization has failed, it was because stakeholders and clients found the inferences weak or the warrants unconvincing" (Fournier and Smith, 1993).[6] Some reasons for this situation may be the failure of the evaluator to establish a set of shared aims with the evaluand, or creating overly ambitious aims, as well as failing to compromise and incorporate the cultural differences of individuals and programs within the evaluation aims and process.[5] None of these problems are due to a lack of a definition of evaluation, but are rather due to evaluators attempting to impose predisposed notions and definitions of evaluations on clients. The central reason for the poor utilization of evaluations is arguably[by whom?] the lack of tailoring of evaluations to suit the needs of the client, stemming from a predefined idea (or definition) of what an evaluation is rather than what the client's needs are (House, 1980).[6] The development of a standard methodology for evaluation will require arriving at applicable ways of asking and stating the results of questions about ethics, such as agent-principal, privacy, stakeholder definition, and limited liability, as well as could-the-money-be-spent-more-wisely issues.

Standards


Depending on the topic of interest, there are professional groups that review the quality and rigor of evaluation processes.

Evaluating programs and projects, regarding their value and impact within the context they are implemented, can be ethically challenging. Evaluators may encounter complex, culturally specific systems resistant to external evaluation. Furthermore, the project organization or other stakeholders may be invested in a particular evaluation outcome. Finally, evaluators themselves may encounter "conflict of interest (COI)" issues, or experience interference or pressure to present findings that support a particular assessment.

General professional codes of conduct, as determined by the employing organization, usually cover three broad aspects of behavioral standards, and include inter-collegial relations (such as respect for diversity and privacy), operational issues (due competence, documentation accuracy and appropriate use of resources), and conflicts of interest (nepotism, accepting gifts and other kinds of favoritism).[10] However, specific guidelines particular to the evaluator's role that can be utilized in the management of unique ethical challenges are required. The Joint Committee on Standards for Educational Evaluation has developed standards for program, personnel, and student evaluation. The Joint Committee standards are broken into four sections: Utility, Feasibility, Propriety, and Accuracy. Various European institutions have also prepared their own standards, more or less related to those produced by the Joint Committee. They provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for the general and public welfare.[11]

The American Evaluation Association has created a set of Guiding Principles for evaluators.[12] The order of these principles does not imply priority among them; priority will vary by situation and evaluator role. The principles run as follows:

  • Systematic inquiry: evaluators conduct systematic, data-based inquiries about whatever is being evaluated. This requires quality data collection, including a defensible choice of indicators, which lends credibility to findings.[13] Findings are credible when they are demonstrably evidence-based, reliable and valid. This also pertains to the choice of methodology employed, such that it is consistent with the aims of the evaluation and provides dependable data. Furthermore, utility of findings is critical such that the information obtained by evaluation is comprehensive and timely, and thus serves to provide maximal benefit and use to stakeholders.[10]
  • Competence: evaluators provide competent performance to stakeholders. This requires that evaluation teams comprise an appropriate combination of competencies, such that varied and appropriate expertise is available for the evaluation process, and that evaluators work within their scope of capability.[10]
  • Integrity/Honesty: evaluators ensure the honesty and integrity of the entire evaluation process. A key element of this principle is freedom from bias in evaluation and this is underscored by three principles: impartiality, independence, and transparency.

Independence is attained by ensuring that independence of judgment is upheld, such that evaluation conclusions are not influenced or pressured by another party, and by avoiding conflicts of interest, such that the evaluator does not have a stake in a particular conclusion. Conflict of interest is at issue particularly where funding of evaluations is provided by bodies with a stake in the conclusions of the evaluation, and this is seen as potentially compromising the independence of the evaluator. Whilst it is acknowledged that evaluators may be familiar with agencies or projects that they are required to evaluate, independence requires that they not have been involved in the planning or implementation of the project. A declaration of interest should be made where the evaluator has any benefits from, or association with, the project. Independence of judgment must be maintained against any pressures brought to bear on evaluators, for example by project funders wishing to modify evaluations such that the project appears more effective than the findings can verify.[10]

Impartiality pertains to findings being a fair and thorough assessment of the strengths and weaknesses of a project or program. This requires taking due input from all stakeholders involved, and presenting findings without bias and with a transparent, proportionate, and persuasive link between findings and recommendations. Thus evaluators are required to delimit their findings to the evidence. A mechanism to ensure impartiality is external and internal review. Such review is required of significant evaluations (significance being determined in terms of cost or sensitivity). The review is based on the quality of the work and the degree to which a demonstrable link is provided between findings and recommendations.[10]

Transparency requires that stakeholders are aware of the reason for the evaluation, the criteria by which evaluation occurs and the purposes to which the findings will be applied. Access to the evaluation document should be facilitated through findings being easily readable, with clear explanations of evaluation methodologies, approaches, sources of information, and costs incurred.[10]

  • Respect for People: Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact. This is particularly pertinent with regard to those who will be affected by the evaluation findings.[13] Protection of people includes ensuring informed consent from those involved in the evaluation, upholding confidentiality, and ensuring that the identity of those who may provide sensitive information towards the program evaluation is protected.[14] Evaluators are ethically required to respect the customs and beliefs of those who are affected by the evaluation or program activities. Such respect is demonstrated, for example, by observing local customs such as dress codes, respecting people's privacy, and minimizing demands on others' time.[10] Where stakeholders wish to raise objections to evaluation findings, such a process should be facilitated through the local office of the evaluation organization, and procedures for lodging complaints or queries should be accessible and clear.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare. Access to evaluation documents by the wider public should be facilitated such that discussion and feedback is enabled.[10]

Furthermore, international organizations such as the I.M.F. and the World Bank have independent evaluation functions. The various funds, programmes, and agencies of the United Nations have a mix of independent, semi-independent and self-evaluation functions, which have organized themselves as a system-wide UN Evaluation Group (UNEG),[13] which works together to strengthen the function and to establish UN norms and standards for evaluation. There is also an evaluation group within the OECD-DAC, which endeavors to improve development evaluation standards.[15] The independent evaluation units of the major multinational development banks (MDBs) have also created the Evaluation Cooperation Group[16] to strengthen the use of evaluation for greater MDB effectiveness and accountability, share lessons from MDB evaluations, and promote evaluation harmonization and collaboration.

Perspectives


The word "evaluation" has various connotations for different people, raising issues related to this process that include; what type of evaluation should be conducted; why there should be an evaluation process and how the evaluation is integrated into a program, for the purpose of gaining greater knowledge and awareness? There are also various factors inherent in the evaluation process, for example; to critically examine influences within a program that involve the gathering and analyzing of relative information about a program.

Michael Quinn Patton advanced the view that the evaluation procedure should be directed towards:

  • Activities
  • Characteristics
  • Outcomes
  • The making of judgments about a program
  • Improving its effectiveness
  • Informing programming decisions

According to another perspective on evaluation, offered by Thomson and Hoffman in 2003, a situation may be encountered in which the process could not be considered advisable, for instance in the event of a program being unpredictable or unsound. This would include the program lacking a consistent routine, or the concerned parties being unable to reach an agreement regarding the purpose of the program. In addition, an influencer or manager may refuse to incorporate relevant, important central issues within the evaluation.

Approaches


There exist several conceptually distinct ways of thinking about, designing, and conducting evaluation efforts. Many of the evaluation approaches in use today make truly unique contributions to solving important problems, while others refine existing approaches in some way.

Classification of approaches


Two classifications of evaluation approaches by House[17] and Stufflebeam and Webster[18] can be combined into a manageable number of approaches in terms of their unique and important underlying principles.[clarification needed]

House considers all major evaluation approaches to be based on a common ideology entitled liberal democracy. Important principles of this ideology include freedom of choice, the uniqueness of the individual and empirical inquiry grounded in objectivity. He also contends that they are all based on subjectivist ethics, in which ethical conduct is based on the subjective or intuitive experience of an individual or group. One form of subjectivist ethics is utilitarian, in which "the good" is determined by what maximizes a single, explicit interpretation of happiness for society as a whole. Another form of subjectivist ethics is intuitionist/pluralist, in which no single interpretation of "the good" is assumed and such interpretations need not be explicitly stated nor justified.

These ethical positions have corresponding epistemologies, that is, philosophies for obtaining knowledge. The objectivist epistemology is associated with the utilitarian ethic; in general, it is used to acquire knowledge that can be externally verified (intersubjective agreement) through publicly exposed methods and data. The subjectivist epistemology is associated with the intuitionist/pluralist ethic and is used to acquire new knowledge based on existing personal knowledge, as well as experiences that are (explicit) or are not (tacit) available for public inspection. House then divides each epistemological approach into two main political perspectives. Approaches can take an elite perspective, focusing on the interests of managers and professionals, or a mass perspective, focusing on consumers and participatory approaches.

Stufflebeam and Webster place approaches into one of three groups, according to their orientation toward the role of values and ethical consideration. The political orientation promotes a positive or negative view of an object regardless of what its value actually is and might be—they call this pseudo-evaluation. The questions orientation includes approaches that might or might not provide answers specifically related to the value of an object—they call this quasi-evaluation. The values orientation includes approaches primarily intended to determine the value of an object—they call this true evaluation.

When the above concepts are considered simultaneously, fifteen evaluation approaches can be identified in terms of epistemology, major perspective (from House), and orientation.[18] Two pseudo-evaluation approaches, politically controlled and public relations studies, are represented. They are based on an objectivist epistemology from an elite perspective. Six quasi-evaluation approaches use an objectivist epistemology. Five of them—experimental research, management information systems, testing programs, objectives-based studies, and content analysis—take an elite perspective. Accountability takes a mass perspective. Seven true evaluation approaches are included. Two approaches, decision-oriented and policy studies, are based on an objectivist epistemology from an elite perspective. Consumer-oriented studies are based on an objectivist epistemology from a mass perspective. Two approaches—accreditation/certification and connoisseur studies—are based on a subjectivist epistemology from an elite perspective. Finally, adversary and client-centered studies are based on a subjectivist epistemology from a mass perspective.
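This three-way classification (epistemology, political perspective, and value orientation) can be represented as a simple lookup structure. The Python sketch below is purely illustrative: the groupings follow the description in this section, while the data structure, field names, and helper function are hypothetical conveniences rather than anything prescribed by House or by Stufflebeam and Webster.

```python
from collections import namedtuple

# Each approach is classified by epistemology, political perspective (House),
# and value orientation (Stufflebeam & Webster), as described in this section.
Approach = namedtuple("Approach", ["epistemology", "perspective", "orientation"])

APPROACHES = {
    # Pseudo-evaluation
    "politically controlled": Approach("objectivist", "elite", "pseudo-evaluation"),
    "public relations":       Approach("objectivist", "elite", "pseudo-evaluation"),
    # Quasi-evaluation
    "experimental research":          Approach("objectivist", "elite", "quasi-evaluation"),
    "management information systems": Approach("objectivist", "elite", "quasi-evaluation"),
    "testing programs":               Approach("objectivist", "elite", "quasi-evaluation"),
    "objectives-based":               Approach("objectivist", "elite", "quasi-evaluation"),
    "content analysis":               Approach("objectivist", "elite", "quasi-evaluation"),
    "accountability":                 Approach("objectivist", "mass",  "quasi-evaluation"),
    # True evaluation
    "decision-oriented":           Approach("objectivist",  "elite", "true evaluation"),
    "policy studies":              Approach("objectivist",  "elite", "true evaluation"),
    "consumer-oriented":           Approach("objectivist",  "mass",  "true evaluation"),
    "accreditation/certification": Approach("subjectivist", "elite", "true evaluation"),
    "connoisseur":                 Approach("subjectivist", "elite", "true evaluation"),
    "adversary":                   Approach("subjectivist", "mass",  "true evaluation"),
    "client-centered":             Approach("subjectivist", "mass",  "true evaluation"),
}

def approaches_with(**criteria):
    """Return the approaches whose classification matches all given criteria."""
    return [name for name, a in APPROACHES.items()
            if all(getattr(a, field) == value for field, value in criteria.items())]

if __name__ == "__main__":
    print(approaches_with(epistemology="subjectivist", perspective="mass"))
    # -> ['adversary', 'client-centered']
```

Enumerating the fifteen approaches this way makes it easy to see which cells of the classification are densely populated (objectivist, elite) and which contain only one or two approaches.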

Summary of approaches


Each approach can be summarized in terms of four attributes: organizer, purpose, strengths, and weaknesses. The organizer represents the main considerations or cues practitioners use to organize a study. The purpose represents the desired outcome for a study at a very general level. Strengths and weaknesses represent other attributes that should be considered when deciding whether to use the approach for a particular study. The narrative following the summary highlights differences between approaches grouped together.

Summary of approaches for conducting evaluations:

  • Politically controlled. Organizer: threats. Purpose: get, keep or increase influence, power or money. Key strengths: secures evidence advantageous to the client in a conflict. Key weaknesses: violates the principle of full and frank disclosure.
  • Public relations. Organizer: propaganda needs. Purpose: create a positive public image. Key strengths: secures evidence most likely to bolster public support. Key weaknesses: violates the principles of balanced reporting, justified conclusions, and objectivity.
  • Experimental research. Organizer: causal relationships. Purpose: determine causal relationships between variables. Key strengths: strongest paradigm for determining causal relationships. Key weaknesses: requires a controlled setting, limits the range of evidence, focuses primarily on results.
  • Management information systems. Organizer: scientific efficiency. Purpose: continuously supply the evidence needed to fund, direct and control programs. Key strengths: gives managers detailed evidence about complex programs. Key weaknesses: human service variables are rarely amenable to the narrow, quantitative definitions needed.
  • Testing programs. Organizer: individual differences. Purpose: compare test scores of individuals and groups to selected norms. Key strengths: produces valid and reliable evidence in many performance areas; very familiar to the public. Key weaknesses: data usually cover only testee performance, overemphasize test-taking skills, and can be a poor sample of what is taught or expected.
  • Objectives-based. Organizer: objectives. Purpose: relate outcomes to objectives. Key strengths: common-sense appeal, widely used, uses behavioral objectives and testing technologies. Key weaknesses: leads to terminal evidence often too narrow to provide a basis for judging the value of a program.
  • Content analysis. Organizer: content of a communication. Purpose: describe and draw conclusions about a communication. Key strengths: allows unobtrusive analysis of large volumes of unstructured, symbolic materials. Key weaknesses: sample may be unrepresentative yet overwhelming in volume; analysis design is often overly simplistic for the question.
  • Accountability. Organizer: performance expectations. Purpose: provide constituents with an accurate accounting of results. Key strengths: popular with constituents; aimed at improving the quality of products and services. Key weaknesses: creates unrest between practitioners and consumers; politics often forces premature studies.
  • Decision-oriented. Organizer: decisions. Purpose: provide a knowledge and value base for making and defending decisions. Key strengths: encourages the use of evaluation to plan and implement needed programs; helps justify decisions about plans and actions. Key weaknesses: the necessary collaboration between evaluator and decision-maker provides an opportunity to bias results.
  • Policy studies. Organizer: broad issues. Purpose: identify and assess the potential costs and benefits of competing policies. Key strengths: provides general direction for broadly focused actions. Key weaknesses: often corrupted or subverted by the politically motivated actions of participants.
  • Consumer-oriented. Organizer: generalized needs and values, effects. Purpose: judge the relative merits of alternative goods and services. Key strengths: independent appraisal to protect practitioners and consumers from shoddy products and services; high public credibility. Key weaknesses: might not help practitioners do a better job; requires credible and competent evaluators.
  • Accreditation/certification. Organizer: standards and guidelines. Purpose: determine whether institutions, programs and personnel should be approved to perform specified functions. Key strengths: helps the public make informed decisions about the quality of organizations and the qualifications of personnel. Key weaknesses: standards and guidelines typically emphasize intrinsic criteria to the exclusion of outcome measures.
  • Connoisseur. Organizer: critical guideposts. Purpose: critically describe, appraise and illuminate an object. Key strengths: exploits highly developed expertise on the subject of interest; can inspire others to more insightful efforts. Key weaknesses: dependent on a small number of experts, making the evaluation susceptible to subjectivity, bias and corruption.
  • Adversary. Organizer: "hot" issues. Purpose: present the pros and cons of an issue. Key strengths: ensures balanced presentation of the represented perspectives. Key weaknesses: can discourage cooperation and heighten animosities.
  • Client-centered. Organizer: specific concerns and issues. Purpose: foster understanding of activities and how they are valued in a given setting and from a variety of perspectives. Key strengths: practitioners are helped to conduct their own evaluation. Key weaknesses: low external credibility; susceptible to bias in favor of participants.

Note. Adapted and condensed primarily from House (1978) and Stufflebeam & Webster (1980).[18]

Pseudo-evaluation


Politically controlled and public relations studies are based on an objectivist epistemology from an elite perspective.[clarification needed] Although both of these approaches seek to misrepresent value interpretations about an object, they function differently from each other. Information obtained through politically controlled studies is released or withheld to meet the special interests of the holder, whereas public relations information creates a positive image of an object regardless of the actual situation. Despite the application of both studies in real scenarios, neither of these approaches is acceptable evaluation practice.

Objectivist, elite, quasi-evaluation


As a group, these five approaches represent a highly respected collection of disciplined inquiry approaches. They are considered quasi-evaluation approaches because particular studies legitimately can focus only on questions of knowledge without addressing any questions of value. Such studies are, by definition, not evaluations. These approaches can produce characterizations without producing appraisals, although specific studies can produce both. Each of these approaches serves its intended purpose well. They are discussed roughly in order of the extent to which they approach the objectivist ideal.

  • Experimental research is the best approach for determining causal relationships between variables. The potential problem with using this as an evaluation approach is that its highly controlled and stylized methodology may not be sufficiently responsive to the dynamically changing needs of most human service programs.
  • Management information systems (MISs) can give detailed information about the dynamic operations of complex programs. However, this information is restricted to readily quantifiable data usually available at regular intervals.
  • Testing programs are familiar to just about anyone who has attended school, served in the military, or worked for a large company. These programs are good at comparing individuals or groups to selected norms in a number of subject areas or to a set of standards of performance. However, they only focus on testee performance and they might not adequately sample what is taught or expected.
  • Objectives-based approaches relate outcomes to prespecified objectives, allowing judgments to be made about their level of attainment. The objectives are often not proven to be important or they focus on outcomes too narrow to provide the basis for determining the value of an object.
  • Content analysis is a quasi-evaluation approach because content analysis judgments need not be based on value statements. Instead, they can be based on knowledge. Such content analyses are not evaluations. On the other hand, when content analysis judgments are based on values, such studies are evaluations.

Objectivist, mass, quasi-evaluation

  • Accountability is popular with constituents because it is intended to provide an accurate accounting of results that can improve the quality of products and services. However, this approach quickly can turn practitioners and consumers into adversaries when implemented in a heavy-handed fashion.

Objectivist, elite, true evaluation

  • Decision-oriented studies are designed to provide a knowledge base for making and defending decisions. This approach usually requires close collaboration between an evaluator and decision-maker, which makes it susceptible to corruption and bias.
  • Policy studies provide general guidance and direction on broad issues by identifying and assessing potential costs and benefits of competing policies. The drawback is these studies can be corrupted or subverted by the politically motivated actions of the participants.

Objectivist, mass, true evaluation

  • Consumer-oriented studies are used to judge the relative merits of goods and services based on generalized needs and values, along with a comprehensive range of effects. However, this approach does not necessarily help practitioners improve their work, and it requires a very good and credible evaluator to do it well.

Subjectivist, elite, true evaluation

  • Accreditation / certification programs are based on self-study and peer review of organizations, programs, and personnel. They draw on the insights, experience, and expertise of qualified individuals who use established guidelines to determine if the applicant should be approved to perform specified functions. However, unless performance-based standards are used, attributes of applicants and the processes they perform often are overemphasized in relation to measures of outcomes or effects.
  • Connoisseur studies use the highly refined skills of individuals intimately familiar with the subject of the evaluation to critically characterize and appraise it. This approach can help others see programs in a new light, but it is difficult to find a qualified and unbiased connoisseur.

Subjectivist, mass, true evaluation

  • The adversary approach focuses on drawing out the pros and cons of controversial issues through quasi-legal proceedings. This helps ensure a balanced presentation of different perspectives on the issues, but it is also likely to discourage later cooperation and heighten animosities between contesting parties if "winners" and "losers" emerge.

Client-centered

  • Client-centered studies address specific concerns and issues of practitioners and other clients of the study in a particular setting. These studies help people understand the activities and values involved from a variety of perspectives. However, this responsive approach can lead to low external credibility and a favorable bias toward those who participated in the study.

Methods and techniques


Evaluation is methodologically diverse. Methods may be qualitative or quantitative, and include case studies, survey research, statistical analysis, model building, and many others.
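As a minimal illustration of the quantitative end of this methodological spectrum, the Python sketch below compares outcome scores for a program group and a comparison group, reporting the difference in means and a pooled-standard-deviation effect size (Cohen's d). The data, group names, and choice of statistic are hypothetical and assumed for illustration; they are not drawn from any particular evaluation described here.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical outcome scores collected after a program (illustrative only).
program_group    = [72, 78, 81, 69, 85, 77, 90, 74]
comparison_group = [65, 70, 68, 72, 61, 74, 66, 69]

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

difference = mean(program_group) - mean(comparison_group)
print(f"Mean difference: {difference:.1f} points")
print(f"Effect size (Cohen's d): {cohens_d(program_group, comparison_group):.2f}")
```

In practice, a comparison of this kind would sit alongside qualitative evidence and a defensible choice of indicators and sampling strategy, in line with the systematic inquiry principle discussed under Standards above.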


References

  1. ^ Staff (1995–2012). "2. What Is Evaluation?". International Center for Alcohol Policies - Analysis. Balance. Partnership. International Center for Alcohol Policies. Archived from the original on 2025-08-06. Retrieved 13 May 2012.
  2. ^ Sarah del Tufo (13 March 2002). "WHAT is evaluation?". Evaluation Trust. The Evaluation Trust. Archived from the original on 30 April 2012. Retrieved 13 May 2012.
  3. ^ Michael Scriven (1967). "The methodology of evaluation". In Stake, R. E. (ed.). Curriculum evaluation. Chicago: Rand McNally (American Educational Research Association monograph series on evaluation, no. 1).
  4. ^ Rossi, P.H.; Lipsey, M.W.; Freeman, H.E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks: Sage. ISBN 978-0-7619-0894-4.
  5. ^ a b c d e Reeve, J; Paperboy, D. (2007). "Evaluating the evaluation: Understanding the utility and limitations of evaluation as a tool for organizational learning". Health Education Journal. 66 (2): 120–131. doi:10.1177/0017896907076750. S2CID 73248087.
  6. ^ a b c d Hurteau, M.; Houle, S.; Mongiat, S. (2009). "How Legitimate and Justified are Judgments in Program Evaluation?". Evaluation. 15 (3): 307–319. doi:10.1177/1356389009105883. S2CID 145812003.
  7. ^ Staff (2011). "Evaluation Purpose". designshop – lessons in effective teaching. Learning Technologies at Virginia Tech. Archived from the original on 2025-08-06. Retrieved 13 May 2012.
  8. ^ Alkin; Ellett (1990). not given. p. 454.
  9. ^ a b Potter, C. (2006). "Psychology and the art of program evaluation". South African Journal of Psychology. 36 (1): 82–102. doi:10.1177/008124630603600106. S2CID 145698028.
  10. ^ a b c d e f g h David Todd (2007). GEF Evaluation Office Ethical Guidelines (PDF). Washington, DC, United States: Global Environment Facility Evaluation Office. Archived from the original (PDF) on 2025-08-06. Retrieved 2025-08-06.
  11. ^ Staff (2012). "News and Events". Joint Committee on Standards for Educational Evaluation. Archived from the original on October 15, 2009. Retrieved 13 May 2012.
  12. ^ Staff (July 2004). "AMERICAN EVALUATION ASSOCIATION GUIDING PRINCIPLES FOR EVALUATORS". American Evaluation Association. Archived from the original on 29 April 2012. Retrieved 13 May 2012.
  13. ^ a b c Staff (2012). "UNEG Home". United Nations Evaluation Group. Archived from the original on 13 May 2012. Retrieved 13 May 2012.
  14. ^ World Bank Institute (2007). "Monitoring & Evaluation for Results Evaluation Ethics What to expect from your evaluators" (PDF). World Bank Institute. The World Bank Group. Archived (PDF) from the original on 1 November 2012. Retrieved 13 May 2012.
  15. ^ Staff. "DAC Network On Development Evaluation". OECD - Better Policies For Better Lives. OECD. Archived from the original on 2 June 2012. Retrieved 13 May 2012.
  16. ^ Staff. "Evaluation Cooperation Group". Evaluation Cooperation Group website. ECG. Archived from the original on 13 June 2006. Retrieved 31 May 2013.
  17. ^ House, E. R. (1978). Assumptions underlying evaluation models. Educational Researcher. 7(3), 4-12.
  18. ^ a b c Stufflebeam, D. L., & Webster, W. J. (1980). "An analysis of alternative approaches to evaluation" Archived 2025-08-06 at the Wayback Machine. Educational Evaluation and Policy Analysis. 2(3), 5-19. OCLC 482457112