Pollution and Health: A Global Public Health Crisis
[TOR] Evaluation of the Environmental Pollution Programme in Vietnam 2025 - 2026
Evaluation of the GAHP-DEFRA project “Reducing health and environmental impacts of pesticides and agricultural open burning in Vietnam”
Section 1. PROJECT BACKGROUND AND OVERVIEW
1. Project Description
Agricultural practices in Vietnam, particularly the use of agrochemicals and the practice of open burning, pose significant challenges to the environment and climate. This project addresses these issues by conducting comprehensive research and developing sustainable alternatives to mitigate their negative impacts.
Vietnam's agriculture relies heavily on agrochemicals, and the open burning of agricultural residues is also a common practice. These activities contribute to environmental pollution, biodiversity loss, and climate change. Black carbon emissions from open burning are significant climate-forcing pollutants. Additionally, agricultural effluents are identified as the primary driver of climate-linked biodiversity loss, with air pollution being another significant contributor.
The project comprises three components: “Air Pollution and Agricultural Open Burning,” “Integrated Pest Management (IPM) Expansion,” and “Development of an integrated and inclusive approach to effective waste management in Da Nang through business and community engagement.” The project has a structure typical of initiatives with a similar purpose.
2. Executing Arrangements
The project is implemented by the Global Alliance on Health and Pollution (GAHP) in partnership with the Vietnam Association for Conservation of Nature and Environment (VACNE), the Rainforest Alliance (RA), and the Center for Education and Development (CED). The project work is supervised by the Department for Environment, Food & Rural Affairs of the United Kingdom Government (DEFRA).
Section 2. OBJECTIVE AND SCOPE OF THE EVALUATION
3. Key Evaluation Principles
Evaluation findings and judgements should be based on sound evidence and analysis, clearly documented in the evaluation report. Information will be triangulated (i.e., verified from different sources) as far as possible, and when verification is not possible, the single source will be mentioned (whilst anonymity is still protected). Analysis leading to evaluative judgements should always be clearly spelled out.
The “Why?” Question. As this is a terminal evaluation, particular attention should be given to learning from the experience. Therefore, the “Why?” question should be at the front of the consultants’ minds all through the evaluation exercise and is supported by the use of a theory of change approach. This means that the consultants need to go beyond the assessment of “what” the project performance was, and make a serious effort to provide a deeper understanding of “why” the performance was as it was. This should provide the basis for the lessons that can be drawn from the project.
Baselines and counterfactuals. In attempting to attribute any outcomes and impacts to the project intervention, the evaluators should consider the difference between what has happened with, and what would have happened without, the project. This implies that there should be consideration of the baseline conditions, trends, and counterfactuals in relation to the intended project outcomes and impacts. It also means that there should be plausible evidence to attribute such outcomes and impacts to the actions of the project. Sometimes, adequate information on baseline conditions, trends or counterfactuals is lacking. In such cases, this should be clearly highlighted by the evaluators, along with any simplifying assumptions that were taken to enable the evaluator to make informed judgements about project performance.
Communicating evaluation results. A key aim of the evaluation is to encourage reflection and learning by DEFRA and GAHP staff, key project partners, and stakeholders. The consultant should consider how reflection and learning can be promoted, both through the evaluation process and in the communication of evaluation findings and key lessons. Clear and concise writing is required on all evaluation deliverables. Draft and final versions of the main evaluation report will be shared with key stakeholders.
4. Objective of the Evaluation
The Terminal Evaluation (TE) is undertaken at completion of the project to assess project performance (in terms of relevance, effectiveness, and efficiency), and determine outcomes and impacts (actual and potential) stemming from the project, including their sustainability. The evaluation has two primary purposes: (i) to provide evidence of results to meet accountability requirements, and (ii) to promote operational improvement, learning, and knowledge sharing through results and lessons learned among GAHP, DEFRA, VACNE, RA, and other partners. Therefore, the evaluation will identify lessons of operational relevance for follow-up projects or other similar initiatives.
5. Evaluation Criteria
All evaluation criteria will be rated on a six-point scale. Sections A-I below outline the scope of the criteria, and a link to a table for recording the ratings is provided in Annex 1. A weightings table will be provided in Excel format (link provided in Annex 1) to support the determination of an overall project rating. The evaluation criteria are grouped into nine categories: (A) Strategic Relevance; (B) Quality of Project Design; (C) Nature of External Context; (D) Effectiveness, which comprises assessments of the achievement of outputs, achievement of outcomes and likelihood of impact; (E) Financial Management; (F) Efficiency; (G) Monitoring and Reporting; (H) Sustainability; and (I) Factors Affecting Project Performance. The evaluation consultants can propose other evaluation criteria as deemed appropriate.
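The derivation of an overall project rating from the per-criterion ratings can be sketched as a weighted average, as below. This is only an illustrative sketch: the scale labels and weight values are assumptions for demonstration, and the authoritative weightings are those defined in the Excel table referenced in Annex 1.

```python
# Illustrative sketch of a weighted overall rating. The six-point scale
# labels and the example weights are ASSUMPTIONS, not taken from the ToR;
# the actual weightings live in the Annex 1 Excel table.

# Six-point rating scale mapped to numeric scores (assumed ordering).
SCALE = {
    "Highly Unsatisfactory": 1,
    "Unsatisfactory": 2,
    "Moderately Unsatisfactory": 3,
    "Moderately Satisfactory": 4,
    "Satisfactory": 5,
    "Highly Satisfactory": 6,
}


def overall_rating(ratings, weights):
    """Weighted average of per-criterion ratings, mapped back to the scale.

    ratings: {criterion: scale label}, e.g. {"A": "Satisfactory"}
    weights: {criterion: relative weight}, e.g. {"A": 0.1}
    Returns (numeric score, nearest scale label).
    """
    total_weight = sum(weights[c] for c in ratings)
    score = sum(SCALE[label] * weights[c] for c, label in ratings.items()) / total_weight
    # Map the numeric score back to the nearest scale label.
    nearest = min(SCALE, key=lambda label: abs(SCALE[label] - score))
    return score, nearest
```

For example, a "Satisfactory" Strategic Relevance rating weighted at 0.1 combined with a "Moderately Satisfactory" Effectiveness rating weighted at 0.3 yields a score of 4.25, rounding to "Moderately Satisfactory" overall.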
A. Strategic Relevance
The evaluation will assess ‘the extent to which the activity is suited to the priorities and policies of the target group, recipient and donor’. This criterion comprises three elements:
Factors affecting this criterion may include: stakeholders’ participation and cooperation; responsiveness to human rights and gender equity; and country ownership and driven-ness.
B. Quality of Project Design
The quality of project design is assessed using an agreed template during the evaluation inception phase; ratings are attributed to identified criteria, and an overall Project Design Quality rating is established. This overall Project Design Quality rating is entered in the final evaluation ratings table as item B. In the Main Evaluation Report, a summary of the project’s strengths and weaknesses at the design stage is included.
Factors affecting this criterion may include (at the design stage): stakeholders' participation and cooperation, and responsiveness to human rights and gender equity, including the extent to which relevant actions are adequately budgeted for.
C. Nature of External Context
At the evaluation inception stage, a rating is established for the project’s external operating context (considering the prevalence of conflict, natural disasters, and political upheaval). This rating is entered in the final evaluation ratings table as item C. Where a project has been rated as facing either an Unfavourable or Highly Unfavourable and unexpected external operating context, the overall rating for Effectiveness may be increased at the discretion of the Evaluation Consultant and Evaluation Manager together. A justification for such an increase must be given.
D. Effectiveness
The evaluation will assess effectiveness across three dimensions: achievement of outputs, achievement of direct outcomes, and likelihood of impact.
Achievement of Outputs
The evaluation will assess the project’s success in producing the programmed outputs (products and services delivered by the project itself) and achieving milestones as per the project work plan. Any formal modifications/revisions made during project implementation will be considered part of the project design. Where the project outputs are inappropriately or inaccurately stated in the work plan, a table should be provided showing the original formulation and the amended version for transparency. The achievement of outputs will be assessed in terms of both quantity and quality, and the assessment will consider their usefulness and the timeliness of their delivery. The evaluation will briefly explain the reasons behind the success or shortcomings of the project in delivering its programmed outputs and meeting expected quality standards.
Factors affecting this criterion may include: preparation and readiness, and quality of project management and supervision.
Achievement of Direct Outcomes
The achievement of direct outcomes is assessed as performance against the direct outcomes defined for the project.
Factors affecting this criterion may include: quality of project management and supervision; stakeholders’ participation and cooperation; responsiveness to human rights and gender equity; and communication and public awareness.
Likelihood of Impact
The evaluation will assess the likelihood of the intended, positive impacts becoming a reality. The evaluation will also consider the likelihood that the intervention may lead, or contribute to, unintended negative effects. Some of these potential negative effects may have been identified in the project design as risks.
The likelihood of impact could be evaluated through a theory of change reconstructed for evaluation purposes.
Factors affecting this criterion may include: quality of project management and supervision, including adaptive project management; stakeholders’ participation and cooperation; responsiveness to human rights and gender equity; country ownership and driven-ness; and communication and public awareness.
E. Financial Management
Financial management will be assessed under three broad themes: completeness of financial information, communication between financial and project management staff, and compliance with relevant GAHP financial management standards and procedures. The evaluation will establish the actual spend across the life of the project of funds secured from all donors. This expenditure will be reported, where possible, at the output level and will be compared with the approved budget. The evaluation will also assess to what extent any project delays could have been avoided through stronger project management and identify any negative impacts caused by project delays. The evaluation will verify the application of proper financial management standards and adherence to GAHP financial management policies. Any financial management issues that have affected the timely delivery of the project or the quality of its performance will be highlighted.
Factors affecting this criterion may include: preparation and readiness, and quality of project management and supervision.
F. Efficiency
The evaluation will assess the cost-effectiveness and timeliness of project execution. Focusing on the translation of inputs into outputs, cost-effectiveness is the extent to which an intervention has achieved, or is expected to achieve, its results at the lowest possible cost. Timeliness refers to whether planned activities were delivered according to expected timeframes, as well as whether events were sequenced efficiently. The evaluation will also assess to what extent any project extension could have been avoided through stronger project management and identify any negative impacts caused by project delays or extensions. The evaluation will describe any cost or time-saving measures put in place to maximise results within the secured budget and agreed project timeframe, and consider whether the project was implemented in the most efficient way compared to alternative interventions or approaches.
The evaluation will give special attention to efforts by the project teams to make use of/build upon pre-existing institutions, agreements and partnerships, data sources, synergies and complementarities with other initiatives, programmes and projects, etc., to increase project efficiency. The evaluation will also consider the extent to which the management of the project minimised GAHP's environmental footprint.
Factors affecting this criterion may include: preparation and readiness (e.g., timeliness); quality of project management and supervision; and stakeholders' participation and cooperation.
G. Monitoring and Reporting
The evaluation will assess monitoring and reporting across three sub-categories: monitoring design and budgeting, monitoring implementation, and project reporting.
a) Monitoring Design and Budgeting
Each project should be supported by a sound monitoring plan that is designed to track progress against SMART[1] indicators towards the achievement of the project's outputs and direct outcomes, including at a level disaggregated by gender or groups with low representation. The evaluation will assess the quality of the design of the monitoring plan as well as the funds allocated for its implementation. The adequacy of resources for mid-term and terminal evaluation/review should be discussed if applicable.
b) Monitoring Implementation
The evaluation will assess whether the monitoring system was operational and facilitated the timely tracking of results and progress towards project objectives throughout the project implementation period. It will also consider how information generated by the monitoring system during project implementation was used to adapt and improve project execution, achievement of outcomes, and ensure sustainability. The evaluation should confirm that funds allocated for monitoring were used to support this activity.
c) Project Reporting
GAHP prepared monthly updates for DEFRA. This information will be provided to the Evaluation Consultant(s) by the Evaluation Manager.
Factors affecting this criterion may include: quality of project management and supervision, and responsiveness to human rights and gender equity (e.g. disaggregated indicators and data).
H. Sustainability
Sustainability is understood as the probability of direct outcomes being maintained and developed after the close of the intervention. The evaluation will identify and assess the key conditions or factors that are likely to undermine or contribute to the persistence of achieved direct outcomes. Some factors of sustainability may be embedded in the project design and implementation approaches, while others may be contextual circumstances or conditions that evolve over the life of the intervention. Where applicable, an assessment of bio-physical factors that may affect the sustainability of direct outcomes may also be included.
a) Socio-political Sustainability
The evaluation will assess the extent to which social or political factors support the continuation and further development of project direct outcomes. It will consider the level of ownership, interest, and commitment among the government and other stakeholders to take the project achievements forward. In particular, the evaluation will consider whether individual capacity development efforts are likely to be sustained.
b) Financial Sustainability
Some direct outcomes, once achieved, do not require further financial inputs, e.g., the adoption of a revised policy. However, in order to derive a benefit from this outcome, further management action may still be needed, e.g., to undertake actions to enforce the policy. Other direct outcomes may be dependent on a continuous flow of action that needs to be resourced for them to be maintained, e.g., continuation of a new resource management approach. The evaluation will assess the extent to which project outcomes are dependent on future funding for the benefits they bring to be sustained. Secured future funding is only relevant to financial sustainability where the direct outcomes of a project have been extended into a future project phase. The question still remains as to whether the future project outcomes will be financially sustainable.
c) Institutional Sustainability
The evaluation will assess the extent to which the sustainability of project outcomes is dependent on issues relating to institutional frameworks and governance. It will consider whether institutional achievements such as governance structures and processes, policies, sub-regional agreements, legal and accountability frameworks, etc., are robust enough to continue delivering the benefits associated with the project outcomes after project closure.
Factors affecting this criterion may include: stakeholders’ participation and cooperation; responsiveness to human rights and gender equity (e.g., where interventions are not inclusive, their sustainability may be undermined); communication and public awareness; and country ownership and driven-ness.
I. Factors and Processes Affecting Project Performance
(These factors are rated in the ratings table, but are discussed as cross-cutting themes as appropriate under the other evaluation criteria, above.)
1. Preparation and Readiness
This criterion focuses on the inception or mobilisation stage of the project. The evaluation will assess whether appropriate measures were taken to either address weaknesses in the project design or respond to changes that took place between project approval, the securing of funds, and project mobilisation. In particular, the evaluation will consider the nature and quality of engagement with stakeholder groups by the project team, the confirmation of partner capacity and development of partnership agreements, as well as initial staffing and financing arrangements. (Project preparation is covered in the template for the assessment of Project Design Quality).
2. Quality of Project Management and Supervision
In some cases, ‘project management and supervision’ will refer to the supervision and guidance provided by GAHP to implementing partners.
The evaluation will assess the effectiveness of project management with regard to: providing leadership towards achieving the planned outcomes; managing team structures; maintaining productive partner relationships (including Steering Groups, etc.); communication and collaboration with GAHP colleagues; risk management; use of problem-solving; project adaptation and overall project execution. Evidence of adaptive project management should be highlighted.
3. Stakeholder Participation and Cooperation
Here, the term ‘stakeholder’ should be considered in a broad sense, encompassing all project partners, duty bearers with a role in delivering project outputs, target users of project outputs, and any other collaborating agents external to GAHP. The assessment will consider the quality and effectiveness of all forms of communication and consultation with stakeholders throughout the project life and the support given to maximise collaboration and coherence between various stakeholders, including sharing plans, pooling resources, and exchanging learning and expertise. The inclusion and participation of all differentiated groups, including gender groups, should be considered.
4. Responsiveness to Human Rights and Gender Equity
The evaluation will ascertain to what extent the project has applied the UN Common Understanding on the human rights-based approach (HRBA) and the UN Declaration on the Rights of Indigenous Peoples. Within this human rights context, the evaluation will assess to what extent the intervention adheres to the GAHP Policy and Strategy for Gender Equality and the Environment.
The report should present the extent to which the intervention, following an adequate gender analysis at the design stage, has implemented the identified actions and/or applied adaptive management to ensure that Gender Equity and Human Rights are adequately taken into account. In particular, the evaluation will consider to what extent project design (section B), the implementation that underpins effectiveness (section D), and monitoring (section G) have taken into consideration: (i) possible gender inequalities in access to and the control over natural resources; (ii) specific vulnerabilities of women and children to environmental degradation or disasters; (iii) the role of women in mitigating or adapting to environmental changes and engaging in environmental protection and rehabilitation.
5. Country Ownership and Driven-ness
The evaluation will assess the quality and degree of engagement of government / public sector agencies in the project. The evaluation will consider the involvement not only of those directly involved in project execution and those participating in technical or leadership groups, but also those official representatives whose cooperation is needed for change to be embedded in their respective institutions and offices. This factor is concerned with the level of ownership generated by the project over outputs and outcomes, and that is necessary for long-term impact to be realised. This ownership should adequately represent the needs and interests of all genders and marginalised groups.
6. Communication and Public Awareness
The evaluation will assess the effectiveness of: a) communication of learning and experience sharing between project partners and interested groups arising from the project during its life, and b) public awareness activities that were undertaken during the implementation of the project to influence attitudes or shape behaviour among wider communities and civil society at large. The evaluation should consider whether existing communication channels and networks were used effectively, including meeting the differentiated needs of gender and marginalised groups, and whether any feedback channels were established. Where knowledge sharing platforms have been established under a project, the evaluation will comment on the sustainability of the communication channel under either socio-political, institutional, or financial sustainability, as appropriate.
Section 3. EVALUATION APPROACH, METHODS, AND DELIVERABLES
The Terminal Evaluation will be a desk review evaluation supplemented by interviews of members of the project team, representatives of selected key stakeholders, and other methods selected by the evaluation team. Both quantitative and qualitative evaluation methods will be used as appropriate to determine project achievements against the expected outputs, outcomes, and impacts.
6. Evaluation Deliverables and Review Procedures
The evaluation team will prepare[2] a Draft and a Final Evaluation Report, containing: an executive summary that can act as a stand-alone document; a detailed analysis of the evaluation findings organised by evaluation criteria and supported with evidence; lessons learned and recommendations; and an annotated ratings table.
Review of the draft evaluation report. The evaluation team will submit a draft report to the Evaluation Manager and revise the draft in response to their comments and suggestions. Once a draft of adequate quality has been peer-reviewed and accepted, the Evaluation Manager will share the cleared draft report with the Project Manager, who will alert the Evaluation Manager in case the report contains any blatant factual errors. The Evaluation Manager will then forward the revised draft report (corrected by the evaluation team where necessary) to other project stakeholders for their review and comments. Stakeholders may provide feedback on any errors of fact and may highlight the significance of such errors in any conclusions, as well as provide feedback on the proposed recommendations and lessons. Any comments or responses to draft reports will be sent to the Evaluation Manager for consolidation. The Evaluation Manager will provide all comments to the evaluation team for consideration in preparing the final report, along with guidance on areas of contradiction or issues requiring an institutional response.
Based on a careful review of the evidence collated by the evaluation consultants and the internal consistency of the report, the Evaluation Manager will provide an assessment of the ratings in the final evaluation report. Where there are differences of opinion between the evaluator and the Evaluation Manager on project ratings, both viewpoints will be clearly presented in the final report.
7. The Consultants’ Team
For this evaluation, the evaluation team will consist of a Team Leader and one Supporting Consultant who will work with the Evaluation Manager. The consultants will liaise with the Evaluation Manager on any procedural and methodological matters related to the evaluation. The project team will, where possible, provide support (introductions, meetings, etc.), allowing the consultants to conduct the evaluation as efficiently and independently as possible.
The Lead Consultant will be hired for the period from 1 November 2024 to 20 March 2025 and should have: an advanced university degree in environmental sciences, international development, or another relevant political or social sciences area; a minimum of 10 years of technical/evaluation experience, including experience evaluating national/regional programmes; a broad understanding of Persistent Organic Pollutants (POPs); excellent writing skills in English; and team leadership experience.
The Supporting Technical Consultant will be hired for the period from 1 November 2024 to 20 March 2025 and should have: a university degree in environmental sciences, international development, or another relevant political or social sciences area; a minimum of 10 years of technical/research experience; a good understanding of Persistent Organic Pollutants (POPs), pesticides management, and CSO operations and capacity building, as well as knowledge of the cultural context of Vietnam; and sufficient writing skills in English. The Supporting Consultant will provide expert support to the Lead Consultant.
The Team Leader will be responsible for overall management of the evaluation and timely delivery of its outputs. The Supporting Consultant will make substantive and high-quality contributions to the evaluation process and outputs. Both consultants will ensure together that all evaluation criteria and questions are adequately covered.
8. Schedule of the evaluation
Deadline:
Contracting procedures: November 1, 2024
Inception Meeting (Zoom): November 10, 2024
Draft report to Evaluation Manager: January 15, 2025
Draft Report shared with the project team: January 25, 2025
Draft Report shared with a wider group of stakeholders: January 30, 2025
Final Report: February 20, 2025
[1] SMART refers to indicators that are specific, measurable, assignable, realistic, and time-specific.
[2] The Evaluation Manager can provide samples as required.