Below are answers to some of the common questions about the Measurement Library. For other questions and information, contact email@example.com
- What is the Measurement Library?
- Why is a resource like this Measurement Library necessary?
- Who is this Measurement Library intended for? What types of professionals might find it useful?
- How were the organizations involved in putting together content for the first phase of the Measurement Library selected for this project?
- What is the INEE Measurement Library Reference Group?
- I’d like to select a measure to use. How do I get started?
- What if the measures in the Library don’t capture what I/my organization want to assess?
- What do we mean when we say, “high-quality data”?
- How can I determine whether data that I collect is high-quality?
- What if I want to adapt the measures and guidance materials?
- There are measures out there that say they can be used in any context and produce reliable, valid data. Why not just use one of those?
- Why aren’t more measures in the Measurement Library “ready for purpose”?
- Why are measures that are “proceed for purpose with caution” and “under development” listed in the MENAT Measurement Library?
What is the Measurement Library?
The Measurement Library provides stakeholders with tools to collect data that will help them strengthen the education and protection of children and youth in crisis- and conflict-affected contexts. The Library contains tools that can be used to collect data on the quality of service provision and on children's holistic development outcomes. These include the measures themselves, information on their psychometric properties, and training and guidance materials.
Why is a resource like this Measurement Library necessary?
A longstanding challenge for those working to educate children and youth in crisis contexts has been a lack of data and evidence to guide decision-making. This knowledge gap includes data and evidence about whether children are learning, what skills individual children have or have not mastered, the need for targeted educational and mental health supports, and whether programs are working for all intended beneficiaries.
Without this knowledge, researchers, practitioners and funders often rely on intuition and instinct rather than facts when deciding how to design and fund programs meant to serve children affected by conflict and crisis. The Library will help those working with children collect and interpret data that can inform design, policy and funding decisions.
Who is this Measurement Library intended for? What types of professionals might find it useful?
At each stage of its development, the Measurement Library is intended for anyone working on or interested in the education, protection, health, safety and well-being of children and youth in emergency situations. The Library’s first measures focused on the MENAT region, but the Library is now useful to education researchers and practitioners globally. Practitioners will find it helpful as they select, adapt and administer measures to monitor and evaluate their programs, provide feedback to children and service providers, and identify children in need of specific services. For researchers, the first phase of the Library will help them better determine the quality of the data that results from using the measures and understand which measures are available or relevant as they pursue further data collection.
Practitioners, policymakers and researchers may use the Library to see the types and breadth of measures currently available and to identify which measures could be adapted for use in their own contexts. Donors engaged or interested in engaging with education in crisis contexts may use the Library to see what measures can and cannot be used for and to get a deeper sense of what collecting and analyzing high-quality data entails. The Library will also inform donors’ decision-making about investments in measurement, data analysis and data use.
How were the organizations involved in putting together content for the first phase of the Measurement Library selected for this project?
The Library’s content in this initial phase was selected and provided by the Evidence to Action: Education in Emergencies (3EA) MENAT Consortium.
What is the INEE Measurement Library Reference Group?
The INEE Measurement Library Reference Group (MLRG) supports the continued building of the Library by providing technical knowledge and input at various stages of the project. Specifically, the MLRG encourages the submission and review of (1) additional measurement tools and user-centred guidance materials to be added to the Measurement Library, and (2) informational materials provided to end-users on each measure’s purpose and level of reliability and validity.
The 24 members of the MLRG come from 20 different organizations/institutions and 16 different countries.
I’d like to select a measure to use. How do I get started?
Before selecting tools from the Library, we strongly encourage all users to consult the Measure Guidance. This resource will not only help you determine which tools are best suited to the assessment you are pursuing; it may also help you better understand the other questions you ought to ask yourself about program design and evaluation, screening and diagnostic processes, and curriculum development.
What if the measures in the Library don’t capture what I/my organization want to assess?
There is a chance that the Measurement Library won’t contain the precise tools you need. If this occurs, we encourage you to reach out to the Library’s designers directly at firstname.lastname@example.org and let us know what information you’re seeking and how the Library could better suit your organization’s needs. While we cannot guarantee that we will be able to address every concern, hearing from individuals and organizations working in this space is always helpful and holds the potential to improve future iterations of the Measurement Library.
What do we mean when we say, “high-quality data”?
“High-quality” data refers to data that is complete, consistent, valid, timely, verified and accurate.
How can I determine whether data that I collect is high-quality?
Before using the Measurement Library’s tools, be sure to review the Library’s evidence on the validity and reliability of each tool, the information about the testing context and purpose on each measure’s landing page, and the training materials. Following the guidance provided will in turn help ensure that the data you collect is reliable and valid, which is a key component of data that is “high-quality.” High-quality data will also contain few missing values and will be presented in a clear, coherent fashion.
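As a concrete illustration of one completeness check, the short sketch below computes the share of missing values per field in a batch of collected records. This is not part of the Library; the field names and records are hypothetical, and real datasets will need context-appropriate definitions of “missing.”

```python
# Illustrative sketch only: a quick completeness check on collected records.
# All field names and values below are made up for demonstration.
def missing_rates(records, fields):
    """Return the fraction of missing (None or empty-string) values per field."""
    rates = {}
    for field in fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        rates[field] = missing / len(records)
    return rates

records = [
    {"child_id": "a1", "literacy_score": 12, "sel_score": 3},
    {"child_id": "a2", "literacy_score": None, "sel_score": 4},
    {"child_id": "a3", "literacy_score": 9, "sel_score": None},
    {"child_id": "a4", "literacy_score": 11, "sel_score": 5},
]

print(missing_rates(records, ["child_id", "literacy_score", "sel_score"]))
# → {'child_id': 0.0, 'literacy_score': 0.25, 'sel_score': 0.25}
```

A report of per-field missingness like this makes it easy to spot fields with high non-response before analysis begins.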
What if I want to adapt the measures and guidance materials?
The Measurement Library’s materials have been carefully selected and vetted to ensure they are as useful and effective as possible. While we can only vouch for the quality of the tools as they are presented, we appreciate that you may also benefit from adapting them for your own purposes. However, any use of a measure included in the Library that deviates from the use clearly specified in the measure’s landing page requires the express permission of the measure’s developer.
For adapting measures and guidance materials, please consult the Measure Guidance.
There are measures out there that say they can be used in any context and produce reliable, valid data. Why not just use one of those?
While caregivers, teachers and policymakers across contexts may broadly agree on the skills and competencies critical for children’s long-term success, how such skills are named, defined, manifested, operationalized and/or prioritized differs by context. For that reason, we recommend that measures always be, at minimum, adapted for use in a new context. The measure should also be retested to evaluate the evidence for its use in the new situation and context, and the results of the adaptation shared back to promote shared learning.
Why aren’t more measures in the Measurement Library “ready for purpose”?
The Measurement Consortium’s testing of measures produced varied results: not all measures showed sufficient evidence of validity and reliability to be designated “ready for purpose.” There are two main reasons for this. First, strong measures are not created instantly. They are developed and refined over time as evidence accumulates across new trials, different contexts and distinct purposes. Identifying strong measures is like trying to identify which programs work best for children: it is hard to draw broad conclusions from one or two evaluations of programs in different contexts; dozens of trials are needed to have confidence that a program is really working and achieving what is intended.
The same is true for measures. Those in the Measurement Library that are “ready for purpose” were further along in the process of testing and iteration, having already gone through extensive rounds of revision. Others were at a more nascent stage when our group convened.
In addition, we have set a high bar for designating measures as “ready for purpose”: to meet this standard, a measure must show good to excellent evidence of multiple types of reliability and validity when tested in a given context. Moreover, we must have confidence in the stability of the results, that is, that the results could be replicated if the measure were tested with the same sample again.
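To make “types of reliability” more concrete, the sketch below computes Cronbach’s alpha, one widely used index of internal-consistency reliability. This is an illustration only, not the Library’s review method; the item scores are invented, and a full review would examine several forms of reliability and validity evidence, not a single coefficient.

```python
# Illustrative sketch: Cronbach's alpha, a common index of internal-consistency
# reliability. The item scores below are fabricated for demonstration.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned across the same
    respondents. Returns alpha = k/(k-1) * (1 - sum(item vars)/var(totals))."""
    k = len(item_scores)
    respondents = list(zip(*item_scores))        # rows = respondents
    totals = [sum(row) for row in respondents]   # total score per respondent
    item_var = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Four hypothetical items scored by five respondents.
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
    [3, 4, 2, 5, 5],
]
print(round(cronbach_alpha(items), 2))  # → 0.91
```

Values of alpha near 1 suggest the items hang together as a scale; but because alpha is sample- and context-dependent, it must be re-estimated whenever a measure is adapted or used with a new population.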
Read more about the Measurement Library Measure Review Criteria.
Why are measures that are “proceed for purpose with caution” and “under development” listed in the MENAT Measurement Library?
It is crucial that information generated by measures in the Library meets a high standard of reliability and validity. It is equally important to be candid and transparent about measures that thus far have mixed or little evidence of reliability and validity in the situations in which they were tested. Ultimately, even when the results are not as straightforward as we had hoped, we can learn important things about the measure we are testing and how to revise it for the next iteration. We see this gain in knowledge as an opportunity, not a loss.
At the same time, we recognize that there is tremendous risk in using measures that provide inaccurate information. This risk underscores the importance of transparency; if we refuse to over-sell our progress or wave away inconvenient facts learned along the way, we ultimately protect the credibility of this initiative and help ensure that time and resources are not wasted going down any unproductive roads.