Measurement Library


Below are answers to some common questions about the Measurement Library. For other questions and information, contact

  1. What is the Measurement Library?

    The Measurement Library (ML) provides stakeholders with tools to collect data that help them strengthen the education and protection of children and youth in crisis- and conflict-affected contexts. The ML contains tools that can be used to collect data on the quality of programming and on children's holistic development outcomes. These tools include measures as well as information on their psychometric properties, training, and guidance materials.

  2. What is the INEE Measurement Library Reference Group?

    The INEE Measurement Library Reference Group (MLRG) supports the continued development and expansion of the Measurement Library by providing technical support. The key responsibilities of MLRG members include, but are not limited to:

    • Peer-reviewing and providing independent technical feedback on the measurement tools submitted to the Measurement Library
    • Peer-reviewing and providing independent technical feedback on the training materials that accompany measures submitted to the Measurement Library

    In addition to these core tasks, all MLRG members promote awareness of the Measurement Library among their colleagues and networks to encourage submissions to, and use of, the Library. As such, the MLRG seeks to influence EiE stakeholders, including policymakers and practitioners, to use evidence-based research and reliable measures to achieve holistic learning outcomes for children in crisis contexts.

  3. How can new measures be added to the Measurement Library?

    The current information regarding the submission and review processes of potential measures can be found on the call for submissions webpage.

  4. I’d like to select a measure to use. How do I get started?

    Before selecting a tool from the ML, we strongly encourage all users to consult the Measure Guidance. This resource will not only help you determine which tools are best suited to the assessment you are pursuing; it may also help you better understand the questions you ought to be asking yourself about program design and evaluation, screening and diagnostic processes, and curriculum development.

  5. What do we mean when we say, “high-quality data”?

    “High-quality” data refers to data that is complete, consistent, valid, timely, verified, and accurate.

  6. What if I want to adapt the measures and guidance materials?

    The Measurement Library’s materials have been carefully selected and vetted to ensure they are as useful and effective as possible. While we can only vouch for the quality of the tools as they are presented, we appreciate that you may benefit from adapting them for your own purposes. However, any use of a measure included in the Library that deviates from the use clearly specified on the measure’s landing page requires the express permission of the measure’s developer.

    For adapting measures and guidance materials, please consult the Measure Guidance.

  7. Why aren’t more measures in the Measurement Library “ready for purpose”?

    Developing a measurement tool that is “ready for purpose” takes a considerable amount of time. Establishing sufficient evidence of a measure’s validity and reliability requires multiple trials in varied contexts and for specific purposes; findings from only a few studies in limited contexts are difficult to generalize. More importantly, a measure’s findings must be reproducible whenever the measure is administered again. This reproducibility is critical for establishing the needed confidence in a measure and helps avoid flawed programming decisions.

    For more information, visit the Measurement Library Measure Review Criteria.

  8. Why are measures that are “proceed for purpose with caution” and “under development” listed in the Measurement Library?

    Generating credible research evidence takes time and requires transparency in the reporting of evidence at all stages. While some measures have mixed or limited evidence of reliability and validity, they still provide an opportunity to learn about a measure’s vital characteristics and to inform the future development of more reliable tools. These measures are therefore still published in the Measurement Library.

    At the same time, we recognize the tremendous risk of using measures with inadequate or inaccurate evidence of reliability and validity. This risk underscores the importance of transparency and the need to protect the credibility of the development process. Lessons learned along the way are critical for continuously improving the quality of the tools.