Question Analysis in OnTarget helps you understand how well your test questions are working. Think of it as a health check for your assessments – it tells you which questions are doing their job effectively and which ones might need some attention.
The analysis looks at two main things:
- How difficult each question is (P-value)
- How well each question separates high and low performers (Point Biserial Correlation)
Understanding Question Difficulty (P-Value)
What It Means
The P-value tells you what percentage of students got a question right. (Despite the name, this is the item-analysis difficulty index – the “p” stands for proportion correct – not the p-value from statistical significance testing.) It’s like asking, “Out of 100 students, how many would typically answer this question correctly?” Note that a higher P-value means an easier question.
The Sweet Spot: 30-70%
OnTarget considers questions most effective when 30-70% of students answer them correctly. Here’s why:
- Above 70%: Question might be too easy and won’t help you distinguish between different skill levels
- 30-70%: Perfect range – challenging enough to be meaningful but fair enough that prepared students can succeed
- Below 30%: Question might be too difficult, poorly written, or cover content students haven’t mastered
What This Looks Like in Practice
- P-value of 85%: Most students got this right – might be testing basic knowledge or review material
- P-value of 50%: Half the students got this right – good discriminating question
- P-value of 15%: Very few students got this right – may need review or revision
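If it helps to see the arithmetic behind those numbers, here is a minimal sketch of the calculation. It assumes scored responses are stored as a simple 0/1 grid (1 = correct); the data and variable names are illustrative, not OnTarget’s internals.

```python
# Hypothetical scored responses: one row per student, one column per
# question, 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1],  # student 1
    [1, 0, 0, 1],  # student 2
    [1, 1, 1, 0],  # student 3
    [0, 1, 0, 1],  # student 4
]

num_students = len(responses)
num_questions = len(responses[0])

# P-value for a question = proportion of students who answered it correctly.
p_values = [
    sum(row[q] for row in responses) / num_students
    for q in range(num_questions)
]

for q, p in enumerate(p_values, start=1):
    print(f"Question {q}: P-value = {p:.0%}")
```

In this tiny example, question 3 has a P-value of 25% – in the “too difficult” zone – while the other three sit at 75%, just above the sweet spot.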
Understanding Question Quality (Point Biserial Correlation)
What It Measures
This statistic measures the relationship between answering a particular question correctly and doing well on the test as a whole. In plain terms, it tells you whether your high-performing students are getting the question right while your struggling students are getting it wrong. Essentially, it asks: “Is this question working the way it should?”
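For readers who want to see the formula in action, here is a minimal sketch of a point biserial calculation, assuming you have each student’s 0/1 score on the question and their total test score. It uses only the Python standard library; the function name and sample data are illustrative.

```python
import statistics

def point_biserial(item_scores, total_scores):
    """Correlation between a 0/1 item score and students' total scores.

    item_scores:  1 if the student answered this question correctly, else 0.
    total_scores: each student's overall test score.
    Returns a value between -1 and 1; higher means better discrimination.
    """
    n = len(item_scores)
    mean_item = statistics.mean(item_scores)
    mean_total = statistics.mean(total_scores)
    covariance = sum(
        (i - mean_item) * (t - mean_total)
        for i, t in zip(item_scores, total_scores)
    ) / n
    # Population standard deviations; assumes neither column is constant
    # (a question everyone got right or wrong has no defined correlation).
    return covariance / (
        statistics.pstdev(item_scores) * statistics.pstdev(total_scores)
    )

# Example: the stronger students got the item right, the weaker ones missed it.
item = [1, 1, 0, 0]
totals = [38, 35, 22, 18]
print(f"{point_biserial(item, totals):.2f}")  # ~0.98 – a strong discriminator
```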
OnTarget’s Quality Categories
Very Good Questions (0.40 and above)
- Your star performers! These questions effectively separate high and low achievers
- Students who know the material get them right; students who don’t know it get them wrong
- Keep these questions – they’re doing exactly what they should
Good Questions (0.30-0.39)
- Solid questions that work well
- Good discrimination between different skill levels
- Definitely worth keeping
Fairly Good Questions (0.20-0.29)
- Acceptable questions but could be improved
- Still useful but consider small revisions to make them even better
Marginal Questions (0.10-0.19)
- These questions aren’t working as well as they should
- Consider major revisions or replacement
Poor Questions (0.09 and below)
- Red flag! These questions aren’t helping you assess student learning
- Remove these questions from future assessments
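Put together, the category cutoffs above amount to a simple lookup. The sketch below encodes them; the function name is illustrative, not part of OnTarget.

```python
def quality_category(correlation):
    """Map a point biserial correlation to the quality labels above."""
    if correlation >= 0.40:
        return "Very Good"
    if correlation >= 0.30:
        return "Good"
    if correlation >= 0.20:
        return "Fairly Good"
    if correlation >= 0.10:
        return "Marginal"
    return "Poor"  # includes negative correlations – see the warning below
```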
Warning Signs
If a question has a negative correlation, your best students are getting it wrong while your struggling students are getting it right. This usually points to a problem with the question itself – for example, a miskeyed answer, ambiguous wording, or a distractor that is arguably correct.
Making Sense of Your Results Together
The Best Questions
- Difficulty: 30-70% of students answer correctly
- Quality: 0.20 or higher discrimination
- Action: Keep these questions – they’re working as intended
Questions That Need Attention
- Too Easy + Good Quality: Still useful – consider using them as review material or confidence builders
- Too Hard + Good Quality: May be appropriate for advanced students or end-of-year assessments
- Any Difficulty + Poor Quality: These need revision or removal
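If you want to apply these rules of thumb to a whole item bank at once, the combined logic looks roughly like the sketch below (thresholds as described above; the function name is illustrative).

```python
def recommend_action(p_value, correlation):
    """Suggest an action from difficulty (as a 0-1 proportion) and
    discrimination, following the guidelines above."""
    good_quality = correlation >= 0.20
    in_sweet_spot = 0.30 <= p_value <= 0.70

    if not good_quality:
        return "Revise or remove"  # any difficulty + poor quality
    if in_sweet_spot:
        return "Keep"              # the best questions
    if p_value > 0.70:
        return "Keep as review material or confidence builder"
    return "Keep for advanced students or end-of-year use"

print(recommend_action(0.50, 0.35))  # Keep
print(recommend_action(0.85, 0.25))  # Keep as review material...
print(recommend_action(0.40, 0.05))  # Revise or remove
```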
Beyond the Numbers: Validity Evidence Review
OnTarget’s question analysis aligns with professional testing standards outlined in the TEA Technical Digest. The Texas Education Agency emphasizes that valid assessments must provide strong evidence that they measure what they’re intended to measure. This is called validity evidence, and OnTarget helps you gather this evidence through both statistical analysis and qualitative review.
Why Validity Matters
According to the TEA Technical Digest, when we use test scores to make decisions about student achievement, our assessments must support those decisions. In other words, if we say a student has mastered a learning objective based on their test performance, we need to be confident that our test actually measured that objective effectively.
The Seven Pillars of Valid Questions
OnTarget evaluates each question against seven key validity criteria based on TEA standards. A good question meets all seven:
1. TEKS Alignment
- Questions directly measure specific Texas Essential Knowledge and Skills standards
- Content matches what you’ve taught and what students are expected to learn
- Clear connection between the question and the learning objective
2. Bias and Sensitivity
- Questions are fair to all student groups regardless of gender, ethnicity, or background
- No content that might disadvantage certain students due to cultural references
- Language and scenarios are inclusive and appropriate
3. Language and Vocabulary
- Reading level is appropriate for your grade level
- Technical terms are necessary and have been taught
- Sentence structure is clear and not unnecessarily complex
4. Structure and Context
- Question format is familiar to students
- Real-world contexts are meaningful and relevant
- Information provided is necessary and not distracting
5. Answer Choices
- One clearly correct answer that experts would agree upon
- Distractors (wrong answers) represent common misconceptions or errors
- No “throwaway” options that are obviously incorrect
6. Visuals
- All graphs, pictures, and diagrams are clear and necessary
- Visual elements support the question rather than confuse it
- Information in visuals is accurate and up-to-date
7. Data Sources
- Any data, statistics, or factual information used is accurate and current
- Sources are reliable and appropriate for the educational context
- Information supports the learning objective being measured
How Statistical and Validity Evidence Work Together
The TEA Technical Digest emphasizes that strong assessments need both statistical evidence (like P-values and correlations) and content evidence (like the seven criteria above). OnTarget brings these together by:
- Flagging statistical problems that might indicate content issues
- Highlighting questions that need both statistical and content review
- Providing space for notes about validity concerns you discover
- Tracking your decisions about question revisions or removal
For example, a question with poor discrimination might have statistical issues, but the real problem could be unclear wording, biased content, or confusing answer choices. OnTarget helps you investigate both the numbers and the content quality.
What OnTarget Shows You
When you run the analysis, you’ll see:
- Each question’s difficulty level and quality rating
- Which questions need attention
- Space to record what action you’ll take for each question
- Room for notes about specific concerns or plans
Taking Action: What To Do Next
For Individual Questions
- Keep: Questions in the sweet spot with good quality
- Revise: Questions with potential but need improvement
- Remove: Questions that aren’t working and can’t be easily fixed
For Your Assessment Overall
- Look for patterns – are most questions too easy or too hard?
- Check if your assessment covers all important learning objectives
- Consider the balance between different difficulty levels
For Your Teaching
- Questions most students missed might indicate areas needing re-teaching
- Very easy questions might show areas where students are ready to move on
- Use the data to adjust your instruction and future assessments
Why This Matters for Student Learning and TEA Standards
Quality assessments help you:
- Accurately measure what students have learned according to TEKS standards
- Identify gaps in understanding that need attention before state assessments
- Build student confidence with appropriately challenging questions that mirror state test quality
- Make informed decisions about pacing and re-teaching based on reliable data
- Prepare students for STAAR and other state assessments through high-quality practice
Connection to State Assessment Preparation
The TEA Technical Digest notes that all state assessments undergo rigorous statistical and content review using these same principles. When you use OnTarget to improve your classroom assessments, you’re:
- Applying the same standards used for state tests to your classroom assessments
- Giving students practice with high-quality questions that function like state assessment items
- Building assessment literacy that transfers to standardized testing situations
- Ensuring your data accurately reflects student readiness for state assessments
Meeting Professional Standards
Using OnTarget’s question analysis helps you meet the professional expectation that classroom assessments provide valid, reliable information about student learning. The TEA Technical Digest emphasizes that educators should use multiple sources of validity evidence – exactly what OnTarget provides through its combined statistical and content analysis.
Getting Started
1. Upload your assessment data to OnTarget
2. Review the analysis results – start with questions flagged as problematic
3. Make notes about what you observe
4. Plan revisions for questions that need attention
5. Keep track of which questions work well for future use
Remember: The goal isn’t to have all questions in the perfect range, but to understand what each question is telling you about student learning and to continuously improve your assessments using the same professional standards applied to state assessments.
The TEA Technical Digest reminds us that assessment is an ongoing process of gathering evidence about student learning. OnTarget gives you the tools and data you need to make thoughtful, student-centered decisions that align with state standards and professional best practices.
By using OnTarget’s question analysis, you’re not just improving your tests – you’re joining the professional community of educators who use evidence-based practices to ensure every assessment serves student learning effectively.