eLearning Testing: Why EdTech Products Fail and How to Test Them Properly
The short answer: eLearning products carry quality obligations that generic software products don't. When a quiz engine miscalculates a pass mark, a learner fails a certification they should have passed. When SCORM completion status doesn't report correctly, a compliance record shows incomplete when the learner finished. When an LMS times out under peak load during a mandatory training deadline, thousands of employees can't complete required training. These aren't abstract software failures — they have real consequences. Testing eLearning products properly requires understanding the domain, not just the technology.
Why eLearning Testing Is Different
The primary difference between eLearning testing and general software testing is consequence. A broken button on a marketing website is annoying. A broken submit button on a professional certification assessment has legal and career implications. A SCORM package that doesn't report completion to an LMS means a learner's compliance record is inaccurate — which in regulated industries (financial services, healthcare, nuclear, aviation) is a regulatory issue.
The second difference is the ecosystem complexity. eLearning content runs inside an LMS, delivered via iframe, communicating with the LMS via a JavaScript API (SCORM or xAPI). The content has no direct control over the environment it runs in — browser, LMS platform, network conditions, device type — and must work correctly in all of them. Testing a SCORM package in isolation tells you almost nothing about how it will behave in a specific LMS on a specific device in a specific browser.
The Five Layers of eLearning Testing
1. SCORM & xAPI Protocol Testing
SCORM (Sharable Content Object Reference Model) is the most widely used standard for communication between eLearning content and an LMS. SCORM 1.2 and SCORM 2004 are both in active use; xAPI (Experience API, also called Tin Can) is the more modern successor used by newer platforms.
SCORM failures are the most consequential category of eLearning bugs. Common failure modes:
Completion status not reporting. The content finishes but the LMS still shows "incomplete". This is the single most reported eLearning bug. It can be caused by the content not calling LMSFinish() correctly, a race condition between the final data write and the terminate call, or an LMS-side timeout that drops the connection before the final status is written.
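A minimal sketch of a defensive termination sequence, assuming a SCORM 1.2 API object has already been located (the `api` declaration below is a stand-in for the real window.API handle): the final status is written and committed before LMSFinish(), and the finish call is guarded so it runs exactly once.

```ts
// Sketch: a defensive SCORM 1.2 shutdown sequence. `api` stands for the
// LMS-provided API object (window.API) found via the standard discovery walk.
declare const api: {
  LMSSetValue(element: string, value: string): string;
  LMSCommit(param: string): string;
  LMSFinish(param: string): string;
};

let finished = false;

function finishModule(passed: boolean): void {
  if (finished) return; // LMSFinish must run exactly once; a second call drops data on some LMSs
  finished = true;

  // Write the final status first, then commit, so the status is persisted
  // even if the terminate call is interrupted by a tab close or LMS timeout.
  api.LMSSetValue('cmi.core.lesson_status', passed ? 'passed' : 'failed');
  api.LMSCommit('');

  // Terminate the session only after the data is committed.
  api.LMSFinish('');
}
```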
Score not transmitting. The learner completes a quiz with 85% but the LMS records 0 or nothing. Often caused by writing to the wrong data model element (the raw score lives at cmi.core.score.raw in SCORM 1.2 but at cmi.score.raw in SCORM 2004), or by incorrect scaled score formatting in SCORM 2004: cmi.score.scaled must be a decimal between -1 and 1, not a percentage.
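A sketch of version-aware score reporting; `api12` and `api2004` stand for the located LMS API handles (window.API for SCORM 1.2, window.API_1484_11 for SCORM 2004), and locating them is omitted here.

```ts
// Sketch: report a quiz score to the correct data model element per version.
declare const api12: { LMSSetValue(el: string, v: string): string } | undefined;
declare const api2004: { SetValue(el: string, v: string): string };

function reportScore(raw: number, max: number): void {
  if (typeof api12 !== 'undefined') {
    // SCORM 1.2: raw score as a string, conventionally on a 0-100 scale.
    api12.LMSSetValue('cmi.core.score.raw', String(raw));
    api12.LMSSetValue('cmi.core.score.max', String(max));
  } else {
    // SCORM 2004: different element names, plus a scaled score that must be
    // a decimal in [-1, 1], not a percentage.
    api2004.SetValue('cmi.score.raw', String(raw));
    api2004.SetValue('cmi.score.max', String(max));
    api2004.SetValue('cmi.score.scaled', (raw / max).toFixed(4));
  }
}
```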
Suspend data corruption on resume. SCORM suspend data stores the learner's progress so they can resume mid-module. Corruption on resume — either losing all progress or resuming to the wrong point — is typically caused by suspend data exceeding the LMS's character limit (SCORM 1.2 limits suspend data to 4096 characters; some LMS platforms enforce lower limits).
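One way to catch this during development is to guard every suspend data write against the limit rather than letting the LMS truncate silently. A sketch, with the 4000-character margin (below the 4096 spec limit) as an assumption to absorb stricter LMS platforms:

```ts
// Sketch: guard suspend data writes against the SCORM 1.2 character limit.
// `api` stands for the located SCORM 1.2 API object.
declare const api: {
  LMSSetValue(element: string, value: string): string;
  LMSCommit(param: string): string;
};

const SUSPEND_LIMIT = 4000; // margin under the 4096-char SCORM 1.2 spec limit

function writeSuspendData(state: object): boolean {
  const serialised = JSON.stringify(state);
  if (serialised.length > SUSPEND_LIMIT) {
    // Fail loudly in testing; silent truncation is what corrupts resume.
    console.error(`suspend_data is ${serialised.length} chars, limit ${SUSPEND_LIMIT}`);
    return false;
  }
  api.LMSSetValue('cmi.suspend_data', serialised);
  // LMSCommit returns the string "true" or "false" in SCORM 1.2.
  return api.LMSCommit('') === 'true';
}
```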
Completion triggering incorrectly. Content reports completion before the learner has actually finished — typically caused by a completion trigger firing on module load rather than module completion. This is particularly common in branching scenarios where multiple completion paths exist.
xAPI statement delivery failures. xAPI statements are sent as HTTP requests to a Learning Record Store (LRS). In network-constrained environments or under LRS load, statement delivery can fail silently — leaving learner records incomplete without any visible error.
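A sketch of statement delivery with explicit retries and a loud final failure, instead of fire-and-forget. The LRS endpoint and credentials are placeholders; the version header is required by the xAPI specification.

```ts
// Sketch: xAPI statement delivery with retries and visible failure.
const LRS_ENDPOINT = 'https://lrs.example.com/xapi'; // hypothetical
const AUTH = 'Basic ' + btoa('lrs-key:lrs-secret');  // hypothetical credentials

async function sendStatement(statement: object, retries = 3): Promise<void> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetch(`${LRS_ENDPOINT}/statements`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-Experience-API-Version': '1.0.3',
          Authorization: AUTH,
        },
        body: JSON.stringify(statement),
      });
      if (res.ok) return;
      console.warn(`LRS rejected statement: HTTP ${res.status}`);
    } catch (err) {
      console.warn(`delivery attempt ${attempt} failed`, err);
    }
    // Back off before retrying; silent loss is the failure mode to avoid.
    await new Promise((r) => setTimeout(r, 1000 * attempt));
  }
  throw new Error('xAPI statement delivery failed after retries');
}
```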
Testing approach: Use the ADL SCORM conformance test suite for baseline protocol validation; Rustici's SCORM Cloud (formerly Test Track) is a practical complement for quick multi-environment checks. Then test in your actual target LMS environments — Moodle, Canvas, Blackboard, TalentLMS — because each has its own SCORM handling quirks that conformance tools don't replicate. Test the resume scenario explicitly by closing the browser mid-module and reopening.
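The resume check automates well. A Playwright sketch, in which the URL, selectors, and resume marker are all hypothetical and need adapting to your LMS and content:

```ts
// Sketch: automated resume-from-suspend check with Playwright.
import { test, expect } from '@playwright/test';

test('progress survives closing the browser mid-module', async ({ browser }) => {
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('https://lms.example.com/course/module-1'); // hypothetical
  await page.getByRole('button', { name: 'Next' }).click();   // advance one slide

  // Preserve the LMS session cookies, then simulate closing the browser.
  const state = await context.storageState();
  await context.close();

  // Reopen with the saved session and relaunch the module.
  const resumed = await browser.newContext({ storageState: state });
  const page2 = await resumed.newPage();
  await page2.goto('https://lms.example.com/course/module-1');

  // The module should resume where we left off, not restart from slide 1.
  await expect(page2.getByText('Slide 2 of 10')).toBeVisible(); // hypothetical marker
  await resumed.close();
});
```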
2. LMS Integration Testing
Every LMS handles SCORM and content delivery slightly differently. What works perfectly in Moodle may fail in Canvas; what works in Canvas may fail in Blackboard. This is not a theoretical concern — we have seen content that passes SCORM conformance testing and works flawlessly in one LMS exhibit completion failures, display issues, and data reporting errors in a second LMS that the client also deploys to.
Browser and iframe behaviour: LMS platforms deliver content inside an iframe. Browser security policies (particularly Content Security Policy headers, same-origin restrictions, and third-party cookie blocking) affect how SCORM communication works. Safari's Intelligent Tracking Prevention and Chrome's third-party cookie deprecation have both caused SCORM failures in LMS platforms that weren't updated to handle them.
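The conventional SCORM API discovery walk shows why these restrictions bite: content climbs the window hierarchy to find the API object, and any cross-origin boundary on the way throws a SecurityError. A sketch of the standard algorithm:

```ts
// Sketch: the standard SCORM 1.2 API discovery walk. Content inside an
// iframe climbs parent windows (and checks window.opener) to find the LMS
// API; cross-origin frames throw on access, which is exactly how CSP and
// same-origin restrictions break SCORM communication in practice.
function findAPI(win: Window): unknown | null {
  let w: Window = win;
  let hops = 0;
  while (hops < 10) {
    try {
      if ((w as any).API) return (w as any).API; // SCORM 1.2 API object
    } catch {
      return null; // cross-origin frame: access throws a SecurityError
    }
    if (w.parent === w) break; // reached the top window
    w = w.parent;
    hops++;
  }
  // Fall back to the opener if the content was launched in a popup.
  try {
    if (win.opener && (win.opener as any).API) return (win.opener as any).API;
  } catch { /* cross-origin opener */ }
  return null;
}
```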
LMS-specific quirks to test for:
- Moodle: SCORM completion mode settings ("SCORM defines" vs "Moodle defines") affect whether completion is driven by the content or by LMS activity completion settings. Both modes must be tested.
- Canvas: Canvas's SCORM player has specific behaviour around window.open calls and popup handling that differs from other platforms.
- Blackboard: Legacy Blackboard Learn (Original experience) has different iframe handling from the Ultra experience. Content that works in one may not work in the other.
- Cornerstone / SAP SuccessFactors: Enterprise LMS platforms often have stricter CSP headers and more aggressive session timeout settings.
3. Assessment & Quiz Logic Testing
Assessment logic is where eLearning bugs have the most direct learner impact. Key test areas:
Score calculation: Test every scoring edge case — minimum possible score, maximum possible score, pass threshold boundary values (one mark below pass, exactly at pass, one mark above pass). Test partial credit schemes, penalty marking, and weighted question scoring. Verify that the score reported to the LMS matches what the learner sees on the results screen.
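These boundary cases are cheap to automate. A minimal sketch, with `isPass` as a hypothetical stand-in for the assessment engine's pass check; whether the comparison is >= or > is exactly the kind of spec detail to pin down.

```ts
// Sketch: boundary-value tests around an 80% pass mark.
function isPass(score: number, threshold: number): boolean {
  return score >= threshold; // confirm with the spec: >= vs > matters here
}

// One below, exactly at, one above, plus the minimum and maximum scores.
const cases: Array<[number, boolean]> = [
  [0, false],   // minimum possible score
  [79, false],  // one mark below pass
  [80, true],   // exactly at pass: the classic off-by-one site
  [81, true],   // one mark above pass
  [100, true],  // maximum possible score
];

for (const [score, expected] of cases) {
  console.assert(
    isPass(score, 80) === expected,
    `isPass(${score}, 80) should be ${expected}`,
  );
}
```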
Question randomisation: If questions are drawn from a pool, verify that the randomisation algorithm produces valid question sets — no duplicate questions, no invalid combinations (where answer options from different question variants are mixed), and that all required question types appear with appropriate frequency.
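A property-style check, drawing many times and asserting validity, catches these failures without enumerating combinations. A runnable sketch with a hypothetical shuffle-based draw standing in for the engine's randomiser:

```ts
// Hypothetical pool draw: Fisher-Yates shuffle, take the first `count` IDs.
function drawQuestions(pool: string[], count: number): string[] {
  const a = [...pool];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a.slice(0, count);
}

// Property check: many draws, each with no duplicates and the right size.
const pool = Array.from({ length: 50 }, (_, i) => `q${i + 1}`);
for (let run = 0; run < 1000; run++) {
  const drawn = drawQuestions(pool, 10);
  console.assert(new Set(drawn).size === drawn.length, 'duplicate question drawn');
  console.assert(drawn.length === 10, `draw returned ${drawn.length} questions`);
}
```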
Branching and conditional logic: Adaptive content that branches based on learner responses must be tested across all significant paths. Map the branching logic and test each path explicitly — particularly paths that reach completion through unconventional routes, as these most commonly expose completion trigger failures.
Retry and reattempt behaviour: What happens when a learner retakes a quiz? Does the LMS record the latest attempt, the highest attempt, or the first attempt? Does the content correctly reset state between attempts? Does retry logic correctly lock after the maximum attempt count is reached?
Timed assessments: For timed quizzes, test what happens when the timer expires — are answers submitted automatically, or do they need manual submission? What happens if the browser tab is hidden or the device sleeps? Does the timer continue running in the background?
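Timers anchored to a deadline timestamp survive tab throttling and device sleep; interval-based countdowns silently drift or pause. A sketch, with the time limit and submission hook as hypothetical stand-ins:

```ts
// Sketch: a wall-clock-based assessment timer that survives tab hiding.
const LIMIT_MS = 20 * 60 * 1000;       // hypothetical 20-minute limit
const deadline = Date.now() + LIMIT_MS; // the deadline never moves

function remainingMs(): number {
  return Math.max(0, deadline - Date.now());
}

const tick = setInterval(() => {
  if (remainingMs() === 0) {
    clearInterval(tick);
    submitAnswers(); // auto-submit on expiry; verify this matches the spec
  }
}, 1000);

// Re-check immediately when the tab becomes visible again: the interval may
// have been throttled while hidden, but the deadline has not moved.
document.addEventListener('visibilitychange', () => {
  if (!document.hidden && remainingMs() === 0) {
    clearInterval(tick);
    submitAnswers();
  }
});

function submitAnswers(): void {
  // hypothetical submission hook
}
```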
4. Cross-Device and Browser Compatibility
eLearning users access content on a far wider range of devices than enterprise software users. Testing must cover:
School and university Chromebooks: Managed Chromebooks with enterprise policies have restricted browser capabilities — extensions may be blocked, third-party cookies may be disabled, popup windows may not be allowed. Content that works on a personal laptop may fail on a managed school Chromebook.
iPads: A significant portion of K-12 eLearning is delivered on iPads. iOS Safari has specific behaviour around localStorage (used by some SCORM players), audio autoplay restrictions, and fullscreen video. Test on physical iPad devices — simulators do not accurately replicate Safari's restrictions.
Older corporate desktop browsers: Corporate environments often run on locked-down versions of Windows with older browsers or managed Chrome installations. Verify minimum supported browser versions and test on them.
Low-bandwidth environments: Learners in rural areas, on mobile data, or in bandwidth-constrained corporate networks experience slower content load times. Test that content loads gracefully on slow connections — ideally with network throttling enabled — and that video content adapts bitrate or shows appropriate buffering states rather than failing silently.
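In Playwright, Chromium's DevTools Protocol can approximate a slow connection directly in a test; the throughput figures, URL, and loading-state marker below are illustrative, and CDP throttling works in Chromium only.

```ts
// Sketch: network throttling via the Chrome DevTools Protocol in Playwright.
import { test, expect } from '@playwright/test';

test('course loads usably on a slow connection', async ({ page }) => {
  const cdp = await page.context().newCDPSession(page);
  await cdp.send('Network.enable');
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,                         // ms round-trip
    downloadThroughput: (400 * 1024) / 8, // ~400 kbit/s in bytes/sec
    uploadThroughput: (200 * 1024) / 8,
  });

  await page.goto('https://lms.example.com/course/module-1'); // hypothetical
  // The module should show a loading state, not a blank page or an error.
  await expect(page.getByText(/loading/i)).toBeVisible(); // hypothetical marker
});
```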
Responsive and adaptive layouts: eLearning content designed for desktop often renders poorly on mobile or tablet screen sizes. Verify that all content elements — text, images, interactive elements, assessment questions — are readable and operable at mobile viewport sizes if mobile access is expected.
5. Accessibility for Learners with Disabilities
Educational content has some of the strongest accessibility obligations of any digital product category. Publicly funded educational institutions in the UK must comply with the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018, which require WCAG 2.1 AA compliance (with monitoring moving to WCAG 2.2). The European Accessibility Act extends these requirements to commercial EdTech products.
Beyond legal compliance, accessibility in eLearning is a learner inclusion issue. A screen reader user who can't navigate your course assessment has been excluded from the learning outcome — that's a failure at every level.
Key accessibility test areas for eLearning:
Screen reader navigation: Every slide, page, and interactive element must be navigable by screen reader. Quiz questions must be presented in logical order with clearly associated answer options. Feedback messages (correct/incorrect) must be announced. Course navigation controls must have accessible names.
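For the feedback announcements specifically, an ARIA live region is the standard mechanism; a minimal sketch:

```ts
// Sketch: announcing quiz feedback to screen readers via an ARIA live
// region. Updating textContent on an existing aria-live element triggers
// an announcement; injecting a brand-new element often does not.
const feedbackRegion = document.createElement('div');
feedbackRegion.setAttribute('role', 'status');      // implies aria-live="polite"
feedbackRegion.setAttribute('aria-live', 'polite'); // explicit for older ATs
document.body.appendChild(feedbackRegion);

function announceFeedback(correct: boolean, explanation: string): void {
  // Clear first so repeated identical messages are re-announced.
  feedbackRegion.textContent = '';
  setTimeout(() => {
    feedbackRegion.textContent =
      (correct ? 'Correct. ' : 'Incorrect. ') + explanation;
  }, 50);
}
```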
Video and audio content: All video must have accurate captions. All audio must have a transcript. Audio descriptions are required for videos where important visual information is not conveyed in the dialogue. This includes instructional animations and screen capture tutorials where visual actions are central to the content.
Keyboard operability: Every interaction — slide navigation, quiz answering, drag-and-drop activities, hotspot interactions — must be operable via keyboard. Custom interaction types (matching exercises, sorting activities, clickable diagrams) are the most common keyboard accessibility failure points in eLearning content.
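Parts of this are automatable. A Playwright sketch combining a tab-order sanity check with an axe-core scan via @axe-core/playwright; the URL and tab count are assumptions, and automated checks complement rather than replace manual screen reader testing.

```ts
// Sketch: keyboard-reachability check plus an automated WCAG scan.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('quiz page is keyboard-operable and passes WCAG 2.1 AA checks', async ({ page }) => {
  await page.goto('https://lms.example.com/course/quiz-1'); // hypothetical

  // Tab through the page and confirm focus actually moves; a focus trap
  // or unreachable control shows up as focus staying on one element.
  const seen = new Set<string>();
  for (let i = 0; i < 30; i++) {
    await page.keyboard.press('Tab');
    const focused = await page.evaluate(
      () => document.activeElement?.outerHTML.slice(0, 80) ?? 'none');
    seen.add(focused);
  }
  expect(seen.size).toBeGreaterThan(1);

  // Automated checks catch a subset of WCAG issues.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();
  expect(results.violations).toEqual([]);
});
```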
Cognitive accessibility: eLearning content reaches learners with dyslexia, ADHD, and learning differences at significant scale. Plain language, consistent navigation, clear progress indicators, and flexible time limits all reduce cognitive load for these learners — and improve the experience for everyone.
Performance Testing for eLearning Platforms
Synchronous learning events — scheduled assessment releases, live webinar sessions, mandatory compliance training deadlines — create traffic spikes that are qualitatively different from steady-state usage.
A corporate LMS with 10,000 employees where a mandatory compliance training deadline falls on Friday at 5pm will experience a significant concurrent user spike in the final hours before deadline. An LMS that handles 200 concurrent users comfortably may exhibit session failures, data loss, and queue overflow at 2,000 concurrent users — a scenario that is entirely predictable but often untested.
Key performance test scenarios (a minimal load sketch follows the list):
- Concurrent session establishment at peak load (thousands of simultaneous logins)
- Assessment submission throughput — the number of quiz result writes per second the LMS database can handle
- Video streaming delivery under concurrent viewer load
- SCORM data write latency under high concurrency (slow writes cause suspend data timeouts)
- Recovery behaviour when capacity is exceeded — does the platform queue requests gracefully or drop data?
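A dedicated load tool (k6, Gatling, Locust) is the right instrument here, but even a minimal script makes the first scenario concrete. A sketch against a hypothetical login endpoint, with credentials and user count as placeholders; run it from several machines for realistic volume.

```ts
// Sketch: fire N simultaneous logins and count failures at peak.
const LOGIN_URL = 'https://lms.example.com/api/login'; // hypothetical
const CONCURRENT_USERS = 2000;

async function loginOnce(i: number): Promise<boolean> {
  try {
    const res = await fetch(LOGIN_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ user: `loadtest${i}`, password: 'secret' }),
    });
    return res.ok;
  } catch {
    return false; // connection reset or timeout under load
  }
}

async function spike(): Promise<void> {
  const results = await Promise.all(
    Array.from({ length: CONCURRENT_USERS }, (_, i) => loginOnce(i)),
  );
  const failures = results.filter((ok) => !ok).length;
  console.log(`${failures}/${CONCURRENT_USERS} logins failed at peak`);
}

spike();
```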
Common Mistakes We See in eLearning Testing
Testing only in one LMS. If your content deploys to multiple LMS platforms, it must be tested in each one. SCORM behaviour is platform-specific enough that single-LMS testing misses a significant category of real-world failures.
Skipping the resume scenario. Resume-from-suspend is one of the most commonly broken SCORM features and one of the least tested. Always test closing the browser mid-module and reopening to verify that progress is correctly restored.
Not testing assessment edge cases. Passing score thresholds, maximum and minimum possible scores, and reattempt behaviour are the assessment scenarios most likely to cause learner-impacting bugs — and they are routinely under-tested because they require deliberate scenario construction.
Ignoring managed device restrictions. Content tested on a developer's MacBook will not necessarily work on a school-issue Chromebook with managed policies. Test on representative devices for your actual user base.
Treating accessibility as optional for EdTech. Accessibility compliance is a legal requirement for publicly funded educational content. More importantly, it determines whether learners with disabilities can actually access the learning you've built.
Key Takeaways
- SCORM completion failures are the most consequential eLearning bug category — test resume, completion trigger, and score reporting explicitly
- Test in your actual target LMS environments, not just a conformance checker — LMS-specific behaviour causes real-world failures that protocol testing misses
- Assessment scoring edge cases (pass threshold, partial credit, retry logic) must be tested explicitly — incorrect scoring has direct learner and regulatory consequences
- Managed device testing (Chromebooks, iPads, corporate browsers) is essential — content that works on a development laptop may fail on a managed school device
- Performance testing for synchronous events (assessment deadlines, mandatory training) must simulate realistic concurrent load, not single-user scenarios
- Accessibility is a legal requirement for publicly funded educational content and a learner inclusion issue for all EdTech products