Authoring tool reference · Lectora
Lectora captions: Section 508 / VPAT-grade authoring for federal-contractor catalogues
Lectora (now ELB Learning's Lectora, with Lectora Online and Lectora Desktop variants) is the authoring tool with the longest track record at federal-civilian agencies, Department of Defense contractors, federal-grant-funded universities, military training commands, the intelligence community, and federal-contractor financial-services firms. Where Articulate Storyline is the dominant corporate-L&D authoring tool and Adobe Captivate is the dominant software-simulation tool, Lectora is the dominant tool for catalogues where the Section 508 / VPAT (Voluntary Product Accessibility Template) procurement bar is the entry condition, not an afterthought. Lectora's accessibility-first design — semantic HTML5 publish, manual focus-order control, explicit ARIA roles, Section-508-mode templates — pairs naturally with the captioning question. The captioning surface is conventional (page-level closed captions on audio, video sidecar SRT/VTT, SCORM 1.2 / SCORM 2004 / xAPI / cmi5 / HTML5 publish). What's distinctive is the audit lens: federal-contractor catalogues face VPAT-evidence review on top of WCAG conformance, and the captioning-provenance log is part of the VPAT itself. Glossary-biased upstream captioning is what produces caption files clean enough to satisfy that lens.
TL;DR
Lectora supports captions in two distinct surfaces: (1) page-level closed captions on page audio, configured via the Audio object's properties; (2) video sidecar caption track on imported video objects, with SRT or WebVTT supported. Lectora's accessibility-first design — Section 508 templates, semantic HTML5 publish, focus-order control, explicit ARIA roles, alt-text inheritance — sits alongside the captioning surface. SCORM 1.2 / SCORM 2004 (2nd / 3rd / 4th edition) / xAPI / cmi5 / HTML5 publish carries captions inside the artefact for ingestion into TalentLMS, Docebo, Absorb, Cornerstone OnDemand, and federal-contractor LMSes (Saba, Plateau, JKO, NIH LMS, FedTalent, GSA Online University). The captioning failure mode is the same as elsewhere — generic ASR mangles regulatory citations, military-acronym registers, intelligence-community proper nouns, federal-program names — and the upstream answer is glossary-biased captioning before the audio enters the Lectora project. The Lectora-specific add: VPAT evidence requires per-asset captioning provenance metadata that lives in the LMS's per-asset metadata schema, and the federal-contractor catalogue's controlled vocabulary is denser than commercial-segment vocabularies.
What Lectora is, and where in the workflow captioning lands
Lectora is a long-running e-learning authoring tool now distributed by ELB Learning under two variants: Lectora Online (cloud-authored) and Lectora Desktop (Windows desktop). The captioning-relevant characteristics:
- Page-and-object model. Lectora's authoring metaphor is a page (analogous to a slide) containing objects (text, audio, video, action triggers, form elements). Audio objects attach to pages; closed captions attach to audio objects.
- Section 508 mode and VPAT-grade templates. Lectora ships with Section-508-mode authoring templates and an accessibility-first design ethos: focus-order control, ARIA roles, semantic HTML5 publish, alt-text inheritance from object metadata, keyboard-accessible interactions by default.
- HTML5 publish target with semantic markup. Lectora's HTML5 output is semantic-friendly, which matters for screen-reader interaction with caption tracks. The published artefact arrives closer to accessible-by-default than typical competitor output.
- Variables, actions, conditional logic. Lectora's branching-and-conditional authoring is more powerful than Storyline's trigger system at the cost of authoring complexity. Audio objects fired conditionally need captioning all the same.
- Test / Survey / Form objects. Native question-and-feedback authoring; audio feedback on questions accepts caption tracks.
- Multi-publish target. SCORM 1.2, SCORM 2004 (2nd / 3rd / 4th), xAPI (Tin Can), cmi5, AICC, HTML5, plain Web; the captions ride inside the publish artefact.
- Lectora ReviewLink (collaborative review). The reviewer-comment workflow lives inside the tooling; useful for the SME / clinical / legal-compliance reviewer pass on glossary-applied terms.
Captioning lands at three points: (1) page-level audio objects (most common — narration); (2) video objects (imported video on a page); (3) audio feedback on Test / Survey / Form objects.
The Lectora caption-upload mechanic
- Closed Captions on the Audio object. The Audio object's Properties panel exposes a Closed Captioning configuration: per-line caption entry with start-time, end-time, text. Lines can be typed by author, pasted from a script, or imported from external SRT / VTT.
- SRT / WebVTT import on Audio object. The Closed Captioning configuration accepts caption-track import from external SRT and WebVTT files. This is the path the glossary-biased workflow uses — clean SRT delivered upstream, imported into the Audio object.
- Video object caption track. The Video object accepts a sidecar caption track in SRT or WebVTT. The Video object's player surfaces the caption-toggle control to the learner.
- Test / Survey / Form audio-feedback captions. Feedback audio on questions accepts a caption track at the audio-object level.
- HTML5 publish-time embedding. SCORM 1.2 / SCORM 2004 / xAPI / cmi5 / AICC / HTML5 / Web publish carries the caption tracks inside the publish artefact. Lectora's runtime player surfaces a closed-caption toggle to the learner; the per-course default-state setting is configured at publish.
- Section 508 template default behaviour. Lectora's Section-508 templates default the closed-caption toggle to on — a small but materially-correct default for federal-contractor catalogues.
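Before a clean SRT is handed to the Audio object's import, a pre-import sanity check on the file itself catches the timing defects that surface as mis-synced captions at publish. A minimal sketch of such a check — plain SRT parsing, not a Lectora API:

```python
import re
from datetime import timedelta

# Pre-import sanity check on an SRT sidecar: verify each cue has an
# index, a timing line, and text, and that cue timings are well-formed
# and non-overlapping. This inspects the file only; it does not touch
# Lectora's Closed Captioning import itself.
TIME = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _ts(stamp: str) -> timedelta:
    h, m, s, ms = map(int, TIME.match(stamp).groups())
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms)

def validate_srt(text: str) -> list[str]:
    """Return a list of problems found in an SRT caption file."""
    problems = []
    prev_end = timedelta(0)
    blocks = [b for b in text.strip().split("\n\n") if b.strip()]
    for i, block in enumerate(blocks, start=1):
        lines = block.splitlines()
        if len(lines) < 3:
            problems.append(f"cue {i}: missing index, timing, or text")
            continue
        start_s, _, end_s = lines[1].partition(" --> ")
        start, end = _ts(start_s.strip()), _ts(end_s.strip())
        if end <= start:
            problems.append(f"cue {i}: end time not after start time")
        if start < prev_end:
            problems.append(f"cue {i}: overlaps previous cue")
        prev_end = end
    return problems
```

Running the check before import means timing defects are fixed in the sidecar file, where they are cheap, rather than discovered line-by-line inside the Closed Captioning panel.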
The vocabulary surface in Lectora-authored content
Lectora's federal-contractor and military-training concentration produces a captioning vocabulary surface unlike that of any commercial-segment authoring tool:
- Federal-program names and acronyms. Department of Veterans Affairs (VA), Centers for Medicare & Medicaid Services (CMS), Office of Personnel Management (OPM), Department of Homeland Security (DHS), Defense Contract Audit Agency (DCAA), Defense Logistics Agency (DLA), Defense Information Systems Agency (DISA), Defense Counterintelligence and Security Agency (DCSA), General Services Administration (GSA), National Institutes of Health (NIH), and several hundred more. Generic ASR mangles every acronym.
- Military training commands and ranks. TRADOC, FORSCOM, NETC, NETCOM, AFMC, NAVAIR, MARCORSYSCOM, SOCOM, INDOPACOM, EUCOM, AFRICOM, NORTHCOM, SOUTHCOM, CENTCOM, STRATCOM, SPACECOM, TRANSCOM. Rank registers (E-1 through E-9, O-1 through O-10, plus equivalents across services). Unit registers (battalion, brigade, squadron, wing, group, division, corps).
- Intelligence-community vocabulary. ODNI, NSA, CIA, DIA, NGA, NRO, FBI Counterintelligence, plus the controlled-information markings (Confidential, Secret, Top Secret, SCI, SAP, FOUO, CUI). Caption-track accuracy on controlled-information markings is non-negotiable.
- Federal regulation citations. CFR / USC / DFARS / FAR clause numbers. DoD Manual 5200.01, NIST SP 800-171, NIST SP 800-53, RMF (Risk Management Framework), DCID 6/3, ICD 503, IRS Pub 1075, IRS Pub 4812, HIPAA Privacy Rule and Security Rule citations. Multi-paragraph citation blocks are common in compliance training.
- Acquisition-and-contract vocabulary. FAR Part references, DFARS clauses, TINA (Truth in Negotiations Act), CAS (Cost Accounting Standards), SF 1408, NDAA section references, FedRAMP authorisation levels (Low / Moderate / High / High+), CMMC maturity levels.
- Military weapons-systems and platform names. Aircraft (F-22, F-35, B-21, KC-46, MQ-9), naval (DDG, CG, CVN, SSN, SSBN), ground (M1A2 SEPv3, Stryker, JLTV, AMPV), missile (Patriot, THAAD, Aegis), space (GBSD, GPS III). The pattern of letter-and-number designators is exactly what generic ASR mangles.
- Cybersecurity vocabulary. NIST CSF, CISA Known Exploited Vulnerabilities, MITRE ATT&CK technique IDs, CVE numbers, CWE numbers, the Common Vulnerability Scoring System (CVSS).
- VA / CMS / NIH-specific clinical and program vocabulary. VA medical-center names, NIH institute names (NCI, NIAID, NHLBI, NICHD, NIDDK, NIMH, NINDS, etc.), CMS program names (Medicare Parts A/B/C/D, Medicaid waivers, MIPS, ACO programmes).
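In a glossary-biased workflow the bias is applied inside the captioning pass itself, but the shape of the mapping can be illustrated with a post-ASR correction sketch. The mis-hearing pairs below are invented examples, not a real correction table:

```python
import re

# Illustrative only: a tiny post-ASR glossary pass mapping common
# mis-hearings of federal acronyms and platform designators back to
# canonical glossary forms. Patterns and pairings are hypothetical.
GLOSSARY = {
    r"\bsee you eye\b": "CUI",
    r"\bef thirty five\b": "F-35",
    r"\bnist s p eight hundred one seventy one\b": "NIST SP 800-171",
    r"\bdee far s\b": "DFARS",
}

def apply_glossary(caption_line: str) -> str:
    """Replace known ASR mis-hearings with canonical glossary terms."""
    out = caption_line
    for pattern, canonical in GLOSSARY.items():
        out = re.sub(pattern, canonical, out, flags=re.IGNORECASE)
    return out
```

For example, `apply_glossary("Handling see you eye material")` yields `"Handling CUI material"`. The real leverage comes from biasing the recogniser with these terms before decoding, not from after-the-fact substitution, which cannot recover a designator the ASR never emitted.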
The Lectora-specific failure modes
The five caption-related findings most likely to surface during a federal-contractor accessibility audit, an OFCCP review, a VPAT evidence-of-conformance review, a DoD Section 508 procurement-evidence review, or a federal-civilian-agency Section 508 compliance check on a Lectora-authored catalogue:
- Auto-generated ASR captions on federal-program vocabulary. Lectora's optional ASR-assisted caption generation produces generic-ASR-grade output on the federal-program acronym register, controlled-information markings, military-rank registers, and weapons-systems designators. The 508 evaluator's spot check against a randomly-selected page will catch this within the first audit hour. Fix: glossary-biased upstream captioning, with the clean SRT imported into the Audio object.
- Section-508 template not used. Lectora's accessibility-first design requires the author to start from a 508-mode template; starting from a non-508 template means the focus-order, ARIA roles, and caption-toggle defaults are not pre-configured. The 508 evaluator will flag the default-state of the closed-caption toggle. Fix: per-project verification at publish that the 508-mode template is the source.
- VPAT entry mismatch with as-built behaviour. A VPAT (once completed, formally an Accessibility Conformance Report / ACR) is the procurement-evidence document. A VPAT that asserts WCAG SC 1.2.2 conformance for prerecorded captions while the as-built artefact has caption-track gaps is a procurement-evidence failure — sometimes a contract-vehicle eligibility issue. Fix: VPAT entries are populated from the captioning-provenance log per asset, not estimated; run a gap-detection step before VPAT sign-off.
- Audio feedback on Test / Survey / Form objects uncaptioned. Question-feedback audio is captioned less consistently than narration. WCAG 2.1 SC 1.2.2 covers all prerecorded media. Fix: catalogue audit step that enumerates every audio-bearing object per project.
- SCORM 1.2 fall-back where the federal LMS doesn't support SCORM 2004 caption metadata. Some legacy federal LMSes (older Saba builds, JKO legacy modes, agency-specific LMS deployments) require a SCORM 1.2 publish target, under which some of the modern caption-metadata behaviour is lost. Fix: per-LMS verification step on a sample course; document the publish-target choice in the captioning-provenance log.
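The catalogue-audit and gap-detection steps named in the fixes above can be approximated with a script that walks a published-course folder and reports media files lacking a sibling caption file. The flat media/caption layout here is an assumption for illustration, not Lectora's documented publish structure:

```python
from pathlib import Path

# Gap-detection sketch: report media files in a published-course
# folder that have no .srt/.vtt caption file sharing the same stem
# in the same directory. Assumes captions sit beside their media.
MEDIA = {".mp3", ".mp4", ".m4a", ".wav", ".webm"}
CAPS = {".srt", ".vtt"}

def find_caption_gaps(course_root: str) -> list[str]:
    """Return relative paths of media files with no sidecar captions."""
    root = Path(course_root)
    gaps = []
    for media in sorted(root.rglob("*")):
        if media.suffix.lower() not in MEDIA:
            continue
        siblings = {p.suffix.lower() for p in media.parent.glob(media.stem + ".*")}
        if not (siblings & CAPS):
            gaps.append(str(media.relative_to(root)))
    return gaps
```

A non-empty result is the pre-sign-off signal: every path it prints is a prospective SC 1.2.2 finding, and the list doubles as the retrofit work queue.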
The glossary-biased workflow for Lectora-authored content
- Pull the customer's controlled vocabulary. Federal-program registers, military training-command and rank registers, intelligence-community vocabulary, federal-regulation citation registers (CFR / USC / DFARS / FAR clauses, NIST SP control IDs, MITRE ATT&CK technique IDs), acquisition-and-contract vocabulary, weapons-systems / platform names, agency-specific clinical or program vocabulary. The customer's controlled vocabulary is the highest-leverage glossary input; for federal-contractor catalogues, an agency-specific vocabulary register pulled from the agency's own publications is the second.
- Caption the narration audio before importing into Lectora. Generate clean SRT with the project glossary applied. Import into the Audio object's Closed Captioning configuration via the SRT-import path; per-line timings carry over.
- Caption the video objects. Same upstream workflow: clean SRT or WebVTT, imported as a sidecar caption track on the Video object.
- Caption the Test / Survey / Form audio-feedback objects. Audio-object-level caption track per question.
- SME / legal-compliance / DoD-FSO reviewer pass. Domain-expert review is non-negotiable on federal-contractor and DoD-grade catalogues. For catalogues that touch CUI / Confidential / Secret content, add an FSO-equivalent review of glossary-applied terms, verifying no spillage of controlled-information markings outside the marked segments. The amber-highlight UI shows source-line provenance.
- Publish-time verification. Section-508 template confirmed as source. Per-course closed-caption-toggle default-state set to on. SCORM 2004 4th edition or xAPI / cmi5 publish target preferred; SCORM 1.2 only where the federal LMS demands it.
- VPAT / ACR evidence packet. The captioning-provenance log per asset (caption source, glossary version, reviewer, review date, glossary term count, publish target, per-course CC default verified, 508-mode template confirmed) maps to VPAT entries for SC 1.2.2 (Captions, Prerecorded), SC 1.2.4 (Captions, Live — N/A for prerecorded), and SC 1.2.5 (Audio Description, Prerecorded — adjacent surface). The VPAT auditor's evidence request is the captioning-provenance log.
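One possible shape for the per-asset captioning-provenance record, using the evidence fields listed above; the Remarks-column rendering is illustrative, not a formal VPAT schema:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical per-asset provenance record. Field names follow the
# evidence list in the text; the VPAT Remarks rendering is one way
# to surface the record in an SC 1.2.2 row, not a mandated format.
@dataclass
class CaptionProvenance:
    asset_id: str
    caption_source: str        # e.g. "vendor SRT, glossary-biased"
    glossary_version: str
    glossary_term_count: int
    reviewer: str
    review_date: str           # ISO 8601
    publish_target: str        # e.g. "SCORM 2004 4th ed."
    cc_default_on: bool
    template_508_confirmed: bool

    def vpat_remark(self) -> str:
        """Render the record as a Remarks-column line for SC 1.2.2."""
        return (
            f"{self.asset_id}: captions from {self.caption_source}, "
            f"glossary v{self.glossary_version} ({self.glossary_term_count} terms), "
            f"reviewed by {self.reviewer} on {self.review_date}; "
            f"published {self.publish_target}, CC default "
            f"{'on' if self.cc_default_on else 'off'}, 508 template "
            f"{'confirmed' if self.template_508_confirmed else 'NOT confirmed'}."
        )
```

Serialising each record (e.g. `json.dumps(asdict(record))`) gives the machine-readable log the LMS metadata schema can hold, while `vpat_remark()` gives the human-readable line the ACR row can cite.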
Lectora-specific captioning RFP questions
Procurement teams running a captioning RFP for a Lectora-authored catalogue — usually a federal-contractor or DoD-grade procurement — will want to ask several Lectora-specific questions. From our captioning RFP template:
- SRT / WebVTT import compatibility with Lectora's Closed Captioning configuration. The vendor's caption-file output should import cleanly into Lectora's Audio-object Closed Captioning panel without manual line-by-line entry.
- Federal-program glossary support. Does the vendor handle the federal-program acronym register, military training-command and rank registers, intelligence-community vocabulary, and weapons-systems designators? Can the vendor ingest agency-specific vocabulary registers from publications such as NIH Style Guide, VA program publications, DoD glossaries (DoD Dictionary of Military and Associated Terms)?
- Controlled-information handling. For catalogues that touch CUI / Confidential / Secret content, what is the vendor's data-handling posture? Is the vendor's workforce US-citizen-only? Does the vendor's facility carry the appropriate clearance? The InfoSec lens is sharper for federal-contractor catalogues than for any commercial segment.
- VPAT / ACR evidence-packet support. Does the vendor produce VPAT / ACR-grade evidence per asset (caption source, glossary version, reviewer, review date, glossary term count, publish target, per-course CC default verified, 508-mode template confirmed)?
- SCORM 1.2 / SCORM 2004 / xAPI / cmi5 / AICC publish-target verification. Federal LMSes vary; the vendor's QA should include verification on the customer's specific LMS publish target.
- Section-508 template adherence. Does the vendor's QA include verification that the 508-mode template is the source, with focus-order and ARIA roles intact?
How Lectora captions intersect Section 508, ADA Title II, EAA, OFCCP, and the federal LMS landscape
Lectora-authored content typically faces several accessibility regimes simultaneously, with the federal-contractor segment carrying the densest audit calendar:
- Section 508 — the dominant regime for federal-contractor and federal-civilian-agency Lectora catalogues. The 2017 Refresh raised the technical bar to WCAG 2.0 AA. Lectora's accessibility-first design is built for this regime.
- Section 504 — federal-financial-assistance-recipient Lectora catalogues (universities, hospitals, large non-profits with federal grants) face the functional-access standard.
- ADA Title II — state and local government Lectora-authored mandatory training carries the 2026-04-24 WCAG 2.1 AA bar.
- European Accessibility Act — EU-deployed Lectora catalogues (federal-contractor B2C surfaces deployed in EU markets) face EN 301 549 / WCAG 2.1 AA. See our EAA Q3 2026 enforcement landscape post.
- VPAT / ACR procurement-evidence review — federal contracts using GSA Schedule 70 (now Multiple Award Schedule IT category) require VPAT/ACR for IT products and services. The captioning-provenance log per asset feeds the VPAT entry for SC 1.2.2.
- OFCCP audits — federal contractors face OFCCP review of accessibility evidence as part of affirmative-action plan review.
- DoD Section 508 procurement-evidence review — DoD-specific 508 evaluation overlay on top of the federal-civilian baseline. Caption tracks on DoD training catalogues are a routine evaluation point.
- NIST RMF authorisation — for catalogues hosted on Authority-to-Operate (ATO) systems, the captioning vendor's data-handling posture is part of the system-security-plan evidence.
- OCR HIPAA workforce-training file review — VA and other federal-healthcare-system Lectora catalogues face the OCR HIPAA workforce-training-documentation lens. See HIPAA training captions.
- OSHA / MSHA / DoE — federal-contractor safety training. See safety training captions.
The technical caption requirement at WCAG SC 1.2.2 is consistent across regimes; Lectora's accessibility-first design means the caption track and the runtime CC toggle are configured by default. The captioning-provenance log per asset is the audit-evidence shape; for federal-contractor Lectora catalogues, it feeds the VPAT/ACR entries directly.
Related questions
Lectora vs Storyline vs Captivate for federal-contractor catalogues — which is the right tool?
All three can produce 508-conformant content; Lectora's accessibility-first defaults reduce the per-project cost of getting there. Storyline catalogues need explicit accessibility configuration on every interactive object; Captivate needs the Show Closed Captions skin setting plus per-project verification. Lectora's 508-mode template starts from accessibility-first and the author has to go out of their way to break it. For catalogues whose entire purpose is 508 conformance (federal-civilian agency, DoD, IC), Lectora is the safer default.
Does Lectora Online (cloud) differ from Lectora Desktop in captioning?
The captioning surface is the same on both — Audio-object Closed Captioning, Video-object sidecar caption track, Test/Survey/Form audio-feedback captions. The cloud-vs-desktop distinction is the authoring environment, not the publish artefact. The glossary-biased upstream workflow is unchanged.
How does the captioning-provenance log feed the VPAT?
The VPAT (a completed VPAT is formally an Accessibility Conformance Report / ACR) has rows for each WCAG success criterion with a Conformance Level column and a Remarks and Explanations column. SC 1.2.2 (Captions, Prerecorded) is the most relevant row for prerecorded training. The captioning-provenance log per asset (caption source, glossary version, reviewer, review date, glossary term count, publish target, per-course CC default verified, 508-mode template confirmed) is the per-asset evidence behind the row's Conformance Level assertion. Where the catalogue has a known gap (uncaptioned legacy assets in retrofit), the VPAT row's Remarks column documents the remediation plan.
What about controlled-information catalogues (CUI / Confidential / Secret training)?
For controlled-information Lectora catalogues, the captioning vendor's data-handling posture is part of the procurement criterion — a vendor without the appropriate facility clearance, US-citizen workforce, and ATO-compatible data-flow is not eligible for the work. The glossary-biased upstream workflow is the same; the operational delivery path runs inside the customer's FedRAMP-authorised or ATO-authorised environment with the customer's own glossary management.
Does Lectora support multi-language caption tracks on the same audio object?
Lectora supports localised projects (one project per language) and runtime language-selection via the Variable / Action system. Multi-track captions on a single audio object are not the canonical pattern; the canonical pattern is a per-language project with its own caption tracks, with learner self-selection by language at the LMS layer.
How long does a Lectora federal-contractor back-catalogue retrofit typically take?
The dominant time cost on a federal-contractor Lectora retrofit is the SME / FSO reviewer pass on glossary-applied terms — federal-program acronym registers and controlled-information markings need a careful pass, and SME/FSO availability is typically the bottleneck. A 500-asset catalogue retrofit typically runs 8 to 16 weeks, depending on SME/FSO availability and controlled-information density. The captioning vendor's throughput is rarely the constraint.
Further reading
- Articulate Storyline captions reference
- Articulate Rise captions reference
- Adobe Captivate captions reference
- iSpring captions reference
- Camtasia captions reference
- Cornerstone OnDemand captions reference
- Section 508 captions: federal contractor flow-down
- Section 504 captions: federal-financial-assistance functional access
- Compliance training captions
- Safety training captions
- Captioning RFP template — 14 questions to ask any vendor