Authoring tool reference · Lectora

Lectora captions: Section 508 / VPAT-grade authoring for federal-contractor catalogues

Lectora (now ELB Learning's Lectora, in Lectora Online and Lectora Desktop variants) is the authoring tool with the longest track record at federal-civilian agencies, Department of Defense contractors, federal-grant-funded universities, military training commands, the intelligence community, and financial-services firms operating as federal contractors. Where Articulate Storyline is the dominant corporate-L&D authoring tool and Adobe Captivate the dominant software-simulation tool, Lectora dominates catalogues where the Section 508 / VPAT (Voluntary Product Accessibility Template) procurement bar is the entry condition, not an afterthought. Lectora's accessibility-first design (semantic HTML5 publish, manual focus-order control, explicit ARIA roles, Section-508-mode templates) pairs naturally with the captioning question. The captioning surface is conventional: slide-level closed captions on audio, video sidecar SRT/VTT, SCORM 1.2 / SCORM 2004 / xAPI / cmi5 / HTML5 publish. What's distinctive is the audit lens: federal-contractor catalogues face VPAT-evidence review on top of WCAG conformance, and the captioning-provenance log is part of the VPAT itself. Glossary-biased upstream captioning is what produces caption files clean enough to satisfy that lens.

TL;DR

Lectora supports captions in two distinct surfaces: (1) slide-level closed captions on slide audio, configured via the Audio object's properties; (2) video sidecar caption track on imported video objects, with SRT or WebVTT supported. Lectora's accessibility-first design — Section 508 templates, semantic HTML5 publish, focus-order control, explicit ARIA roles, alt-text inheritance — sits alongside the captioning surface. SCORM 1.2 / SCORM 2004 (2nd / 3rd / 4th edition) / xAPI / cmi5 / HTML5 publish carries captions inside the artefact for ingestion into TalentLMS, Docebo, Absorb, Cornerstone OnDemand, and federal-contractor LMSes (Saba, Plateau, JKO, NIH LMS, FedTalent, GSA Online University). The captioning failure mode is the same as elsewhere — generic ASR mangles regulatory citations, military-acronym registers, intelligence-community proper nouns, federal-program names — and the upstream answer is glossary-biased captioning before the audio enters the Lectora project. The Lectora-specific add: VPAT evidence requires per-asset captioning provenance metadata that lives in the LMS's per-asset metadata schema, and the federal-contractor catalogue's controlled vocabulary is denser than commercial-segment vocabularies.

What Lectora is, and where in the workflow captioning lands

Lectora is a long-running e-learning authoring tool now distributed by ELB Learning under two variants: Lectora Online (cloud-authored) and Lectora Desktop (Windows desktop). The captioning-relevant characteristics:

Captioning lands at three points: (1) page-level audio objects (most common — narration); (2) video objects (imported video on a page); (3) audio feedback on Test / Survey / Form objects.

The Lectora caption-upload mechanic
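Whichever import path is used, the upstream deliverable is a plain-text cue file. A minimal illustration of the same two cues in the two supported formats (timestamps and caption text here are invented for the example, not taken from any real course):

```
1
00:00:01,000 --> 00:00:04,200
DFARS 252.204-7012 applies to this contract.

2
00:00:04,400 --> 00:00:07,000
See NIST SP 800-171 for the control set.
```

The WebVTT equivalent differs in the header line and the decimal separator in timestamps:

```
WEBVTT

00:00:01.000 --> 00:00:04.200
DFARS 252.204-7012 applies to this contract.

00:00:04.400 --> 00:00:07.000
See NIST SP 800-171 for the control set.
```

Note the glossary-sensitive tokens (DFARS clause numbers, NIST SP IDs) live in the cue text itself, which is why the glossary has to be applied upstream: once the file is imported, Lectora treats the text as opaque.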

The vocabulary surface in Lectora-authored content

Lectora's federal-contractor and military-training concentration produces a captioning vocabulary surface unlike any of the commercial-segment authoring tools:

The Lectora-specific failure modes

The five caption-related findings most likely to surface during a federal-contractor accessibility audit, an OFCCP review, a VPAT evidence-of-conformance review, a DoD Section 508 procurement-evidence review, or a federal-civilian-agency Section 508 compliance check on a Lectora-authored catalogue:

  1. Auto-generated ASR captions on federal-program vocabulary. Lectora's optional ASR-assisted caption generation produces generic-ASR-grade output on the federal-program acronym register, controlled-information markings, military-rank registers, and weapons-systems designators. The 508 evaluator's spot check against a randomly-selected slide will catch this within the first audit hour. Fix: glossary-biased upstream captioning, with the clean SRT imported into the Audio object.
  2. Section-508 template not used. Lectora's accessibility-first design requires the author to start from a 508-mode template; starting from a non-508 template means the focus-order, ARIA roles, and caption-toggle defaults are not pre-configured. The 508 evaluator will flag the default-state of the closed-caption toggle. Fix: per-project verification at publish that the 508-mode template is the source.
  3. VPAT entry mismatch with as-built behaviour. A VPAT (or its successor, the Accessibility Conformance Report / ACR) is the procurement-evidence document. A VPAT that asserts WCAG SC 1.2.2 conformance for prerecorded captions while the as-built artefact has caption-track gaps is a procurement-evidence failure, and sometimes a contract-vehicle eligibility issue. Fix: populate VPAT entries from the per-asset captioning-provenance log rather than estimating them, with a gap-detection step before VPAT sign-off.
  4. Audio feedback on Test / Survey / Form objects uncaptioned. Question-feedback audio is captioned less consistently than narration. WCAG 2.1 SC 1.2.2 covers all prerecorded media. Fix: catalogue audit step that enumerates every audio-bearing object per project.
  5. SCORM 1.2 fall-back where the federal LMS doesn't support SCORM 2004 caption metadata. Some legacy federal LMSes (older Saba builds, JKO legacy modes, agency-specific deployments) require a SCORM 1.2 publish target, under which the modern caption-metadata behaviour is partially lost. Fix: per-LMS verification step on a sample course; document the publish-target choice in the captioning-provenance log.
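The gap-detection step behind finding 3 can be automated cheaply before VPAT sign-off. A minimal sketch in Python, assuming SRT input; the 5-second threshold is an illustrative default, not an audit standard:

```python
import re
from datetime import timedelta

def parse_ts(ts: str) -> timedelta:
    """Parse an SRT timestamp ("HH:MM:SS,mmm") into a timedelta."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return timedelta(hours=int(h), minutes=int(m),
                     seconds=int(s), milliseconds=int(ms))

def caption_gaps(srt_text: str, max_gap_s: float = 5.0):
    """Return (gap_start, gap_end) timestamp pairs where consecutive
    cues sit further apart than max_gap_s: candidate caption-track
    gaps to check against the narration audio before VPAT sign-off."""
    cue_times = re.findall(
        r"(\d\d:\d\d:\d\d,\d\d\d) --> (\d\d:\d\d:\d\d,\d\d\d)", srt_text)
    gaps = []
    for (_, end), (start, _) in zip(cue_times, cue_times[1:]):
        if (parse_ts(start) - parse_ts(end)).total_seconds() > max_gap_s:
            gaps.append((end, start))
    return gaps
```

A flagged gap is not automatically a finding (the narration may genuinely pause), but every flagged span gets listened to before the SC 1.2.2 row is asserted.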

The glossary-biased workflow for Lectora-authored content

  1. Pull the customer's controlled vocabulary. Federal-program registers, military training-command and rank registers, intelligence-community vocabulary, federal-regulation citation registers (CFR / USC / DFARS / FAR clauses, NIST SP control IDs, MITRE ATT&CK technique IDs), acquisition-and-contract vocabulary, weapons-systems / platform names, agency-specific clinical or program vocabulary. The customer's controlled vocabulary is the highest-leverage glossary input; for federal-contractor catalogues, an agency-specific vocabulary register pulled from the agency's own publications is the second.
  2. Caption the narration audio before importing into Lectora. Generate clean SRT with the project glossary applied. Import into the Audio object's Closed Captioning configuration via the SRT-import path; per-line timings carry over.
  3. Caption the video objects. Same upstream workflow: clean SRT or WebVTT, imported as a sidecar caption track on the Video object.
  4. Caption the Test / Survey / Form audio-feedback objects. Audio-object-level caption track per question.
  5. SME / legal-compliance / DoD-FSO reviewer pass. Domain-expert review is non-negotiable on federal-contractor and DoD-grade catalogues — for catalogues that touch CUI / Confidential / Secret content, an FSO-equivalent review on glossary-applied terms verifying no spillage of controlled-information markings outside the marked segments. The amber-highlight UI shows source-line provenance.
  6. Publish-time verification. Section-508 template confirmed as source. Per-course closed-caption-toggle default-state set to on. SCORM 2004 4th edition or xAPI / cmi5 publish target preferred; SCORM 1.2 only where the federal LMS demands it.
  7. VPAT / ACR evidence packet. The captioning-provenance log per asset (caption source, glossary version, reviewer, review date, glossary term count, publish target, per-course CC default verified, 508-mode template confirmed) maps to VPAT entries for SC 1.2.2 (Captions, Prerecorded), SC 1.2.4 (Captions, Live — N/A for prerecorded), and SC 1.2.5 (Audio Description, Prerecorded — adjacent surface). The VPAT auditor's evidence request is the captioning-provenance log.
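The SME / FSO reviewer pass in step 5 can be preceded by a cheap automated spot check, for example flagging glossary terms that appear in the caption text only in a mangled case form (generic ASR's most common failure on acronym registers). A minimal sketch; the function and its heuristic are illustrative, not part of any Lectora or vendor API:

```python
import re

def glossary_spot_check(caption_text: str, glossary: list[str]) -> list[str]:
    """Return glossary terms that occur in the caption text only in a
    case-mangled form (e.g. "dfars" where the glossary says "DFARS").
    A pre-review triage step, not a substitute for the SME/FSO pass."""
    findings = []
    for term in glossary:
        exact = re.search(re.escape(term), caption_text)
        loose = re.search(re.escape(term), caption_text, re.IGNORECASE)
        if loose and not exact:
            findings.append(term)
    return findings
```

Terms that are absent entirely, or mangled beyond a case change, still need the human pass; this only catches the cheapest class of error early.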


Lectora-specific captioning RFP questions

Procurement teams running a captioning RFP for a Lectora-authored catalogue — usually a federal-contractor or DoD-grade procurement — will want to ask several Lectora-specific questions. From our captioning RFP template:

How Lectora captions intersect Section 508, ADA Title II, EAA, OFCCP, and the federal LMS landscape

Lectora-authored content typically faces several accessibility regimes simultaneously, with the federal-contractor segment carrying the densest audit calendar:

The technical caption requirement at WCAG SC 1.2.2 is consistent across regimes; Lectora's accessibility-first design means the caption track and the runtime CC toggle are configured by default. The captioning-provenance log per asset is the audit-evidence shape; for federal-contractor Lectora catalogues, it feeds the VPAT/ACR entries directly.

Related questions

Lectora vs Storyline vs Captivate for federal-contractor catalogues — which is the right tool?

All three can produce 508-conformant content; Lectora's accessibility-first defaults reduce the per-project cost of getting there. Storyline catalogues need explicit accessibility configuration on every interactive object; Captivate needs the Show Closed Captions skin setting plus per-project verification. Lectora's 508-mode template starts accessibility-first, and the author has to go out of their way to break it. For catalogues whose entire purpose is 508 conformance (federal-civilian agency, DoD, IC), Lectora is the safer default.

Does Lectora Online (cloud) differ from Lectora Desktop in captioning?

The captioning surface is the same on both — Audio-object Closed Captioning, Video-object sidecar caption track, Test/Survey/Form audio-feedback captions. The cloud-vs-desktop distinction is the authoring environment, not the publish artefact. The glossary-biased upstream workflow is unchanged.

How does the captioning-provenance log feed the VPAT?

The VPAT (whose completed form is now formally the Accessibility Conformance Report / ACR under the VPAT 2.x editions) has rows for each WCAG success criterion, with a Conformance Level column and a Remarks and Explanations column. SC 1.2.2 (Captions, Prerecorded) is the most relevant row for prerecorded training. The captioning-provenance log per asset (caption source, glossary version, reviewer, review date, glossary term count, publish target, per-course CC default verified, 508-mode template confirmed) is the per-asset evidence behind the row's Conformance Level assertion. Where the catalogue has a known gap (uncaptioned legacy assets in retrofit), the VPAT row's Remarks column documents the remediation plan.
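The roll-up from per-asset provenance entries to a VPAT/ACR row can be sketched as below. The Conformance Level vocabulary ("Supports" / "Partially Supports") follows the ITI VPAT template; the field names and roll-up rule are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class ProvenanceEntry:
    # Fields mirror the provenance-log shape described in the text;
    # the names themselves are an assumption, not a published schema.
    asset_id: str
    caption_source: str       # e.g. "glossary-biased vendor captioning"
    glossary_version: str
    reviewer: str
    review_date: str
    publish_target: str       # e.g. "SCORM 2004 4th ed."
    cc_default_on: bool
    template_508_confirmed: bool

def vpat_row_1_2_2(entries: list[ProvenanceEntry]) -> dict:
    """Roll per-asset provenance into the SC 1.2.2 VPAT/ACR row."""
    gaps = [e.asset_id for e in entries
            if not (e.cc_default_on and e.template_508_confirmed)]
    level = "Supports" if not gaps else "Partially Supports"
    remarks = ("All assets carry reviewed, glossary-biased caption tracks."
               if not gaps
               else "Remediation in progress for: " + ", ".join(gaps))
    return {"criterion": "1.2.2 Captions (Prerecorded)",
            "conformance_level": level,
            "remarks": remarks}
```

The point of the sketch is the direction of data flow: the row is derived from the log, never the other way around.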

What about controlled-information catalogues (CUI / Confidential / Secret training)?

For controlled-information Lectora catalogues, the captioning vendor's data-handling posture is part of the procurement criterion — a vendor without the appropriate facility clearance, US-citizen workforce, and ATO-compatible data-flow is not eligible for the work. The glossary-biased upstream workflow is the same; the operational delivery path runs inside the customer's FedRAMP-authorised or ATO-authorised environment with the customer's own glossary management.

Does Lectora support multi-language caption tracks on the same audio object?

Lectora supports localised projects (one project per language) and runtime language-selection via the Variable / Action system. Multi-track captions on a single audio object are not the canonical pattern; the canonical pattern is a per-language project with its own caption tracks, with learner self-selection by language at the LMS layer.

How long does a Lectora federal-contractor back-catalogue retrofit typically take?

The dominant time cost on a federal-contractor Lectora retrofit is the SME / FSO reviewer pass on glossary-applied terms: federal-program acronym registers and controlled-information markings need a careful pass, and SME/FSO availability is typically the bottleneck. A 500-asset catalogue retrofit runs 8 to 16 weeks depending on SME/FSO availability and controlled-information density. The captioning vendor's throughput is rarely the constraint.
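Since SME/FSO review is the binding constraint, the timeline is back-of-envelope arithmetic on reviewer throughput. A sketch; the weekly throughput figures in the usage note are assumptions for illustration, not benchmarks:

```python
import math

def retrofit_weeks(asset_count: int, sme_assets_per_week: int) -> int:
    """Calendar weeks for a retrofit when SME/FSO review is the
    bottleneck; captioning-vendor throughput is assumed non-binding."""
    return math.ceil(asset_count / sme_assets_per_week)
```

At an assumed ~63 assets reviewed per week a 500-asset catalogue clears in 8 weeks; at ~32 per week (heavier controlled-information density, part-time FSO) it stretches to 16, matching the range above.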

Further reading