Authoring tool reference · Adobe Captivate

Adobe Captivate captions: closed-caption slide objects, simulation captions, responsive publish

Adobe Captivate is the third major rapid-authoring tool used in corporate L&D, after Articulate Storyline and Camtasia. Where Storyline owns the slide-and-trigger paradigm and Camtasia owns the timeline-based screen-recording paradigm, Captivate is the tool L&D teams reach for when the catalogue is dominated by software simulations, responsive multi-device output, and interaction logic too complex for Storyline's trigger system. The captioning surface is materially different from both: Captivate has a first-class Closed Caption slide-object type with its own timeline, closed captions on software-simulation recordings, the option to import an external WebVTT track, and a responsive HTML5 publish that needs captions to survive the Fluid Boxes / Liquid Layout breakpoint pivot. Glossary-biased captioning at the source, before the Captivate project picks up the audio, is the workflow that produces caption files clean enough to satisfy Section 508, ADA Title II, the European Accessibility Act, and an OFCCP procurement-evidence review on a software-simulation-heavy training catalogue.

TL;DR

Adobe Captivate supports captions in three distinct surfaces: (1) Closed Caption slide-object attached to slide audio with its own per-line timing; (2) software-simulation closed captions generated automatically from the recording's mouse and keystroke actions, fully editable; (3) WebVTT or SRT import on slide-level video. Publish output is HTML5 (responsive via Fluid Boxes / Liquid Layout for multi-device delivery) or SCORM 1.2 / SCORM 2004 / xAPI / PDF; captions ride inside the publish package and survive ingestion into TalentLMS, Docebo, Absorb, Cornerstone OnDemand, and Healthstream. The captioning failure mode is the same as in every other tool — generic ASR mangles SDK names, drug names, regulatory citations, internal acronyms — and the upstream answer is the same: glossary-biased captioning before the audio enters the Captivate project. The Captivate-specific add: software-simulation closed captions need a glossary-aware pass on the auto-generated mouse-and-keystroke labels to fix UI-element proper-noun mangling.

What Captivate is, and where in the workflow captioning lands

Adobe Captivate (currently shipping as the all-new Captivate, succeeding the long-running classic Captivate line) is Adobe's flagship rapid-authoring tool for L&D. The captioning-relevant question is where in a project captions can land:

Captioning lands at four points in a Captivate project: (1) slide audio (most common — narration over instructional content); (2) software-simulation auto-generated captions (the recorded actions); (3) imported video on a slide (with WebVTT / SRT sidecar); (4) audio feedback on quiz / interaction objects.

The Captivate caption-upload mechanic

The vocabulary surface in Captivate-authored content

Captivate's strengths (software simulation, responsive output, complex interactions) concentrate the captioning vocabulary surface in distinctive ways: UI-element proper nouns from the simulated application, EHR / ERP / CRM module names, internal acronyms, and regulatory citations.

The Captivate-specific failure modes

The five caption-related findings most likely to surface during a 508 audit, an OFCCP review, an EAA inspection, or an OCR HIPAA workforce-training file review on a Captivate-authored catalogue:

  1. Closed Captions authored, Show Closed Captions setting not enabled. The captions exist in the source project but the published player does not surface them. The auditor opens the published course, the captions are absent, the finding lands. Fix: project-level Show Closed Captions setting enabled, plus a per-project verification step before publish.
  2. Software-simulation auto-captions left as defaults. The recorded software-action labels carry generic UI-element wording that mangles the customer's actual UI vocabulary. A Salesforce-training catalogue with auto-captions reading "Click the Salesforce App Builder" instead of "Click the Salesforce Lightning App Builder" is a fidelity gap. The auditor's spot check is "does the caption match the screen?" — when it doesn't, the finding lands. Fix: glossary-aware review pass on every software-simulation slide.
  3. Responsive breakpoint truncation. Long caption lines wrap awkwardly at small breakpoints in Fluid Boxes / Liquid Layout output. Auditors testing on tablet / mobile breakpoints catch the readability failure. Fix: caption-line-length budgeting (~32 characters per line for mobile-first responsive output).
  4. Audio-feedback object captions absent. Quiz feedback and interactive-object feedback audio is captioned less consistently than narration. WCAG 2.1 SC 1.2.2 applies to all prerecorded audio. Fix: catalogue audit step that enumerates audio-feedback objects per project.
  5. VR / 360° slide audio uncaptioned. The VR project type's slide-level audio gets missed in retrofits because the captioning surface looks unfamiliar. Fix: the VR slide-audio caption mechanism is the same slide-audio Closed Caption surface; the retrofit checklist must include it.
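The breakpoint-truncation check in finding 3 can be automated before publish. Below is a minimal sketch that flags caption lines exceeding a per-breakpoint character budget; the specific budget values are assumptions drawn from the ~32-character mobile guideline above, not Captivate defaults.

```python
# Sketch: flag caption lines that exceed a per-breakpoint character budget.
# The budget values are assumptions, not Captivate defaults.
BREAKPOINT_BUDGETS = {"desktop": 42, "tablet": 37, "mobile": 32}

def over_budget(caption_lines, budgets=BREAKPOINT_BUDGETS):
    """Return {breakpoint: [(line_no, text, length), ...]} for over-budget lines."""
    findings = {bp: [] for bp in budgets}
    for i, line in enumerate(caption_lines, start=1):
        for bp, limit in budgets.items():
            if len(line) > limit:
                findings[bp].append((i, line, len(line)))
    return findings

lines = [
    "Click the Salesforce Lightning App Builder",  # 42 characters
    "Select New.",
]
report = over_budget(lines)
```

A line that fits the desktop budget can still fail tablet and mobile, which is exactly the failure mode auditors catch when they test the responsive publish on smaller breakpoints.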

The glossary-biased workflow for Captivate-authored content

  1. Pull the customer's controlled vocabulary upstream of the Captivate project. The customer's UI-element register, EHR / ERP / CRM module names, internal-acronym register, regulatory citations are the project glossary. For software-simulation projects, an additional UI-element register pulled from the production system's localisation files is the highest-leverage glossary input.
  2. Caption the narration audio before importing into Captivate. Generate clean SRT or VTT with the project glossary applied; bring the audio plus caption track into Captivate; auto-import caption-line timing into the slide-audio Closed Caption object. This avoids the per-line manual entry that is otherwise the dominant time cost in Captivate captioning.
  3. Glossary-aware pass on software-simulation auto-captions. The software-simulation captions are author-recorded UI-element labels, not ASR transcripts. The pass is faster — typically a per-project review against the UI-element register — but skipping it produces the most common Captivate audit finding.
  4. SME / clinical / engineering reviewer pass. For high-audit-relevance content (SOX compliance, HIPAA, clinical, federal-contractor mandatory training), a domain-expert reviewer pass is non-negotiable. The amber-highlight UI shows every glossary-applied term in context with source-line provenance.
  5. Enable Show Closed Captions before publish. Project-level Closed Captions skin setting enabled. Per-project verification step on the published artefact: open in the player, verify captions render.
  6. Caption-line-length check at responsive breakpoints. Test the published HTML5 output at all configured breakpoints. Fix wrap-length issues at the source caption track.
  7. Document captioning provenance per project. Caption source (vendor + glossary version), reviewer name and role, review date, glossary term count, project-level Closed Captions setting verified — five fields per project. Lives in the per-asset metadata of the LMS where the published artefact lands.
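Step 2's "glossary applied" correction can be sketched as a substitution pass over the SRT text that leaves cue-index and timestamp lines untouched. The glossary entries below are hypothetical examples, not a real customer register.

```python
import re

# Sketch: apply a customer glossary to SRT cue text, leaving index and
# timestamp lines untouched. Glossary entries here are hypothetical.
GLOSSARY = {
    "Salesforce App Builder": "Salesforce Lightning App Builder",
    "epic": "Epic",  # hypothetical EHR product-name correction
}

TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> ")

def apply_glossary(srt_text, glossary=GLOSSARY):
    out = []
    for line in srt_text.splitlines():
        if line.strip().isdigit() or TIMESTAMP.match(line):
            out.append(line)  # cue index / timing lines pass through unchanged
        else:
            for wrong, right in glossary.items():
                line = re.sub(r"\b" + re.escape(wrong) + r"\b", right, line)
            out.append(line)
    return "\n".join(out)

srt = (
    "1\n"
    "00:00:01,000 --> 00:00:03,000\n"
    "Click the Salesforce App Builder.\n"
)
fixed = apply_glossary(srt)
```

In practice the glossary application happens inside the captioning vendor's ASR pipeline rather than as a post-hoc find-and-replace, but the shape of the correction is the same.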

Captivate-specific captioning RFP questions

Procurement teams running a captioning RFP for a Captivate-authored catalogue will want to ask several Captivate-specific questions, drawn from our captioning RFP template.

How Captivate captions intersect Section 508, ADA Title II, EAA, and OFCCP flow-down

Captivate-authored content typically faces several accessibility regimes at once: Section 508 for federal procurement, ADA Title II for public-sector deployment, the European Accessibility Act for EU-market content, and OFCCP flow-down for federal contractors.

The technical caption requirement at WCAG SC 1.2.2 (Captions, Prerecorded) is consistent across regimes; Captivate's publish-target compatibility means the caption track rides into the LMS and onward to the learner. The captioning provenance log per project is the audit-evidence shape.
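The per-project provenance log described above reduces to a flat five-field record. A minimal sketch, assuming the field names and values below (which are illustrative, not a mandated schema):

```python
from dataclasses import dataclass, asdict

# Sketch: the five-field provenance record, as a flat structure suitable
# for per-asset LMS metadata. Field names and values are assumptions.
@dataclass
class CaptionProvenance:
    caption_source: str       # vendor + glossary version
    reviewer: str             # name and role
    review_date: str          # ISO 8601
    glossary_term_count: int
    cc_setting_verified: bool # project-level Show Closed Captions checked

record = CaptionProvenance(
    caption_source="ExampleVendor / glossary v3.2",  # hypothetical vendor
    reviewer="J. Rivera, Clinical SME",               # hypothetical reviewer
    review_date="2024-05-14",
    glossary_term_count=412,
    cc_setting_verified=True,
)
row = asdict(record)  # ready to serialise into the LMS metadata store
```

Keeping the record flat makes it trivial to export as a CSV row per project when an auditor asks for the catalogue-wide evidence table.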

Related questions

Does the all-new Captivate (vs Captivate Classic) change the captioning surface?

The all-new Captivate is a re-architected authoring environment with a different project file format and a streamlined UI; the captioning surfaces (slide-audio Closed Caption object, video sidecar import, software-simulation auto-captions, project-level Show Closed Captions) are present in both, with the all-new Captivate exposing them more prominently. The glossary-biased upstream workflow is unchanged. Customers running mixed Classic + all-new portfolios should confirm caption-track import behaviour on both.

Can I import an SRT file directly into Captivate's Closed Caption slide-object timing?

The slide-audio Closed Caption object accepts caption-line timing imported from external sources via the project's caption-import path; the cleanest workflow is to deliver SRT or WebVTT alongside the audio, then pull the timing into Captivate. Per-line manual entry is the fallback when the workflow doesn't include an upstream caption-track delivery.
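When the vendor delivers SRT but the workflow needs WebVTT (or vice versa), the conversion is mechanical: WebVTT adds a `WEBVTT` header and uses a dot rather than a comma as the millisecond separator. A minimal sketch, assuming well-formed SRT input without styling or positioning cues:

```python
import re

# Sketch: minimal SRT-to-WebVTT conversion. Assumes well-formed SRT with
# no styling, positioning, or multi-hour timestamps beyond HH:MM:SS,mmm.
def srt_to_vtt(srt_text):
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt_text)
    return "WEBVTT\n\n" + body

vtt = srt_to_vtt("1\n00:00:01,000 --> 00:00:03,000\nHello\n")
```

Real-world caption tracks with cue settings or chained cues need a proper parser, but for the plain narration tracks a glossary-biased vendor delivers, this covers the format gap.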

Is the software-simulation auto-caption text the same as a closed caption for accessibility?

Not quite. The auto-caption text in software-simulation mode is an on-screen instructional label describing the recorded action; it doubles as accessibility caption when the project's Closed Captions skin setting is enabled, but the wording is action-focused rather than audio-transcribed. For a software simulation that also carries narration audio, both surfaces need to be captioned — the action labels via the simulation surface, the narration audio via the slide-audio Closed Caption object.

Does Captivate support multi-language caption tracks on the same slide audio?

Captivate supports localised projects (one project per language) more naturally than multi-track captions on a single slide-audio object. The multi-language deployment pattern is a parent project with localised child projects; each child project has its own caption tracks. For LMS deployment, the language-specific child project is published and uploaded as a separate course in the LMS.

What about Adobe Captivate Prime (now Adobe Learning Manager)?

Adobe Learning Manager (ALM, formerly Captivate Prime) is Adobe's LMS, distinct from the Captivate authoring tool. ALM accepts SCORM 1.2 / SCORM 2004 / xAPI / AICC content packages with embedded captions; the captioning workflow is at the authoring layer, not the LMS layer. The ALM player surfaces the captions from the published artefact.

How much time does the glossary-biased workflow save on a Captivate catalogue retrofit?

The dominant time cost on a Captivate retrofit is the per-line manual caption-text correction at the slide-audio Closed Caption object level — a 30-minute course can take 60 to 90 minutes to caption manually, with 20 to 40 minutes of that on proper-noun mangling correction. Glossary-biased upstream captioning collapses the proper-noun correction to near-zero; the dominant remaining cost is the project-level QA verification (Show Closed Captions setting, breakpoint checks, audio-feedback enumeration) which is consistent across the catalogue.

Further reading