Authoring tool reference · Adobe Captivate
Adobe Captivate captions: closed-caption slide objects, simulation captions, responsive publish
Adobe Captivate is the third major rapid-authoring tool used in corporate L&D after Articulate Storyline and Camtasia. Where Storyline owns the slide-and-trigger paradigm and Camtasia owns the timeline-based screen-recording paradigm, Captivate is the tool L&D teams reach for when the catalogue is dominated by software simulations, responsive multi-device output, and complex interaction logic that needs more than Storyline's trigger system. The captioning surface is materially different from both: Captivate has a first-class Closed Caption slide-object type with its own timeline, plus closed-captions on software-simulation recording, plus the option to import an external WebVTT track, plus a responsive HTML5 publish that needs the captions to survive the Fluid Boxes / Liquid Layout breakpoint pivot. Glossary-biased captioning at the source — before the Captivate project picks up the audio — is the workflow that produces caption files clean enough to satisfy Section 508, ADA Title II, the European Accessibility Act, and the OFCCP procurement-evidence review on a software-simulation-heavy training catalogue.
TL;DR
Adobe Captivate supports captions in three distinct surfaces: (1) Closed Caption slide-object attached to slide audio with its own per-line timing; (2) software-simulation closed captions generated automatically from the recording's mouse and keystroke actions, fully editable; (3) WebVTT or SRT import on slide-level video. Publish output is HTML5 (responsive via Fluid Boxes / Liquid Layout for multi-device delivery) or SCORM 1.2 / SCORM 2004 / xAPI / PDF; captions ride inside the publish package and survive ingestion into TalentLMS, Docebo, Absorb, Cornerstone OnDemand, and Healthstream. The captioning failure mode is the same as in every other tool — generic ASR mangles SDK names, drug names, regulatory citations, internal acronyms — and the upstream answer is the same: glossary-biased captioning before the audio enters the Captivate project. The Captivate-specific add: software-simulation closed captions need a glossary-aware pass on the auto-generated mouse-and-keystroke labels to fix UI-element proper-noun mangling.
What Captivate is, and where in the workflow captioning lands
Adobe Captivate (currently shipping as the all-new Captivate, succeeding the long-running classic Captivate line) is Adobe's flagship rapid-authoring tool for L&D. The captioning-relevant characteristics:
- Slide-and-timeline hybrid. Captivate slides have a per-slide timeline (objects animate over time within a slide) plus the project-level slide flow. Audio attaches to slides; captions attach to the audio.
- Software-simulation recording mode. Captivate's distinguishing capability is recording on-screen software actions (clicks, typing, menu selections) into editable slides with auto-generated mouse paths, click highlights, and text captions. The text captions in this mode are not the closed captions for accessibility — they are the on-screen instructional labels — but Captivate does also generate closed-caption text from these labels, which is one of the surfaces a glossary-biased workflow has to clean.
- Responsive Fluid Boxes / Liquid Layout publish. Captivate's responsive output adjusts slide layout per breakpoint (desktop, tablet, mobile). Caption display has to survive the breakpoint pivot and stay readable at the small breakpoints. The Captivate player handles the rendering, but caption-line length matters more in Captivate than in fixed-width tools.
- Multi-publish path. HTML5 (responsive), SCORM 1.2, SCORM 2004, xAPI (Tin Can), PDF (less common), Adobe Connect. The captions ride inside HTML5 / SCORM / xAPI publish artefacts.
- VR project type. Captivate supports 360° / VR slides with hotspots; captioning on VR slide-level audio uses the same slide-audio caption mechanism.
- Quizzing layer. Question slides have audio-feedback options; the audio feedback should also be captioned for accessibility, and it is a surface often missed in audit reviews.
Captioning lands at four points in a Captivate project: (1) slide audio (most common — narration over instructional content); (2) software-simulation auto-generated captions (the recorded actions); (3) imported video on a slide (with WebVTT / SRT sidecar); (4) audio feedback on quiz / interaction objects.
The Captivate caption-upload mechanic
- Closed Caption slide-object on slide audio. The Properties panel for slide audio exposes a Closed Captioning button that opens a per-line caption editor. Each line has a start time, an end time, and text. Lines are typed by the author, or auto-populated from a script if the audio was generated from text-to-speech inside Captivate. The caption-line UI is per-slide; long projects get repetitive.
- WebVTT / SRT import on slide-level video. Where the slide hosts an imported video file, Captivate accepts WebVTT or SRT as a sidecar. The import dialog maps caption tracks onto the video timeline. (A minimal sidecar sketch follows this list.)
- Software-simulation auto-generated captions. When recording a software simulation, Captivate auto-generates caption text from each captured action (e.g., "Click the File menu," "Type your password"). These are editable per slide. They are not by default exposed as closed captions to the screen reader — they are visible on-screen — but Captivate has a Show Closed Captions setting on the project that promotes them. The proper-noun mangling here is on the UI-element names captured during recording: a generic recorder labels "Click the Salesforce Lightning App Builder" as "Click the Salesforce App Builder" or worse. The glossary-aware pass fixes the UI-element register.
- Audio-feedback object captions. Question slides and interactive objects can play audio feedback. These audio clips also accept the Closed Caption object. Frequently missed in catalogue audits because the feedback audio is short and not always reviewed.
- Project-level Closed Captions setting. The Show Closed Captions setting in the Skin Editor (Project > Skin Editor) exposes captions in the published player. Without this enabled, the captions exist in the project but are not surfaced to the learner. A common audit-finding pattern: captions authored, project setting not enabled, runtime captions absent.
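For reference, the sidecar formats Captivate accepts on slide-level video are plain text. A minimal WebVTT file, with illustrative cue timings and wording, looks like this:

```vtt
WEBVTT

00:00:01.000 --> 00:00:04.500
Open the Salesforce Lightning App Builder
from the Setup menu.

00:00:04.500 --> 00:00:08.000
Select the Record Page template and click Next.
```

SRT is the same idea with numbered cues and comma decimal separators in the timestamps; either format carries the glossary-corrected text produced upstream.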
The vocabulary surface in Captivate-authored content
Captivate's strength — software simulation, responsive output, complex interactions — concentrates the captioning vocabulary surface in distinctive ways:
- Software-simulation UI-element names. The dominant Captivate use case in corporate L&D is system-rollout training: training the workforce on a new ERP, a new CRM, a new EHR, a new HRIS. The UI-element vocabulary is dense — menu names, button labels, field names, screen titles, page-section names. Generic ASR or generic-labeller mangling on the UI-element register is the most common Captivate-specific caption failure. Examples: SAP S/4HANA modules (FI / CO / SD / MM / PP / WM / QM), Workday object names (Worker, Position, Compensation Element), Salesforce Lightning components (App Page, Record Page, Home Page, Utility Bar, Lightning Web Component).
- Software-procedure verbs. Software-training caption tracks carry a much higher density of imperative-mood verbs ("Click," "Drag," "Right-click," "Hover," "Press Ctrl+Shift+P") than narrative training. Generic ASR sometimes misclassifies them as proper nouns.
- Hotkey and command-line vocabulary. Engineering and IT-systems training in Captivate carries hotkey sequences ("Ctrl+Shift+P," "Cmd+Option+I") and command-line invocations. See engineering onboarding captions for the SDK and command-line surface.
- Healthcare-EHR vocabulary. Captivate is heavily used for Epic, Cerner, Meditech, Allscripts EHR-rollout training. The EHR vocabulary surface is dense — module names, navigator names, activity names, ordering-template names — and misalignment with the customer's local register is the #1 caption failure pattern. See medical training captions and Healthstream captions.
- Regulatory citations on compliance simulations. Captivate-authored compliance simulations (the dominant SOX-control-walkthrough format) carry CFR / USC / EU regulation citations. See compliance training captions.
- Internal-acronym register. Every customer's internal acronym register applies — programme names, division names, system codes, role abbreviations. The customer's controlled glossary is the source of truth; a minimal glossary-file sketch follows this list.
- Audio-feedback wording. Quiz audio-feedback messages tend to use a smaller vocabulary than narration but their accuracy is high-stakes (a feedback message saying "Incorrect" mis-captioned as "Correct" is a learning failure, not a cosmetic one).
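The shape of the glossary file matters less than its coverage of these registers. A minimal sketch, assuming a two-column CSV that maps the form a generic recogniser or recorder tends to produce onto the canonical term (all entries illustrative):

```csv
mangled_or_generic,canonical
Salesforce App Builder,Salesforce Lightning App Builder
S4 Hana,SAP S/4HANA
work day,Workday
epic hyperspace,Epic Hyperspace
compensation element,Compensation Element
P H I,PHI
```

In practice the same list can drive the upstream captioning pass on narration audio and the review pass on software-simulation labels.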
The Captivate-specific failure modes
The five caption-related findings most likely to surface during a 508 audit, an OFCCP review, an EAA inspection, or an OCR HIPAA workforce-training file review on a Captivate-authored catalogue:
- Closed Captions authored, Show Closed Captions setting not enabled. The captions exist in the source project but the published player does not surface them. The auditor opens the published course, the captions are absent, the finding lands. Fix: project-level Show Closed Captions setting enabled, plus a per-project verification step before publish.
- Software-simulation auto-captions left as defaults. The recorded software-action labels carry generic UI-element wording that mangles the customer's actual UI vocabulary. A Salesforce-training catalogue with auto-captions reading "Click the Salesforce App Builder" instead of "Click the Salesforce Lightning App Builder" is a fidelity gap. The auditor's spot check is "does the caption match the screen?" — when it doesn't, the finding lands. Fix: glossary-aware review pass on every software-simulation slide.
- Responsive breakpoint truncation. Long caption lines wrap awkwardly at small breakpoints in Fluid Boxes / Liquid Layout output. Auditors testing on tablet / mobile breakpoints catch the readability failure. Fix: caption-line-length budgeting (~32 characters per line for mobile-first responsive output); a minimal pre-publish check is sketched after this list.
- Audio-feedback object captions absent. Quiz feedback and interactive-object feedback audio is captioned less consistently than narration. WCAG 2.1 SC 1.2.2 applies to all prerecorded audio. Fix: catalogue audit step that enumerates audio-feedback objects per project.
- VR / 360° slide audio uncaptioned. The VR project type's slide-level audio gets missed in retrofits because the captioning surface looks unfamiliar. Fix: the VR slide-audio caption mechanism is the same slide-audio Closed Caption surface; the retrofit checklist must include it.
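The breakpoint-truncation finding above is cheap to catch before publish. A minimal sketch of a pre-publish line-length check over an SRT caption track, assuming the ~32-character mobile budget (file name and budget are illustrative):

```python
import re

MAX_CHARS = 32  # assumed mobile-first line budget; adjust per breakpoint


def check_srt_line_lengths(path: str, budget: int = MAX_CHARS) -> list[str]:
    """Return a warning for every caption line that exceeds the budget."""
    warnings = []
    with open(path, encoding="utf-8") as f:
        blocks = re.split(r"\n\s*\n", f.read().strip())
    for block in blocks:
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # malformed cue: needs index, timing, and at least one text line
        index, timing, *text = lines
        for line in text:
            if len(line) > budget:
                warnings.append(
                    f"cue {index.strip()} ({timing.strip()}): "
                    f"{len(line)} chars > {budget}: {line!r}"
                )
    return warnings


if __name__ == "__main__":
    for warning in check_srt_line_lengths("module-03-narration.srt"):
        print(warning)
```

Running the check on every caption track in the catalogue turns the tablet / mobile readability failure into a fixable pre-publish list rather than an audit finding.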
The glossary-biased workflow for Captivate-authored content
- Pull the customer's controlled vocabulary upstream of the Captivate project. The customer's UI-element register, EHR / ERP / CRM module names, internal-acronym register, and regulatory citations are the project glossary. For software-simulation projects, an additional UI-element register pulled from the production system's localisation files is the highest-leverage glossary input.
- Caption the narration audio before importing into Captivate. Generate clean SRT or VTT with the project glossary applied; bring the audio plus caption track into Captivate; auto-import caption-line timing into the slide-audio Closed Caption object. This avoids the per-line manual entry that is otherwise the dominant time cost in Captivate captioning. (A glossary-correction sketch follows this list.)
- Glossary-aware pass on software-simulation auto-captions. The software-simulation captions are author-recorded UI-element labels, not ASR transcripts. The pass is faster — typically a per-project review against the UI-element register — but skipping it produces the most common Captivate audit finding.
- SME / clinical / engineering reviewer pass. For high-audit-relevance content (SOX compliance, HIPAA, clinical, federal-contractor mandatory training), a domain-expert reviewer pass is non-negotiable. The amber-highlight UI shows every glossary-applied term in context with source-line provenance.
- Enable Show Closed Captions before publish. Project-level Closed Captions skin setting enabled. Per-project verification step on the published artefact: open in the player, verify captions render.
- Caption-line-length check at responsive breakpoints. Test the published HTML5 output at all configured breakpoints. Fix wrap-length issues at the source caption track.
- Document captioning provenance per project. Caption source (vendor + glossary version), reviewer name and role, review date, glossary term count, project-level Closed Captions setting verified — five fields per project. The record lives in the per-asset metadata of the LMS where the published artefact lands.
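A minimal sketch of the glossary-correction pass on an SRT track before it enters the Captivate project, assuming a two-column glossary CSV of the shape sketched earlier and simple case-insensitive whole-phrase replacement (file names are illustrative; a production pass would also handle inflection and re-wrap long lines):

```python
import csv
import re


def load_glossary(path: str) -> dict[str, str]:
    """Load a two-column CSV: generic or mangled form -> canonical term."""
    with open(path, encoding="utf-8") as f:
        return {row["mangled_or_generic"]: row["canonical"] for row in csv.DictReader(f)}


def apply_glossary(srt_text: str, glossary: dict[str, str]) -> str:
    """Replace each glossary key with its canonical form, longest keys first."""
    for wrong in sorted(glossary, key=len, reverse=True):
        pattern = re.compile(r"\b" + re.escape(wrong) + r"\b", re.IGNORECASE)
        srt_text = pattern.sub(glossary[wrong], srt_text)
    return srt_text


if __name__ == "__main__":
    glossary = load_glossary("project-glossary.csv")
    with open("module-03-narration.raw.srt", encoding="utf-8") as f:
        corrected = apply_glossary(f.read(), glossary)
    with open("module-03-narration.srt", "w", encoding="utf-8") as f:
        f.write(corrected)
```

The corrected track and the audio then go into Captivate together, so the per-slide Closed Caption text starts clean instead of needing slide-by-slide correction.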
Captivate-specific captioning RFP questions
Procurement teams running a captioning RFP for a Captivate-authored catalogue will want to ask several Captivate-specific questions. From our captioning RFP template:
- Software-simulation glossary support. Does the vendor handle the UI-element register for the customer's production systems (Salesforce, Workday, SAP, Epic, etc.)? Can the vendor ingest a UI-element list from the system's localisation files? (A minimal extraction sketch follows this list.)
- Slide-audio caption-line timing import. Does the vendor deliver caption tracks in a format that imports cleanly into Captivate's per-slide Closed Caption object, with line timings preserved? Or is the vendor's deliverable per-project SRT files only, requiring manual re-entry?
- Responsive caption-line-length budgeting. Does the vendor's deliverable respect responsive breakpoint constraints? Caption lines need a per-breakpoint readable length.
- Audio-feedback object coverage. Does the vendor enumerate every audio-bearing object in the project (slide audio, audio feedback, interactive-object feedback) or only narration audio?
- VR / 360° slide audio coverage. Does the vendor handle the VR project type's slide-level audio?
- Project-level Show Closed Captions verification. Does the vendor's QA include the project-level Closed Captions skin setting verification, or only the source caption track?
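How the UI-element register arrives varies by system. A minimal sketch of the extraction step, assuming the labels export as a flat key=value .properties file (a common localisation format; the file name and keys are illustrative), pulling the label values out as glossary candidates:

```python
def extract_ui_labels(path: str) -> list[str]:
    """Collect label values from a key=value .properties export, skipping comments."""
    labels = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", "!")) or "=" not in line:
                continue
            _, value = line.split("=", 1)
            if value.strip():
                labels.add(value.strip())
    return sorted(labels)


if __name__ == "__main__":
    for label in extract_ui_labels("crm_ui_labels_en_US.properties"):
        print(label)  # review, then merge into the canonical column of the project glossary
```

Exports from other systems (XLIFF, CSV, XML custom-label files) need a different parser, but the output is the same: a reviewed list of canonical UI-element names feeding the project glossary.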
How Captivate captions intersect Section 508, ADA Title II, EAA, and OFCCP flow-down
Captivate-authored content typically faces several accessibility regimes:
- Section 508 — federal contractors and federal-agency-deployed Captivate catalogues face a WCAG 2.0 AA technical bar, set by the 2017 Section 508 Refresh. The Captivate publish output plus the embedded caption tracks satisfy the technical requirement; the procurement evidence is the captioning provenance log per project.
- Section 504 — federal-financial-assistance-recipient Captivate catalogues (universities, hospitals, large non-profits) face the functional-access standard.
- ADA Title II — state and local government Captivate-authored mandatory training (state-employee training catalogues, county-government training, public-university HR training) carries the 2026-04-24 WCAG 2.1 AA bar.
- ADA Title III — private-sector Captivate-authored content faces the indirect technical bar through case-law evolution.
- European Accessibility Act — EU-operating Captivate-authored content in scope (B2C surfaces) faces EN 301 549 / WCAG 2.1 AA. See our EAA Q3 2026 enforcement landscape post.
- AODA — Ontario-operating Captivate catalogues face IASR § 14 WCAG 2.0 AA.
- OSHA / MSHA / safety-training — see safety training captions.
- HIPAA workforce-training file review — see HIPAA training captions.
The technical caption requirement at WCAG SC 1.2.2 (Captions, Prerecorded) is consistent across regimes; Captivate's publish-target compatibility means the caption track rides into the LMS and onward to the learner. The captioning provenance log per project is the audit-evidence shape.
Related questions
Does the all-new Captivate (vs Captivate Classic) change the captioning surface?
The all-new Captivate is a re-architected authoring environment with a different project file format and a streamlined UI; the captioning surfaces (slide-audio Closed Caption object, video sidecar import, software-simulation auto-captions, project-level Show Closed Captions) are present in both, with the all-new Captivate exposing them more prominently. The glossary-biased upstream workflow is unchanged. Customers running mixed Classic + all-new portfolios should confirm caption-track import behaviour on both.
Can I import an SRT file directly into Captivate's Closed Caption slide-object timing?
The slide-audio Closed Caption object accepts caption-line timing imported from external sources via the project's caption-import path; the cleanest workflow is to deliver SRT or WebVTT alongside the audio, then pull the timing into Captivate. Per-line manual entry is the fallback when the workflow doesn't include an upstream caption-track delivery.
Is the software-simulation auto-caption text the same as a closed caption for accessibility?
Not quite. The auto-caption text in software-simulation mode is an on-screen instructional label describing the recorded action; it doubles as accessibility caption when the project's Closed Captions skin setting is enabled, but the wording is action-focused rather than audio-transcribed. For a software simulation that also carries narration audio, both surfaces need to be captioned — the action labels via the simulation surface, the narration audio via the slide-audio Closed Caption object.
Does Captivate support multi-language caption tracks on the same slide audio?
Captivate supports localised projects (one project per language) more naturally than multi-track captions on a single slide-audio object. The multi-language deployment pattern is a parent project with localised child projects; each child project has its own caption tracks. For LMS deployment, the language-specific child project is published and uploaded as a separate course in the LMS.
What about Adobe Captivate Prime (now Adobe Learning Manager)?
Adobe Learning Manager (ALM, formerly Captivate Prime) is Adobe's LMS, distinct from the Captivate authoring tool. ALM accepts SCORM 1.2 / SCORM 2004 / xAPI / AICC content packages with embedded captions; the captioning workflow is at the authoring layer, not the LMS layer. The ALM player surfaces the captions from the published artefact.
How much time does the glossary-biased workflow save on a Captivate catalogue retrofit?
The dominant time cost on a Captivate retrofit is the per-line manual caption-text correction at the slide-audio Closed Caption object level — a 30-minute course can take 60 to 90 minutes to caption manually, with 20 to 40 minutes of that on proper-noun mangling correction. Glossary-biased upstream captioning collapses the proper-noun correction to near-zero; the dominant remaining cost is the project-level QA verification (Show Closed Captions setting, breakpoint checks, audio-feedback enumeration) which is consistent across the catalogue.
Further reading
- Articulate Storyline captions reference
- Articulate Rise captions reference
- Camtasia captions reference
- Cornerstone OnDemand captions reference
- TalentLMS captions reference
- Docebo captions reference
- Absorb LMS captions reference
- Healthstream captions reference
- Section 508 captions: federal contractor flow-down
- Engineering onboarding captions
- Medical training captions
- Captioning RFP template — 14 questions to ask any vendor