Platform reference · Cornerstone OnDemand

Cornerstone OnDemand captions: enterprise LMS captioning that survives audit

Cornerstone OnDemand (now Cornerstone, following the Saba acquisition and the rebrand) is the enterprise LMS that underpins L&D at the bulk of the Fortune 1000: manufacturing, financial services, federal contractors, large healthcare systems, multinational SaaS, the Big Four consultancies. Where TalentLMS, Docebo, and Absorb live in the SMB-and-mid-market range, Cornerstone is what L&D operations leads at 5,000-to-100,000-employee organisations are running. The captioning question on Cornerstone is not whether the platform supports captions. It does, with a content-asset-level caption-track-upload model, SCORM 1.2 / SCORM 2004 / xAPI / AICC publish-target compatibility, and multi-language support. The real question is whether the captions survive the audit lens that comes with enterprise scale: OFCCP for federal contractors, the Section 508 procurement-evidence review, the European Accessibility Act mandatory-training inspection, the OCR HIPAA workforce-training file review for healthcare tenants. At enterprise scale, the captioning provenance is part of the audit evidence, and the upstream glossary-biased workflow is what makes the provenance log defensible.

TL;DR

Cornerstone supports caption-track upload at the content-asset level via SRT and WebVTT for direct video assets in the Content Library, plus full SCORM 1.2 / SCORM 2004 / xAPI / AICC ingestion for content authored in Articulate Storyline, Articulate Rise 360, Adobe Captivate, Lectora, and iSpring with embedded captions in the published package. Multi-language caption tracks are supported on a single video asset via the language-tagged caption-track upload pattern. The technical caption upload is straightforward; the difficulty is upstream. At enterprise scale, the caption-vocabulary surface is dense (drug names, SDK terms, regulatory citations, product names, customer-confidential identifiers, internal acronym registers, multi-jurisdictional regulatory references), and ASR-generated captions mangle it systematically. Glossary-biased captioning with the enterprise's controlled vocabulary as the project glossary is what produces caption files clean enough to satisfy the audit lens. The provenance log per asset is the audit-evidence shape.
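A minimal sketch of the sidecar pattern, assuming a per-language filename tag. The exact naming convention is tenant-specific, and the asset ID and language tags here are invented:

```python
from pathlib import Path

def write_vtt(cues, path):
    """Write a minimal WebVTT sidecar file from (start, end, text) cues."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{start} --> {end}")
        lines.append(text)
        lines.append("")
    Path(path).write_text("\n".join(lines), encoding="utf-8")

# One sidecar per language, language-tagged in the filename (illustrative
# convention only -- match whatever your tenant's asset workflow expects).
cues = [("00:00:01.000", "00:00:04.000", "Welcome to the compliance module.")]
for lang in ("en-US", "de-DE", "ja-JP"):
    write_vtt(cues, f"asset-4711.{lang}.vtt")
```

The same cue list feeds every language-tagged file here only for brevity; in practice each language track carries its own translated cue text.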

What Cornerstone is, and where in the workflow captioning lands

Cornerstone OnDemand is a multi-product enterprise talent-management platform; the Learning module is what L&D operations leads care about in the captioning conversation. Distinguishing characteristics:

Captioning lands at three points:

  1. Direct video upload into the Content Library, with a sidecar caption file uploaded alongside.
  2. SCORM/xAPI/AICC content packaging, where captions are embedded in the publish artefact.
  3. External-hosted video referenced from a Content Library entry, where captions live with the host (Vimeo, Wistia, Kaltura, Panopto).
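The three-way routing can be sketched as a small decision function. The asset-type labels are hypothetical, not Cornerstone API values:

```python
from enum import Enum

class CaptionPath(Enum):
    SIDECAR_UPLOAD = "sidecar caption file uploaded alongside the video asset"
    EMBEDDED_IN_PACKAGE = "captions embedded in the SCORM/xAPI/AICC publish artefact"
    HOST_SIDE = "captions live with the external host (Vimeo, Wistia, Kaltura, Panopto)"

def caption_path(asset_type: str) -> CaptionPath:
    """Route an asset to its captioning point (illustrative type labels)."""
    if asset_type == "direct-video":
        return CaptionPath.SIDECAR_UPLOAD
    if asset_type in ("scorm-1.2", "scorm-2004", "xapi", "aicc"):
        return CaptionPath.EMBEDDED_IN_PACKAGE
    if asset_type == "external-video":
        return CaptionPath.HOST_SIDE
    raise ValueError(f"unknown asset type: {asset_type}")
```

The operational point the routing makes explicit: only the first path involves a caption upload into Cornerstone itself; the other two fix captions upstream of the LMS.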

The Cornerstone caption-upload mechanic

The vocabulary surface at enterprise scale

Cornerstone tenants concentrate every vocabulary surface we measure. Common patterns at the 5,000-to-100,000-employee scale:

The audit-evidence shape at enterprise scale

Cornerstone tenants face a denser audit calendar than mid-market LMS tenants:

The captioning provenance log per asset — caption source (vendor + glossary version), reviewer name and role, review date, glossary term count — is the audit-evidence shape. Cornerstone's per-asset metadata fields permit this log to live in the LMS rather than in a parallel spreadsheet, which is the operational reason to land the caption-provenance metadata at the LMS layer.
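A minimal sketch of the per-asset provenance record, assuming flat string metadata fields; every field name and value here is illustrative, to be mapped onto whatever custom asset fields the tenant defines:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class CaptionProvenance:
    """Per-asset captioning provenance record (illustrative field names)."""
    asset_id: str
    caption_vendor: str
    glossary_version: str
    reviewer_name: str
    reviewer_role: str
    review_date: date
    glossary_term_count: int

record = CaptionProvenance(
    asset_id="4711",                    # hypothetical asset
    caption_vendor="ExampleCaptionCo",  # hypothetical vendor
    glossary_version="2024-Q3-r2",
    reviewer_name="A. Reviewer",
    reviewer_role="clinical SME",
    review_date=date(2024, 9, 30),
    glossary_term_count=184,
)

# Flatten to strings, the lowest common denominator for metadata fields.
payload = {k: str(v) for k, v in asdict(record).items()}
```

Defining the record as a frozen dataclass keeps the audit log append-only in spirit: a re-review produces a new record rather than mutating the old one.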

The glossary-biased workflow at enterprise scale

  1. Pull the enterprise's controlled vocabulary across business units. The federation problem at enterprise scale is real: a multinational SaaS has different feature catalogues per division; a multinational healthcare system has different drug formularies per country; a multinational manufacturer has different equipment catalogues per region. The glossary-biased workflow needs OU-aware vocabulary management — the right glossary for the right asset.
  2. Establish per-OU caption-language defaults. Cornerstone's OU-scoped configuration handles this; the captioning workflow needs a corresponding per-OU language and glossary mapping.
  3. Process the back-catalogue first. At enterprise scale, the back-catalogue is typically thousands of training assets. The retrofit pattern: enumerate via the Cornerstone Reporting layer, prioritise by training-frequency-times-audit-risk, batch through the captioning workflow with the correct OU-scoped glossary, replace caption tracks via the asset-management API.
  4. SME / clinical / engineering reviewer pass. The reviewer step is non-optional for audit-relevant enterprise content. The amber-highlight UI shows every glossary-applied term in context with source-line provenance. The reviewer is OU-scoped: the engineering reviewer for engineering content, the clinical reviewer for clinical content, the legal-compliance reviewer for compliance content.
  5. Upload caption tracks via the asset-management API. For sub-thousand-asset deployments, the Cornerstone admin UI is fine. For larger retrofits, the asset-management API path is the operational answer; the SCORM/xAPI re-ingestion path handles authored content.
  6. Document captioning provenance per asset in Cornerstone's per-asset metadata. Custom asset-fields capture caption source, glossary version, reviewer, review date, glossary term count. The Reporting layer can produce the audit-evidence packet on demand.
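The prioritisation in step 3 (training frequency times audit risk) can be sketched as follows; the asset records are invented placeholders for what the Reporting layer would actually return:

```python
# Back-catalogue prioritisation: training frequency times audit risk,
# highest first, so audit-relevant assets land earliest in the retrofit.
assets = [
    {"id": "a1", "title": "Forklift safety",     "annual_completions": 12000, "audit_risk": 0.9},
    {"id": "a2", "title": "Brand guidelines",    "annual_completions": 3000,  "audit_risk": 0.1},
    {"id": "a3", "title": "HIPAA workforce 101", "annual_completions": 8000,  "audit_risk": 1.0},
]

def priority(asset):
    return asset["annual_completions"] * asset["audit_risk"]

retrofit_queue = sorted(assets, key=priority, reverse=True)
```

The audit-risk weighting is a judgment call per tenant; the point of the product is that a rarely-taken but audit-critical course can still outrank a popular low-stakes one.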


Cornerstone-specific notes for the captioning RFP

Enterprise procurement teams running a captioning RFP for a Cornerstone-hosted training catalogue will expect the responses to address several Cornerstone-specific surfaces. From our captioning RFP template, the questions that have a Cornerstone-specific shading:

How Cornerstone captions intersect ADA Title II / III, Section 508, EAA, and OFCCP flow-down

Cornerstone tenants typically face several accessibility regimes simultaneously:

The technical caption requirement at WCAG SC 1.2.2 is consistent across regimes; the audit mechanism differs. The captioning provenance log per asset, plus the embedded-or-uploaded caption track, satisfies the technical bar in all of them simultaneously.

Related questions

Does Cornerstone have an in-platform auto-caption feature?

Cornerstone has invested in auto-captioning capabilities through its content-asset workflow tools, with ASR generation available for direct video uploads. The same generic-ASR limitation applies — auto-generated captions on enterprise content with high proper-noun density mangle predictably. The path that holds up at audit is upstream glossary-biased captioning, with the clean caption track uploaded into the Content Library; relying on the in-platform auto-caption alone is the failure mode.
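A minimal sketch of why glossary terms matter, shown here as a post-ASR correction pass over a caption line. Real glossary-biased captioning biases the recogniser upstream rather than patching output after the fact; the mangled-form pairs below are invented examples:

```python
import re

# Controlled-vocabulary glossary: mangled ASR form -> canonical term.
glossary = {
    "cornerstone on demand": "Cornerstone OnDemand",
    "scorm": "SCORM",
    "x api": "xAPI",
}

def apply_glossary(line: str, glossary: dict[str, str]) -> str:
    """Replace known mangled forms with canonical glossary terms."""
    for mangled, term in glossary.items():
        line = re.sub(rf"\b{re.escape(mangled)}\b", term, line, flags=re.IGNORECASE)
    return line

print(apply_glossary("upload the x api package to cornerstone on demand", glossary))
```

Even this toy pass shows the failure mode the section describes: without the glossary, proper nouns and acronyms pass through in whatever lowercase, split-word form the recogniser emitted.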

How does Cornerstone differ from Workday Learning for captioning?

Workday Learning is the LMS option for Workday-HCM-running organisations; the captioning surface (caption-track upload alongside video assets, SCORM/xAPI ingestion for authored content) is similar in shape but less mature in tooling. Cornerstone has the deeper authoring-tool ecosystem and the mature Content Library asset model; Workday Learning has tighter Workday-platform integration. The upstream glossary-biased captioning workflow is identical for both.

What about Saba content that pre-dates the Cornerstone acquisition?

Saba content migrated to Cornerstone retains its caption tracks where the source asset had them; Saba-authored content without captions lands in the Cornerstone tenant as a back-catalogue retrofit candidate. The retrofit pattern is the same as for any Cornerstone back-catalogue.

Does Cornerstone support automated captioning provenance metadata at the asset level?

Cornerstone supports custom asset metadata fields, configurable per OU. The captioning provenance log (vendor, glossary version, reviewer, date, glossary term count) maps to custom fields with no platform restriction. The operational concern is to define the field schema once at deployment level, not per asset.

What's the practical batch size for a Cornerstone back-catalogue retrofit?

The asset-management API rate limits and the captioning vendor's batch throughput are the practical constraints, not Cornerstone-side limits on caption-track upload. A typical large-enterprise retrofit (3,000-10,000 training assets) runs over 6-12 weeks with parallel reviewer pipelines, with prioritisation by training-frequency-times-audit-risk to land the audit-relevant assets first.
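A back-of-envelope throughput sketch for sizing the retrofit window; every rate here is an assumption, to be replaced with the tenant's actual API limits and the vendor's measured batch throughput:

```python
import math

# Placeholder planning numbers for a mid-sized retrofit (assumptions only).
total_assets = 6000
vendor_assets_per_week = 750   # captioning-vendor pipeline throughput (assumed)
api_uploads_per_hour = 200     # asset-management API budget (assumed)

weeks_for_vendor = math.ceil(total_assets / vendor_assets_per_week)
hours_for_upload = math.ceil(total_assets / api_uploads_per_hour)

print(f"vendor pipeline: ~{weeks_for_vendor} weeks")
print(f"API upload time: ~{hours_for_upload} hours spread across the run")
```

At these assumed rates the vendor pipeline, not the LMS-side upload, is the bottleneck, which matches the observation above that Cornerstone-side caption-track upload is not the constraint.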

Further reading