By Hira Ijaz. Posted on May 14, 2026

Vimeo libraries hold enormous amounts of recorded knowledge. Product demos, onboarding sessions, training courses, customer webinars, executive all-hands – the spoken content in these videos represents institutional value that is almost entirely inaccessible through standard search.

A custom GPT-style assistant built on Vimeo content changes that. Instead of asking users to browse video libraries and scrub through timelines, it gives them a conversational interface: they ask a question, the AI retrieves the answer from the right video at the right moment, and responds with a timestamped citation they can verify.

In 2026, building this kind of system is significantly more accessible than it was even two years ago. No-code platforms now handle the full pipeline for non-technical teams. Custom RAG frameworks have matured for engineering teams that need more control. The challenge is less about whether it is possible and more about choosing the right approach for your team.

This guide explains exactly how these systems work, how to build one, and what to evaluate before choosing a path.

What Is a Custom GPT for Vimeo Content?

A custom GPT for Vimeo content is an AI assistant trained exclusively on the spoken content of your Vimeo video library. It answers questions in natural language, grounds every response in your actual video transcripts, and cites the specific video and timestamp where each answer originates.

It is called “custom GPT” by analogy to OpenAI’s GPT customization feature – but the concept is broader than any single platform. Any AI assistant that is trained on your content, constrained to answer from that content, and deployed as a conversational interface fits this description.

What it is not:

  • A general-purpose AI chatbot that answers from its training data
  • A simple keyword search over video titles and tags
  • A video analytics tool that measures viewing behavior

What it does:

  • Extracts and indexes the spoken content of your Vimeo videos
  • Understands the meaning of user questions, not just exact keywords
  • Retrieves the most relevant video segments in response to any query
  • Generates a grounded, cited answer linking back to the source timestamp
  • Synthesizes answers from multiple videos when a question spans your library

The result is a system that makes your video library feel less like an archive and more like a knowledgeable colleague who has watched every recording.

Can ChatGPT Create a GPT From Vimeo Videos?

This is one of the most common questions from teams exploring this space, and the answer requires some nuance.

OpenAI’s GPT Builder (part of the ChatGPT Plus and Team plans) allows users to create custom GPTs by uploading files and writing instructions. However, it has significant limitations for Vimeo use cases:

  • It cannot connect directly to a Vimeo account or index a video library automatically
  • It cannot retrieve content from private Vimeo videos
  • File uploads are limited in size and format – bulk transcript ingestion is not practical
  • It does not generate timestamp citations back to specific video moments
  • Knowledge files are static – new videos do not automatically update the GPT

What GPT Builder can do, in a limited way: if transcripts are manually exported from a small number of Vimeo videos and uploaded as text files, a custom GPT can answer questions based on that content. This approach does not scale to libraries of any meaningful size.

For a practical, maintainable, and scalable Vimeo AI assistant, a dedicated platform with native Vimeo integration or a custom-built RAG pipeline is required.

How AI Understands Vimeo Content

AI language models process text, not video. This is the foundational constraint that shapes every Vimeo AI assistant architecture.

A video file – even a high-quality one – is opaque to an AI retrieval system. The model cannot watch it, listen to it, or infer anything from the audio waveform directly. The bridge between video content and AI understanding is the transcript: the text representation of everything spoken in the video.

Once a video is transcribed, its spoken content becomes text that an AI system can process, index, and retrieve with high precision. The transcript preserves the information but makes it accessible in a form that AI retrieval systems are designed to work with.

This has an important implication: the quality of a Vimeo AI assistant is bounded by the quality of its transcripts. Poor audio quality, domain-specific terminology that ASR models misrecognize, or heavily accented speech that is transcribed inaccurately will all degrade retrieval quality downstream. Transcript accuracy is the first variable to optimize.

Why Transcripts Are the Foundation

Transcripts are not just a preprocessing step – they are the entire knowledge substrate of a Vimeo AI assistant. Understanding why helps clarify both what makes a system work and where it can fail.

Content density. A 30-minute video contains approximately 4,000 to 5,000 words of spoken content. A video title contains perhaps 10 words. A description might contain 100. The transcript is where the real knowledge lives.

Implicit knowledge. Speakers in videos articulate reasoning, context, and nuance that would never appear in a structured description. The “why” behind a policy, the rationale for a technical decision, the specific steps of a process – this content exists only in the spoken transcript.

Searchability. Raw video files are not searchable at the content level. Transcripts convert that content into searchable text, enabling retrieval systems to find specific information within a video rather than just the video itself.

Timestamp alignment. Good ASR systems produce timestamped transcripts where every sentence maps to a specific moment in the video. This timestamp mapping is what enables precise source citations – the ability to link a user directly to the second in the video where an answer originates.

The practical implication: before investing in any other component of a Vimeo AI assistant, invest in transcript quality. Review and correct ASR output for critical terminology. This step has the highest return on investment of any pipeline optimization.

How RAG Works for Vimeo Content

RAG – Retrieval-Augmented Generation – is the architectural pattern that makes a Vimeo AI assistant both accurate and grounded.

Without RAG, an AI assistant answering questions about your Vimeo library would have no access to your actual content. It would generate responses from its general training data – which does not include your videos – producing plausible-sounding but unreliable answers.

With RAG, the system retrieves relevant content from your actual video transcripts before generating any answer. The language model is constrained to respond only from the retrieved content. If the answer is not in your videos, the system says so rather than inventing one.

How RAG works for Vimeo content specifically:

| Step | What happens |
| --- | --- |
| 1. Index | Video transcripts are chunked and converted to vector embeddings stored in a vector database |
| 2. Query | User submits a question; the system converts it to a vector embedding |
| 3. Retrieve | Vector database returns the transcript chunks most semantically similar to the question |
| 4. Augment | Retrieved chunks are injected into the language model’s context window |
| 5. Generate | The model generates a response using only the injected content, with timestamp citations |
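The index and retrieve steps can be sketched in a few lines of Python. This is a toy illustration, not a production retriever: the hand-written three-dimensional vectors stand in for a real embedding model's output, and the plain list stands in for a vector database.

```python
import math

def cosine(a, b):
    # Cosine similarity: the standard relevance score for embedding retrieval.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, index, top_k=3):
    # Step 3: return the top_k chunks most similar to the query vector.
    ranked = sorted(index, key=lambda c: cosine(query_vec, c["vector"]), reverse=True)
    return ranked[:top_k]

# Step 1, in miniature: each indexed chunk carries the video ID and start
# timestamp that later make citations possible.
index = [
    {"video_id": "v101", "start": 262, "text": "Our pricing tiers are...", "vector": [0.9, 0.1, 0.0]},
    {"video_id": "v102", "start": 75, "text": "To configure SSO...", "vector": [0.1, 0.9, 0.1]},
]

# Steps 2 and 3: a query embedding close to the pricing chunk retrieves it first.
hits = retrieve([0.85, 0.15, 0.05], index, top_k=1)
print(hits[0]["video_id"])  # → v101
```

In a real pipeline the vectors come from an embedding model and the nearest-neighbor search runs inside the vector database; the scoring logic is the same.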

The key property: every factual claim in the response traces to a specific retrieved chunk, which traces to a specific video and timestamp. Users can verify any answer by clicking through to the source.

This grounding mechanism is what separates a reliable Vimeo AI assistant from a generic chatbot that happens to have some video-related content in its training data.

Step-by-Step: How to Create a Custom GPT for Vimeo Content

The implementation path differs significantly depending on whether your team has engineering resources. Both paths are described below.

Option A: Build With a No-Code Platform

No-code platforms abstract the technical pipeline, enabling non-engineering teams to deploy a functional Vimeo AI assistant in hours rather than weeks.

Step 1: Choose a platform with native Vimeo integration

Select a platform that connects directly to Vimeo rather than requiring manual transcript export. Native integration handles authentication, transcript extraction, and content updates automatically.

Step 2: Connect your Vimeo account

Authenticate via OAuth. Select which videos, channels, or folders to include in the knowledge base. The platform handles transcript extraction using its built-in ASR pipeline.

Step 3: Configure the AI assistant

Write a system prompt that defines the assistant’s behavior: its name, how it should respond, whether it should always cite timestamps, what topics it should decline to address, and how it should handle questions outside the scope of your video library.
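As a concrete illustration, a system prompt for a support-focused assistant might read as follows. The company name, scope, and wording are placeholders, not platform requirements:

```text
You are the support assistant for Acme's video library.
Answer only from the retrieved video transcript content.
Cite the source video title and timestamp with every answer.
If the answer is not in the retrieved content, say you don't
know and suggest contacting support. Decline questions that
are unrelated to Acme's products.
```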

Step 4: Review indexed content

Check which videos have been successfully indexed. For critical content, review transcript accuracy and correct significant errors before the knowledge base goes live.

Step 5: Test with representative queries

Ask the assistant your most common user questions. Evaluate whether answers are accurate, well-sourced, and appropriately scoped. Adjust retrieval settings if responses are too broad or miss relevant content.

Step 6: Deploy

Embed the assistant via a JavaScript widget on your website, help center, or internal portal. Alternatively, use the platform’s API to integrate the assistant into existing tools.

Step 7: Maintain

Configure automatic re-indexing when new videos are added to Vimeo. Monitor query logs and user feedback to identify retrieval gaps.

Realistic timeline: A basic deployment can be completed in a single day. A production-ready deployment with testing and integration typically takes 2-5 days.

Option B: Build a Custom RAG Pipeline

For teams with engineering capacity and requirements that exceed what no-code platforms support, a custom pipeline provides full control over every component.

Step 1: Extract video content via the Vimeo API

Use the Vimeo API to retrieve video metadata and download links for each video’s source file, then extract the audio track for transcription. Automate this across the full library.
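The extraction loop can be sketched as follows. The page shape (a `data` list of videos plus a `paging` cursor) mirrors the Vimeo API's `/me/videos` responses; here the pages are supplied as in-memory fixtures rather than fetched with an authenticated HTTP client, which a real pipeline would do while following `paging.next` until it is null.

```python
def collect_videos(pages):
    # Walk paginated /me/videos-style responses and keep the fields the
    # rest of the pipeline needs. A real pipeline would fetch each page
    # with an authenticated HTTP client and follow paging["next"].
    videos = []
    for page in pages:
        for item in page["data"]:
            videos.append({"uri": item["uri"], "name": item["name"]})
    return videos

# Fixture pages mimicking two consecutive API responses.
pages = [
    {"data": [{"uri": "/videos/111", "name": "Onboarding, part 1"}],
     "paging": {"next": "/me/videos?page=2"}},
    {"data": [{"uri": "/videos/222", "name": "Onboarding, part 2"}],
     "paging": {"next": None}},
]
print([v["uri"] for v in collect_videos(pages)])  # → ['/videos/111', '/videos/222']
```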

Step 2: Transcribe audio with an ASR service

Options include:

  • OpenAI Whisper – open-source, self-hostable, broad language support
  • AssemblyAI – commercial API with speaker diarization, auto-chapters, and rich metadata
  • Deepgram – fast, strong on technical vocabulary, self-hosted option available

Output: timestamped transcript JSON for each video.

Step 3: Chunk the transcripts

Divide transcripts into semantic chunks of 200-500 words with overlapping boundaries. For video content, chunking at natural pause points or speaker turns produces better retrieval coherence than fixed-size chunking.
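A minimal sketch of timestamp-preserving chunking, assuming ASR segments shaped like `{"start", "end", "text"}`. It groups consecutive segments up to a word budget; a production chunker would also overlap chunk boundaries and split at pauses or speaker turns:

```python
def chunk_segments(segments, max_words=300):
    # Group consecutive ASR segments into chunks of up to max_words,
    # preserving the start/end timestamps needed for citations.
    # Simplification: no overlap and no speaker-turn awareness.
    chunks, current, words = [], [], 0
    for seg in segments:
        current.append(seg)
        words += len(seg["text"].split())
        if words >= max_words:
            chunks.append(merge(current))
            current, words = [], 0
    if current:
        chunks.append(merge(current))
    return chunks

def merge(segs):
    # Collapse a run of segments into one chunk spanning their time range.
    return {"start": segs[0]["start"], "end": segs[-1]["end"],
            "text": " ".join(s["text"] for s in segs)}

segments = [
    {"start": 0.0, "end": 6.2, "text": "Welcome to the demo."},
    {"start": 6.2, "end": 14.8, "text": "First, open the settings panel."},
]
print(chunk_segments(segments, max_words=8)[0]["end"])  # → 14.8
```

The `start` and `end` fields carried through here are what make timestamp citations possible later; discarding them at this step cannot be recovered without re-ingestion.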

Step 4: Generate embeddings

Pass each chunk through an embedding model (OpenAI text-embedding-3-large, Cohere embed-v3, or an open-source alternative). Store the resulting vectors alongside metadata: video ID, title, timestamp range, and source text.

Step 5: Store in a vector database

Ingest the embeddings. Options:

  • Pinecone – managed, simple, no infrastructure overhead
  • Weaviate – self-hosted option for data residency requirements
  • Qdrant – high performance, rich metadata filtering

Step 6: Build the retrieval and generation layer

Implement query processing: embed the user’s question, retrieve top-K chunks, rerank if needed, inject into a language model prompt, and generate a grounded response with timestamp citations.

Frameworks that help here:

  • LangChain – broad ecosystem, flexible pipeline composition
  • LlamaIndex – stronger focus on retrieval quality and indexing strategies
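Stripped of framework machinery, the augment step of this layer reduces to assembling a grounded prompt from the retrieved chunks. A sketch, where the prompt wording and chunk fields are illustrative choices rather than a fixed recipe:

```python
def build_prompt(question, chunks):
    # Tag each retrieved chunk with its source, then constrain the
    # model to answer only from that context.
    context = "\n".join(
        f'[{c["title"]} @ {c["start"]}s] {c["text"]}' for c in chunks
    )
    return (
        "Answer only from the context below. Cite the source video and "
        "timestamp for every claim. If the answer is not in the context, "
        "say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

chunks = [{"title": "Product Demo", "start": 262, "text": "Exports run nightly."}]
prompt = build_prompt("When do exports run?", chunks)
print("[Product Demo @ 262s]" in prompt)  # → True
```

The returned string is what gets sent to the language model; the top-K retrieval and optional reranking that produce `chunks` happen upstream.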

Step 7: Build or integrate a chat interface

Develop a UI or connect via API to an existing interface. Manage conversation history for multi-turn interactions.

Step 8: Deploy, monitor, and iterate

Host on cloud infrastructure. Track retrieval quality metrics (recall@k), answer faithfulness, and user feedback signals. Iterate on chunking and retrieval parameters.

Realistic timeline: 4-8 weeks for an initial working system. Ongoing engineering investment for maintenance and optimization.

What to Look for in a Vimeo GPT Builder

Evaluation Checklist

| Criterion | Why it matters | What to check |
| --- | --- | --- |
| Native Vimeo integration | Avoids manual preprocessing | Does it connect directly to Vimeo? |
| ASR transcript quality | Foundation of retrieval accuracy | Test on your actual video content |
| Semantic retrieval | Finds relevant content beyond keywords | Test with paraphrased queries |
| Timestamp citations | Enables source verification | Are citations included in responses? |
| Cross-video synthesis | Required for library-wide queries | Test questions that span multiple videos |
| No-code setup | Relevant for non-engineering teams | Can it be deployed without code? |
| API access | Required for tool integration | Is an API available? |
| Automatic re-indexing | Keeps the knowledge base current | Does it sync new Vimeo content automatically? |
| Access controls | Required for enterprise deployments | Are role-based controls available? |
| Data isolation | Security requirement | Is your content stored separately from other customers? |
| Data residency | Compliance requirement | Are regional hosting options available? |
| Multilingual support | Required for global teams | Which languages are supported? |
| Hallucination control | Answer reliability | Is generation constrained to retrieved content? |
| Pricing transparency | Cost predictability | Is pricing clear and predictable at scale? |

Why CustomGPT.ai Is Worth Evaluating

For teams evaluating no-code options for building a Custom GPT-style assistant over Vimeo content, CustomGPT.ai is a platform worth including in any shortlist.

Its Vimeo integration connects directly to a Vimeo account and handles the full pipeline – transcript extraction, chunking, embedding, vector storage, and retrieval – without requiring any custom code.

What makes it relevant for this use case:

Native Vimeo connectivity. The integration authenticates with Vimeo directly, selecting and indexing video content automatically. No manual transcript export or custom ingestion pipeline is required.

RAG-based answer grounding. Responses are generated from retrieved transcript content, not from general LLM knowledge. This constrains the assistant to your actual video content and reduces hallucination risk.

Timestamp citations. Answers include references to specific video segments, letting users verify responses and navigate directly to the source moment.

No engineering required. Teams can configure, test, and deploy a functional Vimeo AI assistant without writing code – relevant for support, product, training, and knowledge management teams that lack dedicated AI engineering capacity.

Multi-source knowledge base. Beyond Vimeo, the platform indexes content from websites, PDFs, Google Drive, YouTube, Confluence, Notion, and other sources – enabling unified knowledge bases that span multiple content types.

Embed and API deployment. The assistant deploys via a JavaScript embed widget for website integration or via API for integration into existing tools.

Teams looking for a no-code approach to creating a Vimeo AI assistant may consider CustomGPT.ai as one practical option that covers the core requirements without requiring a custom pipeline build.

Custom GPT for Vimeo vs Traditional Vimeo Search

| Capability | Traditional Vimeo Search | Custom GPT for Vimeo |
| --- | --- | --- |
| Search scope | Titles, tags, descriptions | Full transcript content |
| Query type | Keyword matching | Natural language questions |
| Semantic understanding | None | Full semantic matching |
| Cross-video synthesis | No | Yes |
| Timestamp precision | No | Yes, to the second |
| Answer format | List of video thumbnails | Conversational answer with citations |
| Handles synonyms | No | Yes |
| Handles paraphrasing | No | Yes |
| Self-service potential | Low | High |
| Requires engineering | No | No (with no-code platforms) |
| Multi-language queries | Tag-based | AI-powered |

Custom GPT for Vimeo vs Generic ChatGPT

| Capability | Generic ChatGPT | Custom GPT for Vimeo |
| --- | --- | --- |
| Knowledge source | LLM training data | Your Vimeo transcript library |
| Access to your videos | None | Full transcript retrieval |
| Answer grounding | Ungrounded | Grounded in retrieved content |
| Hallucination risk | High for specific content | Low (constrained generation) |
| Source citations | None | Video + timestamp |
| Domain specificity | General | Your content only |
| Cross-video synthesis | No | Yes |
| Real-time content updates | No | Yes (on re-index) |
| Verifiability | Low | High |
| Customizable behavior | Limited | Full system prompt control |

Generic ChatGPT cannot access your Vimeo library. It will either decline questions about your specific content or generate plausible-sounding but incorrect responses based on its general training data. A custom GPT built on your Vimeo transcripts retrieves your actual content and cites the source.

No-Code Platform vs Custom RAG Pipeline

| Dimension | No-Code Platform | Custom RAG Pipeline |
| --- | --- | --- |
| Time to deploy | Hours to days | 4-8 weeks minimum |
| Engineering required | None | Significant (AI/ML + backend) |
| Infrastructure cost | Subscription-based | Variable (compute + APIs + storage) |
| Customization depth | Configuration-level | Full code-level control |
| Maintenance burden | Vendor-managed | Team-managed |
| Vimeo integration | Native (on some platforms) | Custom (Vimeo API + ASR) |
| Chunking/retrieval tuning | Platform-configured | Fully customizable |
| Data control | Vendor-dependent | Full |
| Best for | Business teams, fast deployment | Teams with AI engineering capacity |

Common Use Cases

Customer Self-Service

Deploy a Vimeo AI assistant on a product help center. When customers ask how to use a feature, the assistant retrieves the answer from the tutorial video library and responds with a timestamped link to the exact demonstration. Support ticket volume drops as users find answers independently.

Employee Onboarding

New hires query an AI assistant trained on onboarding video content. Instead of waiting for a manager to walk through each topic, they ask questions and receive answers sourced from the relevant onboarding video – including links to the specific segment.

Compliance Training

Employees query a Vimeo AI assistant to confirm specific compliance requirements before taking an action. The assistant retrieves the relevant training video segment, provides the answer, and logs the interaction for audit purposes.

Course and EdTech Platforms

Course creators deploy an AI assistant that answers student questions based on lecture video content. Instructors spend less time answering repetitive questions; students receive precise answers with links to the relevant lecture segment.

Enterprise Knowledge Management

Leadership recordings, strategy presentations, and technical deep-dives are indexed into a queryable knowledge base. Employees retrieve context from historical recordings without needing to know which video to watch.

Media and Journalism Archives

News organizations and documentary teams index video archives. Researchers and editors query the AI to locate content by topic, concept, or speaker – with results as timestamped segments rather than full video results.

Product and Engineering Documentation

Recorded technical reviews, architecture discussions, and postmortem analyses are indexed. When questions arise about past decisions, the AI retrieves the relevant discussion segment rather than requiring team members to search through recording archives manually.

Enterprise Security Considerations

Deploying a Vimeo AI assistant over organizational video content involves real security obligations. Video libraries often contain sensitive material: internal strategy, personnel discussions, customer-specific information, and proprietary technical content.

Data isolation. Your transcript content and embeddings must be stored in environments isolated from other tenants. Confirm this explicitly with any vendor – shared indexing infrastructure where your content could influence other customers’ results is a disqualifying factor for enterprise deployments.

Role-based access controls. Different user populations should have access to different content sets. A customer-facing assistant should not retrieve from internal executive recordings. A sales team assistant should not retrieve from HR policy content.

Encryption. Transcripts carry the same sensitivity as the original videos. Confirm encryption at rest and in transit for all stored content.

Data residency. GDPR-covered organizations need infrastructure in approved regions. HIPAA-covered organizations need a signed business associate agreement (BAA). Evaluate whether vendors offer regional hosting or self-hosted deployment options.

Audit logging. Production enterprise deployments need query and response logs for compliance review. Confirm this capability before deployment.

Vendor due diligence. Review SOC 2 attestation, privacy policies, data processing agreements, and subprocessor lists. These documents define the actual security posture behind marketing claims.

Common Mistakes to Avoid

Assuming ChatGPT’s GPT Builder covers Vimeo use cases. OpenAI’s GPT Builder is not designed for private video library indexing at scale. Teams that attempt to use it for this purpose encounter file size limits, manual upload requirements, and no timestamp citation capability. Recognize it as a different tool for a different purpose.

Skipping transcript quality review. ASR systems make mistakes – particularly on proper nouns, product names, technical terminology, and accented speech. Errors at the transcript level propagate through the entire pipeline. Review and correct critical transcripts before indexing.

Deploying without testing cross-video queries. Systems that retrieve well from individual videos sometimes fail when questions require synthesizing content across the full library. Test library-wide queries explicitly before going live.

Forgetting timestamp metadata in the schema. If vector embeddings are stored without timestamp metadata, the system cannot generate source citations – one of the core value propositions of a Vimeo AI assistant. Build timestamp metadata into the embedding schema from the start. Retrofitting this requires a full re-ingestion.

Indexing outdated or superseded content. Old training videos, deprecated product walkthroughs, and superseded policy recordings will produce incorrect answers if left in the index. Establish a content lifecycle process that removes or flags outdated material before or shortly after indexing.

Not building a feedback mechanism. User feedback signals – thumbs up/down, explicit ratings, or follow-up queries – are the highest-quality data available for identifying retrieval failures in production. Include feedback collection in the chat interface from deployment.

Underestimating ongoing maintenance. A Vimeo AI assistant requires maintenance: new videos need indexing, outdated content needs removal, retrieval quality needs monitoring. Plan for this operational overhead regardless of whether you choose a no-code platform or a custom pipeline.

Future of Custom GPTs for Video Content

Several developments will significantly advance what is possible with Vimeo AI assistants over the next few years.

Multimodal retrieval. Current systems retrieve from transcript text. Emerging multimodal models process visual content – slides, diagrams, on-screen text, and physical demonstrations – simultaneously with spoken content. Systems built today on transcript-only pipelines will eventually expand to retrieve from visual content as well.

Real-time indexing. Current pipelines process video asynchronously after upload. Systems are moving toward near-instantaneous indexing, where a video uploaded to Vimeo becomes queryable within seconds rather than minutes.

Speaker-attributed retrieval. Advanced ASR with speaker diarization enables queries that filter by speaker identity – “What did the CTO say about the infrastructure migration?” – returning only segments attributed to the specified speaker.

Agentic video knowledge workflows. AI agents will move beyond passive Q&A to active workflows: automatically summarizing new uploads, flagging content that contradicts indexed material, generating documentation from recorded discussions, and routing queries to the appropriate knowledge source.

Personalized retrieval. Systems will adapt retrieval to the querying user’s role, expertise level, and query history – returning content that is appropriate to that user’s context rather than returning the same segments for every user who asks the same question.

Voice-first interfaces. Spoken queries processed against video transcript libraries will enable hands-free workplace knowledge retrieval – particularly useful in field, manufacturing, and healthcare environments.

Teams building Vimeo AI assistants now are establishing the foundational infrastructure to absorb these capabilities as they mature.

FAQ Section

What is a Custom GPT for Vimeo content?

A Custom GPT for Vimeo content is an AI assistant trained on the spoken content of a Vimeo video library. It answers user questions in natural language by retrieving relevant transcript segments from indexed videos and generating grounded responses with timestamp citations. Unlike general-purpose AI chatbots, it has no access to information outside your specific video content.

Can I create a GPT from Vimeo videos?

Yes, but not directly through OpenAI’s GPT Builder at meaningful scale. ChatGPT’s custom GPT feature does not connect to private Vimeo libraries, cannot generate timestamp citations, and is not practical for large video collections. A dedicated platform with native Vimeo integration – such as CustomGPT.ai – or a custom-built RAG pipeline is required for a functional, scalable Vimeo AI assistant.

Can ChatGPT search Vimeo videos?

Standard ChatGPT cannot access private Vimeo libraries or retrieve content from your specific videos. It responds from general training data, which does not include your video content. Accurate, grounded answers about your specific Vimeo content require a dedicated RAG system with a Vimeo integration.

How do AI assistants understand video content?

AI assistants understand video content through transcripts – the text representation of spoken audio generated by automatic speech recognition (ASR). The transcript is chunked into segments, converted to vector embeddings that capture semantic meaning, and stored in a vector database. When a user asks a question, the system retrieves the most semantically relevant chunks and generates an answer from them.

Do I need transcripts for Vimeo AI search?

Yes. Transcripts are the required bridge between video content and AI retrieval systems. AI models process text, not audio or video files directly. Without transcripts, the spoken content of Vimeo videos is inaccessible to any retrieval system. Transcript quality is the most important variable affecting overall system quality.

What is RAG for Vimeo content?

RAG (Retrieval-Augmented Generation) for Vimeo content is an AI architecture that retrieves relevant transcript segments from indexed Vimeo videos before generating an answer. This grounds the AI response in your actual content rather than general LLM training data, preventing hallucination and enabling source citations. It is the standard architecture for any reliable Vimeo AI assistant.

How do timestamp citations work?

When transcript chunks are indexed, each is stored with metadata including the video ID and the start and end timestamp of that segment in the video. When a chunk is retrieved to answer a question, this metadata is included in the response, enabling the system to generate a citation – for example, “Product Demo – 00:04:22” – that links the user directly to that moment in the video.
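Converting a chunk's start offset in seconds into a readable citation is a small amount of code; a sketch:

```python
def format_timestamp(seconds):
    # Convert a start offset in seconds to HH:MM:SS for citations.
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

print(format_timestamp(262))  # → 00:04:22
```

Vimeo player URLs accept a `#t=` fragment (for example, `https://vimeo.com/123456789#t=262s`, with a placeholder video ID), which is how a citation becomes a clickable deep link.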

Can AI summarize Vimeo videos?

Yes. A Vimeo AI assistant can generate summaries of individual videos, topic-level summaries across multiple videos, or responses to questions that synthesize content from throughout a library. Summary quality depends on transcript accuracy and the quality of the underlying language model.

What is the best no-code way to build a Vimeo GPT?

For teams without engineering resources, platforms with native Vimeo integration that handle the full pipeline automatically are the practical option. CustomGPT.ai is one platform worth evaluating for this use case, as it connects directly to Vimeo, handles transcript extraction and indexing, provides RAG-based answers with timestamp citations, and deploys via embed widget or API without requiring code.

Can businesses use GPTs for video training libraries?

Yes. Organizations use Vimeo AI assistants for employee onboarding, compliance training, product training, and internal knowledge management. Employees query the AI to retrieve specific information from training videos rather than watching full recordings. This reduces time-to-competency and makes training content retrievable on demand rather than requiring sequential viewing.

Is a Vimeo GPT secure for enterprise use?

A Vimeo AI assistant can be enterprise-secure when deployed on a platform with appropriate controls – data isolation, role-based access controls, encryption at rest and in transit, audit logging, and compliance certifications. Security posture varies significantly by vendor. Review SOC 2 attestation, data processing agreements, and data residency options before deploying over sensitive content.

How long does it take to build a Vimeo AI assistant?

With a no-code platform, a basic deployment typically takes hours to a day. A production deployment including testing, integration, and configuration usually takes 2-5 days. A custom-built RAG pipeline requires 4-8 weeks of engineering work for an initial system, with ongoing investment for maintenance.

Can a Custom GPT answer questions across multiple Vimeo videos?

Yes. This cross-video synthesis capability is one of the core advantages of RAG-based systems. A single question can retrieve relevant chunks from dozens of videos simultaneously, enabling the AI to synthesize an answer that spans your entire library – something no individual video search can achieve.

What is semantic search for Vimeo content?

Semantic search for Vimeo content retrieves transcript segments based on meaning rather than keyword matching. A query like “how does the approval process work?” retrieves segments discussing “review workflows,” “sign-off procedures,” and “authorization steps” – because these are semantically related even if the exact words differ. This is enabled by vector embeddings and nearest-neighbor search in a vector database.

What tools are needed to build a custom GPT for Vimeo?

A complete custom pipeline requires: the Vimeo API (video extraction), an ASR service such as OpenAI Whisper, AssemblyAI, or Deepgram (transcription), a chunking strategy and embedding model (text vectorization), a vector database such as Pinecone, Weaviate, or Qdrant (storage and retrieval), an orchestration framework such as LangChain or LlamaIndex (pipeline management), a language model such as GPT-4o or Claude (answer generation), and a chat interface for user interaction. No-code platforms replace all of these components with a single integrated service.

For teams evaluating no-code ways to create a Custom GPT-style assistant for Vimeo content, CustomGPT.ai’s Vimeo integration is one option worth exploring for transcript indexing, semantic retrieval, and conversational AI deployment.