This page answers the most common questions about OCP miniApps®, from what they are and why to use them, to configuration details, error handling, and troubleshooting. For detailed documentation, refer to the linked articles throughout.
General Questions
What are OCP miniApps®?
OCP miniApps® are pre-packaged, natural language dialog components built on the Omilia Cloud Platform® (OCP). Each miniApp is designed to capture a specific type of information or intent from a caller - such as a date, a numeric code, or a routing intent - using state-of-the-art speech recognition (ASR) and natural language understanding (NLU). They encapsulate over 10 years of Omilia's experience in voice automation into ready-to-configure building blocks, requiring no coding or speech engineering expertise.
Who are OCP miniApps® designed for?
miniApps® are designed for anyone building conversational voice interfaces, regardless of technical background. You do not need to be a Speech Recognition or NLU engineer. A simple wizard-based UI lets you configure and deploy sophisticated conversational experiences in minutes, not months.
Why should I use OCP miniApps® instead of building a traditional IVR?
Traditional IVR systems force callers to navigate rigid menu trees or enter data in specific formats, which leads to frustration and high abandonment rates. OCP miniApps® replace this with a natural, conversational approach. Callers can state what they want in their own words, in any format - for example, saying a date as "the first of May" or "05-01" - and the system understands them. Key advantages include:
- No menu navigation required - callers can lead with intent upfront.
- Context-tuned speech recognition for dramatically higher accuracy.
- Built-in disambiguation, error handling, and confirmation flows.
- Significantly faster deployment than custom-built solutions.
- Out-of-the-box best practices from thousands of real-world deployments.
Do I need coding experience to use OCP miniApps®?
No. Configuration is done through a wizard-based user interface with drag-and-drop menus. That said, advanced users can optionally write JavaScript functions for custom validations, data transformations, and dynamic logic within miniApps where needed.
How long does it take to deploy a miniApp?
A miniApp can be configured and ready for use in a matter of minutes. Once added to a CCaaS or Orchestrator flow, it operates as a self-contained dialog component with no additional training or setup required.
Types of miniApps
What types of miniApps are available?
The following miniApp types are currently available:
- Alphanumeric - captures mixed letter and digit strings (e.g. account numbers, postal codes).
- Alpha - captures letter-only strings.
- Numeric - captures number-only strings (e.g. PINs, credit card numbers).
- Amount - captures monetary or quantitative values with unit understanding.
- Date - captures dates in flexible spoken formats and returns them in a configured format.
- Date Range - captures a start date and an end date.
- Text - captures free-form text input (chat-only).
- Intent - understands and routes caller intent across domains (Banking, Telco, Energy, Car Retail, Universal).
- YesNo - captures a yes/no response from the caller.
- Announcement - plays a dynamic announcement to the caller with no capture needed.
- Entity - captures specific entities (e.g. account type, vehicle make) using a custom or out-of-the-box NLU model.
- Ask - similar to Entity but uses next-generation NLU models.
- Intelli - processes and transforms data without calling an external endpoint.
- Web Service - makes API calls (GET, POST, PUT, DELETE) and returns processed data.
- Corpus Collection - collects utterances from callers to build training data for custom NLU models.
- Flow - a reusable building block that groups multiple miniApps, conditions, and transfers into a logical sequence (configured with Orchestrator).
What is the difference between Intent and Entity miniApps?
An Intent miniApp understands why a caller is calling and routes them accordingly (e.g. "I want to make a payment", "I lost my card"). An Entity miniApp captures specific pieces of information from a caller's response (e.g. account type = "personal", vehicle make = "Toyota"). They are often used together: an Intent miniApp to route, and Entity miniApps to collect the data needed to fulfil the request.
What is the difference between the Intelli and Web Service miniApps?
Both allow you to process data and use JavaScript functions. The key difference is that Web Service can call an external API endpoint (making GET, POST, PUT, or DELETE requests), while Intelli is a simplified version that processes data without hitting an external endpoint - useful for local data transformation and computation.
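As an illustration of the kind of local transformation an Intelli miniApp's JavaScript function might perform, here is a minimal sketch. The function name and input shape are hypothetical, not part of the OCP API:

```javascript
// Hypothetical example: format a raw digit string captured by a
// Numeric miniApp into a masked, display-friendly card number.
// No external endpoint is needed - this is pure local processing.
function maskCardNumber(rawDigits) {
  // Keep only digits, in case the input contains separators.
  const digits = String(rawDigits).replace(/\D/g, "");
  if (digits.length !== 16) {
    return null; // not a full card number
  }
  // Expose only the last four digits.
  return "**** **** **** " + digits.slice(-4);
}
```

For example, `maskCardNumber("4111 1111 1111 1111")` returns `"**** **** **** 1111"`.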
Can I use OCP miniApps® with chat as well as voice?
Most miniApps are designed for voice. The Text miniApp is the exception - it is only compatible with chat applications and uses a generic NLU model to interpret free-form written input.
Configuration
What tabs are available when configuring a miniApp?
The configuration UI is organized into tabs. Depending on the miniApp type, the following tabs are typically available:
- General - the miniApp's type, version, and other core information.
- Prompts - the main configuration of all prompts, welcome announcements, greetings, and initial discussion prompts.
- Validations - define rules to accept or reject caller input (length, pattern, list, JavaScript function, or pre-built validations).
- Agent Handling - configure behavior when the caller asks for a human agent.
- Properties - configure error thresholds, barge-in, DTMF, and other behavioral settings.
- DTMF - configure touch-tone fallback and key mappings.
- Chat - configure chat-specific behavior.
- Manage Languages - manage multilingual prompt configurations.
- User Functions - define JavaScript functions for custom data processing or logic.
How do I configure what the miniApp says to the caller?
All prompts are fully customizable. Each miniApp supports multiple prompt types including:
- Initial - the opening question asked to the caller.
- Reask - rephrasing prompts used when retrying after an error.
- Confirmation - prompts that read back what was captured and ask for confirmation.
- Rejection - prompts played when a caller disconfirms.
- Error / No Input / No Match - prompts for specific error conditions.
- Agent Request - prompts played when a caller asks for a human.
- Hold - prompts for the hold/wait functionality.
Prompts can be pre-recorded audio files, TTS-generated audio, or real-time TTS streaming.
Can I use dynamic values in prompts and announcements?
Yes. Dynamic values allow you to inject runtime data - such as captured values from previous miniApps, variables from the Orchestrator flow, or computed values from JavaScript functions - directly into prompts and announcements. This enables personalized and context-aware messages.
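As a sketch of the idea, a JavaScript user function could assemble a personalized prompt from runtime data. The variable names here (`callerName`, `dueAmount`) are illustrative stand-ins for values that would come from earlier miniApps or the Orchestrator flow, not OCP identifiers:

```javascript
// Hypothetical sketch: build a personalized prompt string from
// runtime values injected by the flow.
function buildPaymentPrompt(callerName, dueAmount) {
  return `Hi ${callerName}, your current balance is ` +
         `${dueAmount.toFixed(2)} dollars. How much would you like to pay?`;
}
```

For example, `buildPaymentPrompt("Maria", 42.5)` produces a prompt containing "42.50 dollars".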
How do I validate caller input?
The Validations tab supports several validation methods depending on the miniApp type:
- Length or date range - the input must fall within a specific numeric or date range.
- Pattern (Regex) - the input must match a regular expression.
- In list - the input must match an entry in a pre-configured list.
- Custom JavaScript function - the input is validated against a user-defined JS function.
- Pre-built validations - off-the-shelf rules for common cases like credit card numbers and US/Canadian ZIP codes.
- Amount Unit - set the unit the amount is measured in (USD, EUR, CAD, AUD, UAH, %, or No unit).
- Amount Type - set the amount type (monetary or percentage). This field is disabled if No unit is selected as the Amount Unit.
- Date Range - choose the date range of the expected caller reply by selecting a specific date, the time of the call, or a general point in the past or future.
Multiple validation rulesets can be active at the same time (e.g. accepting either a 4-digit PIN or a 16-digit credit card number).
See the Validations Tab page for more details.
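The "either a 4-digit PIN or a 16-digit credit card number" case above can be expressed as a custom JavaScript validation. This is an illustrative sketch only; a real validation function would follow the signature documented in the Validations Tab page:

```javascript
// Illustrative validation: accept either a 4-digit PIN or a
// 16-digit card number, tolerating spaces or dashes in the input.
function isValidPinOrCard(input) {
  const digits = String(input).replace(/\D/g, "");
  return /^\d{4}$/.test(digits) || /^\d{16}$/.test(digits);
}
```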
Can I use JavaScript in miniApps?
Yes. JavaScript functions can be defined in the User Functions tab and used for:
- Custom input validation.
- Data processing and transformation (e.g. formatting a date, computing a value).
- Building dynamic prompt content.
- Post-processing data returned by the Web Service or Intelli miniApp.
Speech & Recognition
How does OCP handle callers who say unexpected things?
OCP miniApps® are designed to handle this gracefully. Each miniApp type is context-tuned: the system focuses recognition on input relevant to the type of data being captured. When a response is unclear, unexpected, or outside the expected range, the miniApp uses built-in disambiguation flows to ask clarifying questions before escalating to an error state.
Can callers interrupt the system while it's speaking (barge-in)?
Yes. Barge-in can be enabled per miniApp invocation in the Properties tab. When enabled, callers can start speaking while a prompt is being played and the system will stop playback and start processing the input. This is especially useful for frequent callers who are already familiar with the prompts. It can also be disabled for prompts that the caller must hear in full.
What happens when a caller uses DTMF (keypad) instead of speaking?
OCP miniApps® support DTMF inputs. The DTMF tab allows you to map key inputs to specific features.
This ensures a smooth experience for callers migrating from legacy DTMF systems or those who prefer not to speak.
How does the system handle accents, hesitations, or filler words like "umm"?
OCP miniApps® are built on deepASR, Omilia's speech recognition engine, which is trained to handle a wide range of accents, speaking styles, filler words, and verbal mannerisms. The system is designed to capture what the caller means, not just what they literally say.
Confirmation & Error Handling
When does the system ask a caller to confirm their input?
This is configurable. There are two confirmation modes:
- Standard Confirmation - the system always asks the caller to confirm their input before proceeding.
- Dynamic Confirmation - the system uses a built-in confidence score. If the score meets or exceeds a configured threshold (e.g. 90), the system skips confirmation and proceeds directly; if below the threshold, it asks for confirmation.
Both modes support fully customizable confirmation and rejection prompts.
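The dynamic-confirmation decision can be sketched in a few lines. Confidence is assumed here to be on a 0-100 scale, matching the threshold example above:

```javascript
// Sketch of the Dynamic Confirmation rule: confirmation is skipped
// when the confidence score meets or exceeds the threshold.
function needsConfirmation(confidence, threshold = 90) {
  return confidence < threshold;
}
```

For example, a score of 95 against the default threshold of 90 skips confirmation, while a score of 80 triggers it.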
Can a caller confirm or reject using the keypad?
Yes. DTMF Confirmation lets callers use key presses instead of voice during the confirmation step. For example, pressing 1 to confirm and 2 to reject. The key assignments are fully configurable.
What error counters does the system track?
OCP miniApps® track a comprehensive set of error counters:
- NoInputs / ContinuousNoInputs - no speech was detected.
- NoMatches / ContinuousNoMatches - speech was detected but could not be interpreted.
- SameStateEvents / ContinuousSameStateEvents - input was interpreted but didn't advance the dialog.
- LowConfRejections / ContinuousLowConfRejections - input was detected but rejected due to low confidence.
- RepeatRequests / ContinuousRepeatRequests - the caller asked for the prompt to be replayed.
- AgentRequests / ContinuousAgentRequests - the caller asked for a human agent.
- Disconfirmations - the caller rejected a confirmation.
- GlobalErrors / ContinuousErrors - aggregate counters across all error types.
- WrongInputs - input failed validation.
All thresholds are configurable in the Properties tab.
What happens when error thresholds are exceeded?
When an error threshold is exceeded, the miniApp exits with a FailExitReason value and the flow transfers to whatever is configured in the error handling step (e.g. transfer to agent, hang up, play an error message and retry from another step).
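A flow could branch on the returned FailExitReason to handle different failures differently. This is a hypothetical sketch of flow-side logic; the two named values appear elsewhere on this page, but the route names are illustrative:

```javascript
// Hypothetical routing on a miniApp's FailExitReason value.
function routeOnFailure(failExitReason) {
  switch (failExitReason) {
    case "jsExecutionError":
      return "playErrorAndRetry";   // configuration bug: retry safely
    case "InfoAsked_NotAvailable":
      return "offerCallback";       // caller lacks the info right now
    default:
      return "transferToAgent";     // safest fallback for other errors
  }
}
```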
Agent Handling
What happens when a caller asks to speak to a human agent?
This behavior is fully configurable in the Agent Handling tab. You can configure the miniApp to:
-
Transfer immediately to an agent upon any agent request.
-
Try to continue the conversation first (playing an "I may be able to help" prompt) and only transfer if the caller insists or exceeds the configured threshold.
-
Play customized agent request reaction prompts before handling the request.
Are there separate thresholds for different agent request scenarios?
Yes. The system tracks:
-
MaxAgentRequests- global agent request threshold. -
MaxContinuousAgentRequests- consecutive agent request threshold. -
MaxAmbiguousAgentRequests- agent request threshold during an intent disambiguation step. -
MaxSuggestionAgentRequests- agent request threshold during a suggestion or intent confirmation step.
If multiple thresholds are reached simultaneously, the one with the highest priority takes effect (Ambiguous > Suggestion > General).
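The priority rule (Ambiguous > Suggestion > General) can be sketched as a simple ordered lookup. The input shape here is illustrative, not an OCP structure:

```javascript
// Sketch of the stated priority rule: when several agent-request
// thresholds are exceeded at once, the highest-priority one wins.
function resolveAgentThreshold(exceeded) {
  const priority = ["ambiguous", "suggestion", "general"];
  return priority.find((name) => exceeded.includes(name)) || null;
}
```

For example, if both the general and the ambiguous thresholds are exceeded, the ambiguous one takes effect.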
Hold Functionality
Can a caller ask for more time to find information?
Yes. The Hold functionality allows callers to say something like "wait a moment" or "hold on" while they look for information. The system enters hold mode for a configurable duration, can play hold music, and offers to extend the wait if the caller asks for more time; the hold can also be interrupted at any time with a configured DTMF key. This functionality can be enabled or disabled per miniApp invocation, and all prompts are customizable.
Is there a limit to how many times a caller can request a hold?
Yes. The maximum number of hold requests is capped at 5 (hardcoded). If this is exceeded, the miniApp exits with MaxHoldRequests as the FailExitReason.
What happens if a caller says they don't have the information available?
If the Hold functionality is disabled, the miniApp exits with the FailExitReason value InfoAsked_NotAvailable. If the Hold functionality is enabled, the miniApp offers the caller additional time to retrieve the information before deciding to exit.
Responses & Troubleshooting
What does FailExitReason tell me?
FailExitReason is the value returned when a miniApp ends without successfully capturing the target data. It tells you exactly why the miniApp exited, enabling you to handle different failure scenarios differently in your flow. Common values include:
| FailExitReason | Meaning |
|---|---|
| | Caller didn't speak after repeated prompts |
| | Caller's speech could not be interpreted |
| | Caller repeatedly asked for a human agent |
| | Caller rejected the captured data too many times |
| | Overall error threshold exceeded |
| | Caller pressed the configured escape key |
| InfoAsked_NotAvailable | Caller indicated they don't have the requested information |
| | Caller asked to start the dialog over |
| jsExecutionError | A JavaScript user function threw an error |
| CriticalError | A system-level error occurred |
What should I do if I see a jsExecutionError?
This means one of your JavaScript user functions or validation functions threw a runtime error. Check your User Functions tab for syntax errors, missing variables, or logic issues. Test the function logic independently before deploying.
What should I do if I see a getDataModelError?
This is an internal configuration error. Please contact Omilia Customer Support and provide the session logs and the miniApp configuration so the team can investigate.
What should I do if a CriticalError occurs frequently?
These are infrastructure-level errors. Contact Omilia Customer Support immediately and provide details about the frequency, times, and affected miniApps. These are not expected in normal operation and require platform-level investigation.
Further Reading
For more detailed documentation on each topic, see the following pages: