AI Assistant Configuration Guide
Table of Contents
- 1.1 Assistant Details
- 1.1.1 Language Selection
- 1.1.2 Voice Selection
- 1.2 Model Selection
- 1.3 Tools Integration
- 1.4 Choose Pathways
Configuring Your Assistant
Once you've created a new AI Assistant, the next step is configuring its core settings to personalize its voice, behavior, and functionality. This section walks you through each configuration option so you can build an assistant tailored to your needs.
[Configuration Interface Screenshot - Replace with actual image]
Assistant Details
Language Selection
Select the primary language your assistant will speak and understand. This setting ensures the assistant communicates in the correct language with your callers.
- Choose from the available languages in the dropdown menu.
- If you plan to support multiple languages, this can be managed later in Advanced Settings → Multi-Lingual Capabilities.
Example: Select "English (US)" if your audience is primarily English-speaking customers in the U.S.
Voice Selection
Choose the voice style that your assistant will use to interact with callers. The voice affects tone, clarity, and overall user experience.
- Available voice options vary depending on the selected language.
- Preview the voice before selecting to ensure it aligns with your brand's personality (e.g., friendly, formal, conversational).
Recommendation: Opt for a friendly, clear voice for customer support bots, or a professional tone for business calls.
Model Selection
The model selection determines the underlying AI model that powers your assistant's speech generation and understanding. You can choose based on performance needs and budget.
Voicing Options
Voicing AI offers three LLM variants (VoiceLLM-Lite, VoiceLLM-Base, and VoiceLLM-Large), each designed for a different level of conversational complexity, response quality, and task performance. Use the quick guide below to choose the right model for your use case:
VoiceLLM-Lite (3B Parameters)
This model is optimized for lightweight tasks and basic conversations. It delivers high-quality voice output while being resource-efficient. Ideal for internal tools, short interactions, or low-complexity call flows.
- Conversation Quality: High
- Best For: Lightweight customer touchpoints, basic FAQs, or background processes
VoiceLLM-Base (8B Parameters)
A balanced choice offering very high-quality speech and improved understanding. It performs well in real-time scenarios and supports dynamic task execution with fast responses.
- Conversation Quality: Very High
- Best For: Mid-level customer support, appointment reminders, or moderate-level workflows requiring speed and accuracy
- Note: Medium-to-high accuracy for agentic capabilities; accuracy degrades as the number of tool integrations grows
VoiceLLM-Large (70B Parameters)
Our most advanced model, designed for natural, expressive conversations in high-stakes environments. It excels at understanding nuanced inputs, delivering human-like responses, and performing accurate function calls in complex conversations.
- Conversation Quality: Excellent
- Best For: Enterprise-grade support, emotional conversations, escalations, and any interaction where quality and accuracy are paramount
- Note: High accuracy for agentic capabilities
Gemini Options
If using the Gemini voice engine, select from:
- Gemini Basic: [Configuration details to be added]
- Gemini Premium: [Configuration details to be added]
Tools Integration
Under "Tools Integration," you can specify external tools or services your assistant will connect with to enhance functionality. These might include:
- CRM and contact-center systems (e.g., Genesys) and telephony platforms
- Knowledge bases
- Custom APIs
Purpose: Tools allow your assistant to pull real-time data or trigger actions (e.g., fetching account details, creating a ticket). Simply select the tools from the dropdown to link them with your assistant.
Example Use Case: Enable your assistant to check order status by connecting it with your order management system.
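To make the "Custom APIs" option concrete, the sketch below shows the kind of order-status lookup a custom tool endpoint might wrap. The function name, data store, and fields are hypothetical, not part of the Voicing AI platform:

```python
# Hypothetical order-status lookup that a custom tool/webhook could wrap.
# The data store and field names are illustrative only.
ORDERS = {
    "A1001": {"status": "shipped", "eta": "2 days"},
    "A1002": {"status": "processing", "eta": "5 days"},
}

def check_order_status(order_id: str) -> str:
    """Return a caller-friendly sentence the assistant can speak."""
    order = ORDERS.get(order_id)
    if order is None:
        return "I couldn't find that order. Could you repeat the order number?"
    return f"Your order is {order['status']} and should arrive in {order['eta']}."

print(check_order_status("A1001"))
```

The key design point is that the tool returns a sentence the assistant can speak verbatim, rather than raw data the model would have to rephrase.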
Choose Pathways
The "Choose Pathways" option lets you define a pre-built conversation pathway your assistant will follow. A pathway acts like a predefined script or flow to handle common tasks.
- Selecting a pathway will automatically populate the assistant's prompt, first sentence, knowledge base, and model settings to fit that workflow.
- Important: Choosing a pathway overwrites any previously configured prompt and model settings.
Use Case Example: If you're setting up a payment collection bot, selecting the "Payment Pathway" will configure the assistant with relevant prompts and workflows for payment-related interactions.
Advanced Settings
Interruption Settings
- Start Time: Configure when interruptions are allowed
- End Time: Set the cutoff time for interruptions
Expressiveness Adjustment
Adjust how expressive and animated your assistant's voice should be during conversations.
Voice Speed Control
Control the speaking pace of your assistant to match your audience's preferences.
Background Sound Selection
Choose appropriate background sounds for your assistant's environment.
Filler Injection
Configure natural speech fillers (like "um," "uh") to make conversations sound more human-like.
Back-channeling
Enable responsive listening cues that show the assistant is actively engaged.
Enable Multi-Lingual Capabilities
Configure support for multiple languages within a single assistant.
Call Transfer Logic
The Call Transfer Logic settings let you decide when and how your AI Assistant should transfer a call to a human agent or another phone number. This is useful when the assistant needs help from a person or has to escalate the call.
Trigger-Based Transfer
This option tells the assistant to transfer the call based on specific triggers or keywords. For example, if the caller says "I want to speak to an agent," the assistant will detect this phrase and automatically transfer the call.
[Trigger-Based Transfer Interface Screenshot - Replace with actual image]
Transfer Message Configuration: Below the Call Transfer Logic, there's a text box where you can enter a message the assistant will say before transferring the call.
Transfer Number Input: Under Transfer Number, you can enter the phone number where the call will be transferred. The number should include the country code (example: +1 for USA, +91 for India).
Example Scenario
Here's how it works together:
- You select Trigger-Based Transfer
- You write the message: "Let me connect you to our support team."
- You enter the transfer number: +919876543210
When a caller says "I need to talk to a person," the assistant will reply, "Let me connect you to our support team," and then automatically transfer the call to +919876543210.
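The trigger-matching behavior described above can be sketched in a few lines. This is an illustrative model of the logic, not platform code; the trigger phrases, message, and number come from the example scenario:

```python
# Illustrative trigger matching for trigger-based transfer (not platform code).
TRANSFER_TRIGGERS = ["speak to an agent", "talk to a person", "human agent"]
TRANSFER_MESSAGE = "Let me connect you to our support team."
TRANSFER_NUMBER = "+919876543210"

def handle_utterance(utterance: str):
    """Return (reply, transfer_number) if a trigger fires, else (None, None)."""
    text = utterance.lower()
    if any(trigger in text for trigger in TRANSFER_TRIGGERS):
        return TRANSFER_MESSAGE, TRANSFER_NUMBER
    return None, None

print(handle_utterance("I need to talk to a person"))
```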
Emotion-Based Configuration
Emotion-Based transfer lets your AI Assistant hand a call to a human agent based on the emotions it detects from the caller.
When you choose Emotion-Based from the dropdown, you can create rules (called conditions) that check for emotions like awkwardness, anxiety, annoyance, or disapproval. If the caller's emotions meet the conditions you set, the call will be transferred automatically.
[Emotion-Based Transfer Interface Screenshot - Replace with actual image]
How to Set Conditions
Each condition includes:
- An emotion you want to monitor (for example: Awkwardness, Anxiety)
- A comparison (like greater than, less than)
- A threshold value (starting from 0)
This tells the assistant what emotion to watch for, and when to trigger the transfer.
Adding Multiple Conditions
You can add more conditions by clicking the + Condition button. Each additional condition can check for a different emotion. You can link conditions with AND or OR:
AND Logic
All conditions must be true for the transfer to happen.
Example with AND: If Awkwardness > 0 AND Adoration > 0, the call will only transfer if the caller feels both awkward and adoring at the same time.
OR Logic
Only one condition needs to be true for the transfer.
Example with OR: If Awkwardness > 0 OR Anxiety > 0, the call will transfer if either awkwardness or anxiety is detected.
Transfer Number Input: As with trigger-based transfer, enter the destination number under Transfer Number, including the country code (example: +1 for USA, +91 for India).
Pronunciation Guide
Adding Custom Pronunciations
Configure how your assistant pronounces specific words, names, or technical terms.
Managing Pronunciations
Organize and maintain your custom pronunciation dictionary.
Prompt Configuration
After configuring the core details of your AI Assistant (such as its name, voice, language, and model), the next step is to define the assistant's Prompt—this is where you give your AI agent clear instructions on how it should behave during a call.
[Prompt Configuration Interface Screenshot - Replace with actual image]
What Is a Prompt?
A Prompt is the brain behind your AI assistant. It defines how the assistant should talk, what it should say, how it should respond to callers, and what kind of tone it should use. Writing a good prompt is crucial to making your assistant sound intelligent, helpful, and aligned with your business needs.
This is where you tell the assistant what its job is. You can instruct it to ask certain questions, respond to specific inputs, take actions based on user responses, and speak in a specific tone or manner.
How to Add a Prompt
- Navigate to the Prompt tab at the top of the assistant configuration screen.
- In the large text box labeled Prompt, type out your full instructions. This is where you define what the assistant should do on a call.
You can write in natural language. For example:
"You are a polite and helpful assistant for an insurance company. Greet the caller warmly, ask them for their policy number, and offer help with renewals or claims. If they ask about premium amounts, guide them to visit our website. If they say 'renew', ask for their birth date and policy number."
- If your assistant uses any dynamic information (like the caller's name or account number), you can use variables by clicking on the add variables link. These variables pull data from your contact list or input values.
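As an illustration of how variables might be filled from contact data, the sketch below uses Python's `string.Template`. The `$caller_name`/`$policy_number` placeholders and this syntax are assumptions; check the add variables link in the platform for the actual format:

```python
# Illustration of filling prompt variables from contact data.
# The placeholder names and $-syntax are assumptions, not the platform's format.
from string import Template

prompt_template = Template(
    "You are a polite insurance assistant. Greet $caller_name warmly "
    "and confirm policy number $policy_number before offering help."
)

contact = {"caller_name": "Priya", "policy_number": "PN-20431"}
rendered = prompt_template.substitute(contact)
print(rendered)
```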
Prompt Assistance Tools
To help you write high-quality prompts, Voicing AI offers several built-in tools:
Ask Copilot
If you're not sure how to write a prompt or where to begin, the Ask Copilot feature is designed to guide you. This tool uses AI to help generate a base prompt for your assistant based on a simple description of your use case.
How to use Ask Copilot:
- Click on the Ask Copilot button available in the Prompt section
- A small input field will appear where you can briefly describe what you want the assistant to do
- For example, you can type: "I want the assistant to greet customers, collect their account number, and help them check their loan status."
- Once submitted, Copilot will analyze your request and auto-generate a suggested prompt in the textbox
The generated prompt includes structured instructions for your assistant, covering tone, actions, and conversation flow. The output is fully editable—you can review the generated text, modify it to suit your business context, and add any custom logic or personalization variables as needed.
This feature is especially helpful for users who are new to prompt writing or want to accelerate the assistant creation process without starting from scratch.
Use a Template
Coming Soon! Browse or select from predefined templates for common use cases like collections, appointment booking, product inquiries, etc. These give you a great starting point.
Prompting Guidelines
Coming Soon! Click this button to read best practices for prompt writing. It explains tone, format, and structure so your assistant performs well.
These tools are especially useful for users who are new to writing prompts or want help shaping the conversation flow.
Test Agent
Before launching your AI assistant in a live outbound campaign, it's essential to test how it performs in a real interaction. Voicing AI offers two powerful testing modes—Desktop and Mobile—that allow you to validate your assistant's prompt, voice behavior, and response logic.
This ensures your assistant speaks naturally, handles inputs as expected, and delivers a smooth conversational experience.
Overview of Testing Options
Once you've written and saved your prompt, you'll find the "Test your assistant" panel on the right-hand side of the screen. This panel gives you access to two testing modes:
- Desktop Mode: Simulate the conversation in-browser
- Mobile Mode: Receive a real call on your phone from the AI agent
Both methods let you test your assistant's performance using the current configuration and prompt in real time.
Desktop Testing Mode
Desktop mode is a browser-based simulation of the call. This method does not involve a real phone line but mimics the conversational experience of a live call.
[Desktop Testing Interface Screenshot - Replace with actual image]
In this mode, you'll hear how the assistant greets the user, handles scripted steps, and responds to different user inputs—right from your browser.
When to use Desktop Mode: Use Desktop Mode for quick reviews, fine-tuning dialogue, verifying the structure of your prompt, and checking how it handles various paths in the flow without initiating a real call.
Note: Desktop testing may not capture the exact latency, audio quality, or real-world pauses of a phone call.
Mobile Testing Mode
Mobile mode allows you to experience the assistant's behavior exactly as your customers will—through a real phone call. This is the most accurate way to validate tone, timing, clarity, and edge-case handling.
[Mobile Testing Interface Screenshot - Replace with actual image]
To test your assistant via Mobile:
- Click the Mobile tab in the test panel
- Enter your phone number in the provided field. Make sure to include the correct country code (e.g., +91 for India, +1 for the United States)
- Click "Call me" to initiate the test
You'll receive a call from the AI assistant using the prompt and configuration you've saved.
Entering Your Number for Call Testing
When entering your number in the Mobile tab, ensure you use the correct format and that the number is reachable. This helps avoid test failures or delayed calls.
The platform supports international testing, so whether you're in the U.S., India, or elsewhere, you can simulate a real call experience from your target region.
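As a sanity check before clicking "Call me", a number can be validated against the international E.164 shape (a leading + followed by the country code and subscriber number). The regex below is a minimal illustrative sketch, not the platform's actual validation:

```python
# Minimal E.164-style sanity check for test-call numbers (illustrative only).
import re

# '+' followed by 7-15 digits, with no leading zero after the '+'.
E164 = re.compile(r"^\+[1-9]\d{6,14}$")

def is_valid_number(number: str) -> bool:
    return bool(E164.fullmatch(number))

print(is_valid_number("+919876543210"))  # valid: country code included
print(is_valid_number("9876543210"))     # invalid: missing '+' and country code
```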
Verifying Call Behavior
During the test call, pay attention to the following:
- Greeting: Did the assistant say the opening line as intended?
- Tone and voice: Does it sound natural and aligned with your brand?
- Understanding: Does it respond appropriately when you speak?
- Flow navigation: Does it follow the correct steps based on your input?
- Fallback handling: How does it react when it doesn't understand or when you say something unexpected?
- Closing: Does it end the call politely or transition properly to a human (if configured)?
You can repeat the test as many times as needed by editing your prompt and clicking "Call me" again.
Getting Started
Ready to configure your AI assistant? Follow these steps:
- Start with Assistant Details - Choose your language and voice
- Select Your Model - Pick the right LLM for your use case
- Configure Tools - Connect external systems if needed
- Write Your Prompt - Define your assistant's behavior
- Test Thoroughly - Use both Desktop and Mobile testing
- Refine and Launch - Make adjustments based on testing results
For additional support or advanced configuration options, please refer to our comprehensive documentation or contact our support team.
Last updated: [04/12/2025]
Version: 1.0