SpeechTEK 2011 - Tuesday, August 9, 2011
SUNRISE DISCUSSIONS
SD201 – Best Practices for Voice Biometrics
8:00 a.m - 8:45 a.m
Valene Skerpac, Director - iBICS (iBiometrics, Inc.)

This interactive discussion focuses on today’s voice biometrics use cases. The group explores the benefits and best practices of various usage scenarios. This includes frequently asked questions surrounding established deployment and usage of voice biometrics within an organization.

SD202 – Opportunities for Speech in the Developing World
8:00 a.m - 8:45 a.m
Deborah Dahl, Principal - Conversational Technologies

Mobile phones are far more pervasive in the developing world than traditional computers, making voice-based services a natural choice for applications in these countries. However, creation of locally relevant voice applications is impeded in many places by a lack of knowledge and technology. Life-critical information and services are in limited supply, especially for those who need help the most. What should be done to bring voice technology to developing nations and assist them in using the technology wisely?

SD203 – What Speech Designers Can Teach Multimodal Designers
8:00 a.m - 8:45 a.m
Phillip Hunter, Head of User Experience - Amazon Alexa Skills Kit

The creation of multimodal user interfaces, blending GUI and VUI, presents unique challenges. Graphical and voice interaction designers have a wide experience in crafting user interfaces in their respective areas, but relatively little work has been done combining the two. What guidelines and heuristics from VUI design carry over to designing multimodal user interfaces? And what VUI guidelines don’t transfer well to multimodal user interfaces? What GUI and multimodal user interface (MUI) guidelines should voice designers be aware of? Join this discussion about new frontiers in the design of devices and applications.

SD204 – Setting Positive Expectations for Speech Self-Service
8:00 a.m - 8:45 a.m
Ahmed Bouzid, Co-founder & President - The Ubiquitous Voice Society

When you explain to people that you design user interfaces for voice applications on phones, they usually respond with, “Oh, I hate them” and list a series of complaints. How would you respond to these comments? What can be done to help people have a more positive expectation for speech self-service?

TUESDAY KEYNOTE PANEL
Mobility — A Game-Changer for Speech?
9:00 a.m - 10:00 a.m
MODERATOR: Daniel Hong, Research Director & VP - Forrester Research
Bruce Pollock, Vice President, Strategic Growth and Planning - West Interactive
Mike Phillips, Chief Technology Officer - Vlingo
Mazin Gilbert, AVP of Technical Research - AT&T
Vlad Sejnoha, Chief Technology Officer - Nuance

New mobile devices are dramatically changing how customers interact with businesses. This panel of industry experts describes what new applications will be supported on mobile devices, discusses how speech technologies will be used by these applications, and explains how voice user interfaces will be integrated with graphical user interfaces. Will users embrace voice as they have embraced keypads on mobile devices? Can speech recognition and natural language processing reliably handle the speech users direct at their mobile devices? Will speech-enabled mobile applications replace IVR applications? Learn the answers to these questions and more during this keynote panel.

Break in the Exhibit Hall
10:00 a.m - 10:45 a.m
TRACK A: BUSINESS STRATEGIES
A201 – The Role of Speech in Smartphones
10:45 a.m - 11:30 a.m
MODERATOR: Jason P. Hersh, Customer Experience Strategist - RightNow Technologies

The Mobile Phone as Personal Assistant and the Role of Speech Technology
William Meisel, President - TMA Associates

A smartphone is a personal device that can always be with someone. Smartphones and related mobile devices are evolving into a form of personal assistant. It will be natural to converse with this assistant by voice, particularly considering the limitations of typing on the small devices. This talk notes the key features of such a voice-enabled assistant, the hurdles in achieving the vision fully, and possible approaches to overcoming those hurdles.

Speech in a World Full of Smartphones
David Attwater, Senior Scientist - Enterprise Integration Group

In a world rapidly filling up with smartphones that have information-rich user interfaces, what role does speech play? In the future, will access to customer service be mediated through mobile apps, and will this make traditional speech and IVR redundant? Will speech simply be a keyboard replacement, or should we expect entirely new paradigms? This talk explores these issues.

A202 – Integrating Social Media with Customer Service
11:45 a.m - 12:30 p.m
MODERATOR: Michael Smith, Self Service Solution Opportunity Architect, Professional Services - Avaya
Integrating Social Media With the Contact Center Experience

Social media has become a critical channel for organizations. However, organizations need strategies and technologies to effectively scale and integrate social media for a holistic approach to customer service. Organizations can now have meaningful interactions through social media channels in the same agent desktop where agents interact with callers. This presentation provides insight into combining two distinct functions to bring social monitoring and social interaction into the overall contact center and customer service strategies.

Taking Your Social Media Strategy to the Next Level

The potential for enabling immediate customer engagement is attracting companies to social media sites, but most organizations fail to harness the full value of these connections in the contact center. This talk shows how to apply business rules to social interactions to enhance agent efficiency, integrate social media into existing processes, and develop key metrics. It also discusses how to cut through the social noise, identify which posts are most critical to your company, and apply best practices to align social media tools with the contact center.

Attendee Lunch
12:30 p.m - 1:45 p.m
A203 – Natural Language: False Hopes or True Benefits?
1:45 p.m - 2:45 p.m
MODERATOR: Daniel Hong, Research Director & VP - Forrester Research
Greg Bielby - VoltDelta
Andy Middleton, Technical Architect - Performance Technology Partners (PTP)

The term natural language has been used to market various speech technologies that have not always delivered the benefits promised. Panelists in this session discuss what natural language really means, the best uses of the technology, and the potential business benefits as well as issues that may prevent an organization from achieving these benefits. This session can help you decide if natural language is right for your situation.

A204 – Upgrading Your IVR
3:00 p.m - 3:45 p.m
MODERATOR: Mark W Stallings, Managing Partner, VUI Design Practice - Forty 7 Ronin Inc.
Like for Like: Stop Moving to the Past
Michael Smith, Self Service Solution Opportunity Architect, Professional Services - Avaya

When upgrading IVR systems, many organizations request “like for like” replacement of the functionality they first implemented many years ago without considering whether it is still appropriate or effective for today’s call center.  This talk makes the case for spending time before a replacement project to validate the functionality, technology, and customer experience to ensure the long-term success of the project.

What You Thought You Knew About Your IVR but Didn't
Greg Johnston, Senior IT Manager - IVR, Information Technologies - DISH Network

Replacing an IVR can seem trivial until you're immersed in the complexities of the project. The migration of an IVR to a new platform introduces challenges that your organization may not have faced in some time. Hear about a large IVR migration currently underway and some of the measures taken to ensure its success.

Break in the Exhibit Hall
3:45 p.m - 4:30 p.m
A205 – What’s Next for Speech? Analysts’ Perspectives
4:30 p.m - 5:30 p.m
MODERATOR: William Meisel, President - TMA Associates
Mapping Customer Preferences and Frustrations to Speech Self-Service Applications
Daniel Hong, Research Director & VP - Forrester Research

Customer frustrations are increasing in today’s challenging business climate. Under heavy pressure to improve customer satisfaction, enterprises need to rely on a combination of the right technologies, people, and processes to enhance the customer experience and increase the bottom line. Therefore, enterprises need to develop a clearer line of sight into the customer and their needs when it comes to communication. This talk reports key findings from Ovum’s latest survey of 4,000 consumers on customer service preferences for client engagement.

Speech + Smartphones + The Cloud: The Virtual Personal Assistant Genome
Dan Miller, Lead Analyst-Founder - Opus Research, Inc.

Given advancements in speech recognition, transcription, translation, text-to-speech synthesis, and application processing power, the speech-based virtual personal assistant is closer than ever. Yet a number of hurdles remain: solution providers must overcome gaps in specific areas, including speaker identification, noise cancellation, speech extraction, language recognition, translation, and, ultimately, understanding. This talk explores these gaps and presents an exposition of the technologies that bridge them.

Networking Reception
5:30 p.m - 7:30 p.m

During the reception you can visit the consultants’ lounge for one-on-one discussions over drinks. 

TRACK B: VOICE INTERACTION DESIGN
B201 – AVIxD PANEL—Speech in a Multichannel World
10:45 a.m - 11:30 a.m
MODERATOR: Peter B Krogh, Robot Experience Designer - Jibo

Today’s customers have multiple paths for interacting with organizations. Speech-enabled technologies can offer advantages in some circumstances, but how do you determine how to best implement speech among the many available channels? This panel presents the results of a workshop conducted by the Association for Voice Interaction Design (AVIxD) on the effective use of speech as one of many possible communication channels.

B202 – Design Challenges
11:45 a.m - 12:30 p.m
MODERATOR: Catherine Zhu, Principal Consultant - SpeechUsability
Who Knows What Your Callers Know?
Judi Halperin, Principal Consultant and Team Lead, Global Speech Engineering - Avaya Inc.

Most IVR systems require callers to provide certain pieces of information to authenticate before completing their tasks. When the information requested is not readily available or familiar to the caller, the result is decreased caller satisfaction and unnecessary transfers. Some systems include “I don’t know” options to allow for such cases, but is this necessary, or are callers just trying to circumvent the system? This session explores ways to optimize caller satisfaction and retention using strategies from real-world examples.

Accommodating Myriad User Populations Within Singular Speech Applications
Elizabeth A. Strand, Director of UX Strategy - Microsoft Tellme

Targeting designs to the needs, expectations, and contexts of specific user groups is key to maximizing performance and user satisfaction. However, business realities often require us to accommodate several disparate user groups within a single speech application. This talk addresses such design and performance challenges, and offers practical advice based on Tellme deployments spanning several verticals and application types.

Attendee Lunch
12:30 p.m - 1:45 p.m
B203 – What VUI Standards Are Possible?
1:45 p.m - 2:45 p.m
MODERATOR: Susan L. Hura, Principal - SpeechUsability
Tara Kelly, President & CEO - SPLICE Software Inc.
Weiye Ma, Senior Speech Scientist - Telvent Worldwide
Jenni McKienzie, Voice Interaction Designer - SpeechUsability
Helen VanScoy, Director, User Interface Design - Performance Technology Partners

In many domains, designers rely on standards when creating user interfaces for new applications. In the field of voice user interface design, there are no widely accepted standards, although they have been discussed for years. Join us for a lively debate among experts as we discuss the potential benefits and disadvantages of creating voice interaction design standards, and whether such standards are even possible.

B204 – VUI Design Tools
3:00 p.m - 3:45 p.m
MODERATOR: Dave Pelland, Global Director, IVR Practice - Genesys
Object-Oriented VUI Design
Mark W Stallings, Managing Partner, VUI Design Practice - Forty 7 Ronin Inc.

Traditional methods for VUI design produce cumbersome documents that confuse the client and obscure incremental changes from the developer. By applying proven concepts from object-oriented design methodologies and off-the-shelf software, VUI designs can become comprehensible to a wider audience. Because object-oriented designs incorporate principles that developers are already familiar with, the gap between design and development can be narrowed, leading to reduced confusion, an improved ability to deliver, and an overall improvement in the quality of the project.

Smart Design Tools and Rapid Application Deployment
Ed Elrod, Sr Consultant - Enterprise Integration Group

Hear how smart design tools helped Blue Cross & Blue Shield of Rhode Island speed up a complete overhaul of its customer service application from requirements to production. Smart design tools and a smart design accelerated reviews, prompt recording, usability testing, coding, unit testing, and change management.

Break in the Exhibit Hall
3:45 p.m - 4:30 p.m
B205 – Design Skills
4:30 p.m - 5:30 p.m
MODERATOR: Judi Halperin, Principal Consultant and Team Lead, Global Speech Engineering - Avaya Inc.
Sketching and Voice
Mary Constance Parks, Principal Experience Designer, Automation and Control Solutions - Honeywell

Sketching is increasingly being used in the interaction design community (and elsewhere) for generating and analyzing ideas, storyboarding, and creating low-fidelity prototypes. But sketching is not usually a technique that comes to mind when designing for voice. This talk shows how sketching can uncover potential problems early in the design and can help the application better fit people and the contexts they’re in. Simple techniques will be demonstrated for sketching and modeling voice interactions.

Beyond Mode-Based Design Skills. Or, When Was the Last Time You Met a Good Mouse Designer?
Phillip Hunter, Head of User Experience - Amazon Alexa Skills Kit

The use of speech in multimodal applications forces us to ask how to blend the design disciplines of those who’ve done little work outside speech and IVR and those who have worked exclusively in GUI design. This talk focuses on the need for that blend and encourages cross-mode skill training for designers and managers. The overarching goal is to enable designers to create high-performance multimodal applications that are optimized for business success and positive user experiences.

Networking Reception
5:30 p.m - 7:30 p.m

During the reception you can visit the consultants’ lounge for one-on-one discussions over drinks. 

TRACK C: CUSTOMER EXPERIENCE
C201 – Measuring User Experience
10:45 a.m - 11:30 a.m
MODERATOR: Lori Schmidt, Business Analyst - Pitney Bowes
Measuring and Benchmarking the Caller Experience Using Analytics
Joe Alwan, AVOKE Analytics - Raytheon BBN Technologies

The Caller Experience Index quantifies the subjective caller experience by automatically measuring the frequency of dissatisfying events such as dropped calls, multiple retries, transfers, and verbalized frustration. Drawing on benchmark data Raytheon BBN compiled for several of its Fortune 100 clients, Alwan shows how to generate actionable insights for improving the caller experience.

Optimizing Customer Experiences
Jim Jenkins, Founder/CEO - IQ Services

Contact center and IVR self-service applications play significant roles in customer experience optimization. Optimizing customer experiences increases customer loyalty, cost savings, and revenue. Deploying a quality management, measurement, and improvement process around new and existing customer-facing apps is an ideal way to meet strategic objectives by improving internal metrics and, most importantly, increasing customer satisfaction.

C202 – IVR + Mobile = Better Customer Care
11:45 a.m - 12:30 p.m
MODERATOR: Craig DiAngelo, Vice President, Enterprise and Operator Services - VoltDelta
Mobility Care: A New World of Possibilities
Graham Allen, Senior Director for Market and Portfolio Software Strategy - Convergys

A cloud-based, on-demand platform with an open API lets an ecosystem of developers create new applications that overlay the current contact center infrastructure, bypassing the IVR experience while connecting callers to the appropriately skilled agent. Using a multichannel, on-demand platform with robust APIs, a mobility ecosystem of partners, developers, and clients is creating and delivering new mobile and client care applications, such as voice-enabling the smartphone application or offering immediate callback from agents with the right context and skills.

From IVR to Mobile: Strategies for Moving Customers Faster
Laura Bramschreiber, Sr. Director Creative Business Solutions, West Interactive - West Corporation

How can you educate your customers about your new mobile care channel, drive faster adoption, and create a better customer experience? Learn about strategies for using your IVR to actually drive mobile adoption; how to tie your voice channel to your mobile channel to create a seamless interaction across channels; and options for expanding and personalizing your self-service care options just by knowing you’ve got a mobile caller on the line.

Attendee Lunch
12:30 p.m - 1:45 p.m
C203 – Call Center Knowledge Management and Automated Help
1:45 p.m - 2:45 p.m
MODERATOR: Joe Alwan, AVOKE Analytics - Raytheon BBN Technologies
Using Knowledge as a Key Differentiator in Your Voice Solution
Karen Torf, Product Manager - Voice - RightNow Technologies

Delivering exceptional customer experiences requires up-to-date, contextual, and timely information that is consistent across all customer interaction channels. This information can be used to produce dynamic, personalized, and memorable phone experiences. Integration between the IVR and a knowledge foundation makes it possible to identify, segment, and route each caller appropriately based on contextual information about the caller’s specific need. This information is also available to the human agent answering the call.

Automated and High Touch: Implementing Intelligent Customer Intimacy
Mike Monegan, Vice President Product Management - Cyara

“I don’t want to talk to some computer system” was a common reaction to older IVRs. Today, automation is changing the game, making IVR core to providing the high touch customers demand. See real-world industry examples of how speech self-service can actually drive high touch customer intimacy. Learn the 10 fundamental tenets and the technical infrastructure needed for a solution that uses existing enterprise data. Learn to exceed customer expectations with intelligent customer intimacy.

C204 – Enhance Smartphone GUIs With Speech
3:00 p.m - 3:45 p.m
MODERATOR: Elaine Cascio, Vice President - Vanguard Communications Corp
Mobile Multimodal Automation: A Real-Life Case Study
Ahmed Bouzid, Co-founder & President - The Ubiquitous Voice Society

By using multiple channels of communication, interaction becomes less taxing: audio instructions can be spoken to free up the visual channel, and pictures can be used when a visual is most natural. Speech can be used to respond to queries and instructions while hands are free to carry out those instructions at the physical site where the user needs to troubleshoot. This presentation reviews the design process and the solution architecture, and demonstrates a working multimodal automated application that interacts with a live IVR.

Adding Some Speech Spice to Smartphone Apps
Larry Murphy, Senior Manager, RTM Consulting & Professional Services - Convergys

Where does speech technology fit into smartphone applications? This session presents the current state of smartphone app design, shows how speech technology will shape better smartphone app design, and looks at the future of adding the right amount of speech in the right places. Join us to see how you can take advantage of the GUI-speech gap in smartphone application design.

Break in the Exhibit Hall
3:45 p.m - 4:30 p.m
C205 – Speech System Architectures and Platforms
4:30 p.m - 5:30 p.m
MODERATOR: Max Ball, Director, Solutions Management - Genesys Telecommunications
Patrick Nguyen, Chief Technology Officer - [24]7.ai
Mike Monegan, Vice President Product Management - Cyara

Learn how to architect and implement a global network of speech infrastructure and applications and how to assemble the most suitable components — hosted, on-premises, or partner-based — into global solutions that provide the functionality your company needs. Learn when you should consider hosted versus on-premises outbound platforms including Tellme OnDemand Outbound, XOI HVOD and VTOP, Voxeo Prophecy, Nuance Communications Outbound Notification Service, and Genesys SSG.

Networking Reception
5:30 p.m - 7:30 p.m

During the reception you can visit the consultants’ lounge for one-on-one discussions over drinks. 

TRACK D: TECHNOLOGY ADVANCES
D201 – Plug-and-Play Speech Application Architectures
10:45 a.m - 11:30 a.m
MODERATOR: Cliff Bell, Product Line Manager, Product Management - Genesys
Jason Unrein, Senior Product Marketing Manager - AT&T

Learn about a lightweight core engine that loads plug-ins on demand. The engine communicates through a simple, high-speed, in-process event interface with plug-ins, including ASR, NLP, TTS, dialogue, translation, search, gestures, video, and more. New plug-ins are easily added, allowing experimentation and rapid innovation. The system enables seamless integration of multiple input modalities such as speech, video, and gestures.
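
As a rough sketch of the plug-in and event-bus pattern described above (the class and event names below are invented for illustration and are not part of the engine presented in this session), a core engine that lazily loads plug-ins and dispatches in-process events might look like this in Python:

# Minimal illustrative sketch; hypothetical names, not the engine presented.
from collections import defaultdict

class CoreEngine:
    def __init__(self):
        self._factories = {}                    # plug-in name -> factory callable
        self._loaded = {}                       # plug-ins instantiated so far
        self._subscribers = defaultdict(list)   # event type -> handlers

    def register(self, name, factory):
        self._factories[name] = factory         # nothing is instantiated yet

    def load(self, name):
        if name not in self._loaded:            # load on demand, once
            plugin = self._factories[name]()
            plugin.attach(self)
            self._loaded[name] = plugin
        return self._loaded[name]

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)                    # simple synchronous dispatch

class EchoNLU:
    """Toy 'NLP' plug-in: turns a recognition result into an intent event."""
    def attach(self, engine):
        engine.subscribe("asr.result", lambda event: engine.publish(
            "nlu.intent", {"intent": "echo", "text": event["text"]}))

engine = CoreEngine()
engine.register("nlu", EchoNLU)
engine.load("nlu")
engine.subscribe("nlu.intent", print)
engine.publish("asr.result", {"text": "play some jazz"})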

D202 – Managing Phone Calls
11:45 a.m - 12:30 p.m
MODERATOR: Daniel C Burnett, President - StandardsPlay
R.J. Auburn, Chief Technology Officer - Voxeo, an Aspect Company
Paolo Baggia, Director of International Standards - Loquendo

Learn how to use the W3C’s Call Control eXtensible Markup Language (CCXML) to manage calls, including routing them based on data collected through a dialogue with the caller. CCXML can help establish multi-party conferences and add and remove participants, and it enables organizations to place one or more outbound calls. Organizations can use the markup language to create “follow me” and “find me” applications that reach the people you are trying to call by dialing their cell phone, home phone, and office phone in parallel. CCXML can also enable call center applications to intelligently gather information from the caller and then pass that information on to the call center agent, among other capabilities.
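
CCXML itself is XML markup, but the call-handling logic it expresses can be illustrated conceptually. The plain-Python sketch below shows only the parallel-dial idea behind a “find me” application; place_call() is a hypothetical stand-in for a platform’s outbound-call API, and a real CCXML application would express this with elements such as <createcall> and event handlers rather than threads:

from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import Optional

NUMBERS = {"cell": "+15551230001", "home": "+15551230002", "office": "+15551230003"}

def place_call(label, number):
    # Hypothetical stand-in for an outbound-call API; this stub always
    # reports "no answer" so the sketch stays self-contained.
    return False

def find_me(numbers) -> Optional[str]:
    # Dial every number in parallel and connect the first leg that is answered.
    # (A real implementation would also hang up the legs that were not answered.)
    with ThreadPoolExecutor(max_workers=len(numbers)) as pool:
        futures = {pool.submit(place_call, label, number): label
                   for label, number in numbers.items()}
        for future in as_completed(futures):
            if future.result():
                return futures[future]   # label of the first answered leg
    return None                          # nobody picked up

print(find_me(NUMBERS))                  # None with the stubbed place_call above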

Attendee Lunch
12:30 p.m - 1:45 p.m
D203 – Speaking With Your Home Entertainment Devices
1:45 p.m - 2:45 p.m
MODERATOR: Patrick Nguyen, Chief Technology Officer - [24]7.ai
Talking to Your TV: Tales From the Design of Xbox Kinect
Matt Klee, Interaction Designer - Microsoft Tellme

Kinect eliminates the need for a controller and instead relies on a combination of speech and body gestures to interact with the game. How can these two input modes be combined to enhance one another? How do you deal with the challenges of multiple speakers, background noise, and distance from the microphone? How do you use visual feedback to support error recovery? Gaming is all about having fun. How can we ensure that speech enhances this experience? Attend this session for the answers to these questions.

Model User Behavior for Controlling Home Entertainment
Michael Johnston, Lead Inventive Scientist - Interactions

As home entertainment systems start to offer hundreds of channels and movies on demand, subscribers can no longer practically search for content using program listings. We describe experiments using a voice-activated system that helps users find programs by genre, title, cast names, time/date, etc. Hierarchical statistical language models enable a combination of historical behavior with current popularity ratings to increase recognition accuracy. A demo of a working prototype will show the interaction between the user, the speech recognizer, and the video display.
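
As a toy illustration of the general idea of blending a viewer’s own history with current popularity when scoring candidate titles (a simple linear interpolation with made-up data, not the hierarchical models described in this talk):

from collections import Counter

def interpolated_score(title, history, popularity, lam=0.7):
    # P(title) = lam * P_history(title) + (1 - lam) * P_popularity(title)
    total_hist = sum(history.values())
    total_pop = sum(popularity.values())
    p_hist = history[title] / total_hist if total_hist else 0.0
    p_pop = popularity[title] / total_pop if total_pop else 0.0
    return lam * p_hist + (1 - lam) * p_pop

viewer_history = Counter({"the office": 12, "sportscenter": 3})
current_popularity = Counter({"game of thrones": 900, "the office": 150, "sportscenter": 80})
candidates = ["the office", "game of thrones", "sportscenter"]
ranked = sorted(candidates,
                key=lambda t: interpolated_score(t, viewer_history, current_popularity),
                reverse=True)
print(ranked)  # the viewer's history pulls "the office" ahead of the globally hotter title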

D204 – Automatically Generate Call Flows
3:00 p.m - 3:45 p.m
MODERATOR: Emmett Coin, Speech Scientist - ejTalk
Optimize the Obvious: Automatic Call Flow Generation
David Suendermann, Principal Speech Scientist - Synchronoss

In commercial spoken dialogue systems, call flows are traditionally built by call flow designers with a predefined business logic. This talk presents a method for automatically deriving a call flow, minimizing the average number of user turns given a business logic and a frequency distribution of call reasons. As an example, the method was applied to a call routing application whose manually built call flow is processing about 4 million calls per month and whose call reason distribution served to measure the impact of the automatic call flow generation.
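
As a toy illustration of the frequency-driven part of this idea, the sketch below uses a plain binary Huffman construction to place frequent call reasons closer to the root of a yes/no decision tree; it ignores the business-logic constraints the presented method must respect, and the call-reason counts are made up:

import heapq
import itertools

def expected_turns(call_reason_freqs):
    # Build a binary decision tree in which frequent call reasons sit closer to
    # the root, then return the average number of yes/no turns per call.
    total = sum(call_reason_freqs.values())
    tie = itertools.count()   # tie-breaker so heapq never compares the dicts
    heap = [(freq, next(tie), {reason: 0}) for reason, freq in call_reason_freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {reason: depth + 1 for reason, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    depths = heap[0][2]       # turns needed to reach each call reason
    return sum(call_reason_freqs[r] * d for r, d in depths.items()) / total

freqs = {"billing": 5000, "tech support": 3000, "upgrade": 1500, "cancel": 500}
print(expected_turns(freqs))  # 1.7 turns on average for this distribution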

Automatically Generating Call Flows
Patrick Nguyen, Chief Technology Officer - [24]7.ai

When the number of possible caller intents reaches a certain point in applications such as call steering, voice search, and FAQ, few enterprises can afford the required months of utterance analysis, grammar development, and tuning. This session presents an innovative solution to the problem of handling complex caller intents. A dialogue manager automatically generates the call flow from a model of possible intents. Examples discussed  include a large natural language application deployed in the travel industry.

Break in the Exhibit Hall
3:45 p.m - 4:30 p.m
D205 – Standard Languages for Implementing Voice Applications
4:30 p.m - 5:30 p.m
MODERATOR: Paolo Baggia, Director of International Standards - Loquendo
Daniel C Burnett, President - StandardsPlay

While VoiceXML 2.0 was the most significant speech standard in recent history, it is not the last. The convergence of the phone (as a voice device) and the internet (as a data medium) has driven standards to support both with easy-to-use designs. This talk presents the variety of speech and related standards that have been developed and are under development now, including both the W3C HTML Speech Incubator Group’s efforts and VoiceXML 3.0, explaining where they fit into the converged world we all now experience.

Networking Reception
5:30 p.m - 7:30 p.m

During the reception you can visit the consultants’ lounge for one-on-one discussions over drinks. 



