SpeechTEK 2015 - Monday, August 17, 2015
SD101 – How to Promote a Culture of Innovation in the Workplace
8:00 a.m - 8:45 a.m
Daniel O'Sullivan, CEO - Gyst

How do you get your team to want to create, invent, and innovate? In this session, we discuss proven strategies for promoting a culture of innovation at your organization. You learn which techniques work and which ones to avoid. We cover unique ways to encourage your employees to contribute to the creation process. You also learn about patents, copyrights, and trademarks and how to protect your company’s intellectual property in a rapidly changing technology environment.

SD102 – What Are Patents? And How to Defend Against Assertion
8:00 a.m - 8:45 a.m
Steven M. Hoffberg, Of Counsel - Tully Rinckey, PLLC

The patent field remains turbulent, especially with respect to patents that cover so-called “abstract ideas.” Other evolving issues include formalities, obviousness, postgrant administrative proceedings, and litigation. We begin with a primer on basic principles, new law, and strategies and conclude with a discussion of these evolving issues.

SD103 – Understanding Speech Recognition Accuracy
8:00 a.m - 8:45 a.m
Matt Warshal, Sales Engineer, Client Services - LumenVox LLC

When evaluating speech solutions, decision makers often ask, “What are the accuracy numbers?” By itself, “accuracy” is an overloaded term in the speech industry and often means different things to different people. This presentation defines a number of metrics to indicate the performance of a speech application and educates decision makers on how to make good solution decisions.
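As a concrete illustration of why "accuracy" is overloaded (this sketch is mine, not from the session), the figure most often quoted is word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the recognizer's hypothesis into the reference transcript, divided by the reference length.

```python
def wer(reference, hypothesis):
    """Word error rate via edit distance between word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = min substitutions + insertions + deletions to turn
    # the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution plus one deletion against a four-word reference:
print(wer("call customer service please", "call customer services"))  # 0.5
```

Note that WER says nothing about whether the *application* completed the caller's task, which is one reason a single "accuracy number" can mislead decision makers.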

SD104 – Documenting Natural Language Speech Solutions
8:00 a.m - 8:45 a.m
David Attwater, Senior Scientist - Enterprise Integration Group

Deployment of “say anything” or natural language technologies for call centers is now mainstream, but the standard methodologies for documenting the VUI design may not be appropriate. Find out how our peers document the VUI in a way that is complete and can be used as a functional design but is also understandable to non-IT stakeholders.

Smartcuts: How to Accelerate Success That Matters in Business & Life
9:00 a.m - 10:00 a.m
Shane Snow, Best-selling Author, Smartcuts: How Hackers, Innovators, and Icons Accelerate Success; Chief Creative Officer - Contently

How did Jimmy Fallon get his big break on SNL as an unknown kid from Upstate New York? How did change makers turn Finland’s school system around? How did YouTube tycoon Michelle Phan, the fastest-rising U.S. presidents, and the second African American shoe designer in history all climb to the top and achieve incredible things faster than expected? In this high-energy session, award-winning journalist and entrepreneur Shane Snow brings research and stories to life, showing attendees a framework for finding smarter paths to big goals and breakthroughs and for beating the plateau and generating momentum in work and life.

A101 – PANEL: Virtual Agents for All?
10:15 a.m - 11:00 a.m
MODERATOR: Bruce Pollock, Vice President, Strategic Growth and Planning - West Interactive
Samrat Baul, Senior Director, Application Design - [24]7
Jason Mars, Assistant Professor, EECS Dept. - University of Michigan-Ann Arbor
Wayne Scholar, CTO and co-founder - GetAbby

Virtual agents (VAs) seem to be everywhere, but should every mobile app be built with a VA interface? What does an organization gain by using a VA, and what are the costs of doing so? Do VAs make sense for all organizations and all customers? Will each organization have its own VA, or will there be one unified agent interface for all of your apps?

A102 – Unifying the Customer Experience With Multimodal Virtual Assistants
11:15 a.m - 12:00 p.m
Michael Johnston, Lead Inventive Scientist - Interactions

Speech-enabled virtual assistants provide businesses with a compelling solution to the challenge of managing constellations of multiple interconnected services. Mobile virtual agents enable a single point of entry for a broad range of options without overcomplicating the interface, and multimodal interaction includes presentation of visual displays that are cumbersome to convey with voice alone. This presentation explains the technical underpinnings of creating effective multimodal virtual assistants and illustrates their use with demonstrations from multiple enterprise verticals.

Lunch Break
12:00 p.m - 1:15 p.m
A103 – Innovative Strategies for Behavior Change and User Adoption
1:15 p.m - 2:00 p.m
Eduardo Olvera, Sr. Manager & Global Emerging Technology Lead, UI Design, Professional Services - Nuance; AVIxD

Ideally, self-service allows customers to complete goals efficiently while saving companies money, but, in reality, users often avoid self-service and find ways to “game the system” to their advantage. How can we design self-service to drive user adoption and have a positive long-term effect? This talk explores onboarding and gamification techniques applied to mobile virtual assistants that have produced remarkable benefits for both users and companies.

A104 – How Does John Doe Engage With a Mobile Personal Assistant?
2:15 p.m - 3:00 p.m
Rebecca Jonsson, Chief Researcher - Artificial Solutions

Siri and Google Now have brought speech recognition and natural language into the daily lives of millions of people around the world. This session reviews what we have learned about how average users interact with mobile personal assistants, including which apps they control by voice, the format of their utterances, how they react to misunderstandings, and differences in how users speak by country. The presentation also reviews real-world examples and lessons learned.

A105 – PANEL: Using Virtual Agents for Training
3:15 p.m - 4:00 p.m
MODERATOR: James A. Larson, Vice President - Larson Technical Services
Lewis Johnson, President and CEO - Alelo Inc.
Roberto Sicconi, CTO - TeleLingo; CSO - Infermedica
David Topolewski, CEO and Mobile Learning Evangelist - Qooco

See live demonstrations of voice-enabled virtual role-play training applications that are natural and realistic, immersing users in both visual and verbal learning activities. See how artificial agents help users learn at their own time and pace, be it from home, office, or while they travel. Automated role-playing applications are convenient, consistent, and produce measurable results. This technology is used for a wide range of business communication skills, including sales and customer service.

A106 – Solution Sessions
4:15 p.m - 5:00 p.m

See 40-minute presentations about some of the latest technological developments in speech technology from companies at the conference. Solution Sessions are open to all full conference attendees and Discovery Pass holders.

Let’s Get Personal! Delivering Great IVR and Call Routing Customer Experiences
Stuart Selby, Head of Contact Delivery and IVR - Barclays Retail Bank

Presenting a case study on optimizing and personalizing the IVR and call routing experience, Stuart Selby will outline a methodology for flexibility, rapid innovation, and continuous measurement and monitoring of the customer experience with Cyara. He will also detail an approach to optimizing speech steering and using personalized data to meet customer needs.

B101 – PANEL: First Principles in Voice Interaction Design
10:15 a.m - 11:00 a.m
MODERATOR: Carrie Claiborn, Senior VUI Architect - Interactions
Micha Baum, Senior Principal Speech Scientist - Nuance Communications
Kristie Goss-Flenord, Consultant, Human Factors - Convergys
Randell Neuman, Staff UX Designer - Genesys

Join us for a discussion of the baseline principles that underlie voice interaction design. Learn how to focus on spoken language, design with our ears, and avoid common mistakes made by prioritizing the visual representations of speech designs. Panelists describe how to make sense of application performance data to understand which key performance indicators correlate to better experiences for your customers and explain the vital role of design in any project with a speech interface.

B102 – PANEL: Designing for Your Customers
11:15 a.m - 12:00 p.m
MODERATOR: Kristie Goss-Flenord, Consultant, Human Factors - Convergys
Jim Milroy, Human Factors Solutions Consultant - West Interactive Services
Nandini Stocker, Sr. Product Design Manager - Flipkart

Your organization’s customers are unique, but often speech technology design fails to take their uniqueness into account. Panelists in this session share methods for capitalizing on the specific characteristics of your customers. Learn a proven technique for how to craft a persona to effectively engage with customers. Discover how to design a solution based on attributes and behavior of customers that predicts their intent and offers tailor-made call paths with different levels of self-service.

Lunch Break
12:00 p.m - 1:15 p.m
B103 – PANEL: Using SLMs to Create Natural Language Experiences
1:15 p.m - 2:00 p.m
MODERATOR: Leslie Carroll Walker, Senior Manager, UI Design, Global Professional Services, Customer Interaction Technology - Convergys
Jenny Burr, Sr. Manager, Analytic Consulting - Convergys
Alex Christodoulou, User Interface Designer - Nuance Communications Inc.
Jenni McKienzie, Voice Interaction Designer - SpeechUsability

More organizations are considering natural language implementations supported by statistical language models (SLMs), but often lack a solid understanding of what goes into creating this kind of solution. This session explains the basics of how SLMs work and helps you gauge the level of effort in a natural language project to determine if natural language will meet your needs. Learn how natural language affects the entire user interface, and discuss best practices in measuring performance, utterance requirements, and tagging.
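To make the "basics of how SLMs work" concrete, here is a minimal sketch of my own (not material from the panel): an SLM is, at heart, a probability model over word sequences estimated from transcribed caller utterances. This toy version counts bigrams from a tiny hypothetical corpus and smooths the counts so unseen word pairs keep a nonzero probability; the utterance-collection and tagging effort the panel discusses is largely about building the real corpus behind such counts.

```python
from collections import Counter

# Hypothetical toy corpus of caller utterances; a production SLM is
# trained on many thousands of transcribed, tagged calls.
corpus = [
    "i want to pay my bill",
    "pay my bill please",
    "i want to check my balance",
]

def train_bigram_lm(sentences):
    """Count unigrams and bigrams, with sentence-boundary markers."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        words = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(words[:-1])          # history positions only
        bigrams.update(zip(words[:-1], words[1:]))
    return unigrams, bigrams

def prob(unigrams, bigrams, prev, word, vocab_size):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
```

With these counts, "pay my" scores higher than an unseen pair like "pay balance", which is exactly how an SLM steers recognition toward utterances resembling its training data.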

B104 – Name Recognition: Challenges and Solutions
2:15 p.m - 3:00 p.m
Dmitry Sityaev, Principal Speech Scientist, Engineering R&D - Genesys

Recognition of names presents many challenges for speech recognition. This presentation addresses the issues encountered when building solutions for capturing first and last names. From sourcing data to fine-tuning the grammars, we look at how applying various recognition techniques helps achieve the best results in terms of recognition accuracy and coverage. We also discuss back-off strategies that can be used to further increase the success rate of the name collection task.
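One common shape such a back-off strategy can take (this is an illustrative sketch of mine, not the speaker's design, and the mode names and thresholds are hypothetical) is a tiered dialogue: attempt the hardest, most convenient capture first, then fall back to progressively more constrained prompts when recognition confidence is low.

```python
def collect_name(recognize):
    """Tiered name capture; `recognize(mode)` stands in for an ASR call
    returning a (result, confidence) pair for that grammar/prompt mode."""
    for mode, threshold in [
        ("full_name_grammar", 0.60),   # "John Smith" in one utterance
        ("first_then_last", 0.50),     # two separate, simpler grammars
        ("spelling", 0.40),            # letter-by-letter as a last resort
    ]:
        result, confidence = recognize(mode)
        if confidence >= threshold:
            return result, mode
    return None, "agent_transfer"      # final back-off: a live agent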

B105 – PANEL: The How and Why of Speech Application Tuning
3:15 p.m - 4:00 p.m
MODERATOR: Amy Goodwin, Senior Speech Technologist - West Corporation
Jenny Burr, Sr. Manager, Analytic Consulting - Convergys
Jeffrey (Jeff) D Hopper, Vice President, Client Services - LumenVox, LLC
Jesse Montgomery, Sr. Speech Technologist - Theatro Labs
Julie Underdahl, Senior User Interface Designer - Genesys

Speech application tuning is a vital but often misunderstood part of building and maintaining successful speech applications. Panelists in this session help you understand how tuning improves recognition accuracy and customer experience and provide hands-on tips for how and when to tune to achieve maximum benefits. Learn how to incorporate analytics data to further increase your return on investment from tuning and how a smaller, focused tuning process can allow you to rapidly realize measurable improvements.

C101 – Humans as Speech Recognizers
10:15 a.m - 11:00 a.m
David L Thomson, VP Speech Technology - CaptionCall

Organizations have begun to offer customer service using a combination of human and machine speech recognition. Humans provide unprecedented accuracy and enable formerly impossible features, but are more costly. This presentation describes strategies for how humans can efficiently work together with machines and gives examples of new services the union now makes possible. The session reviews case studies of deployed services across multiple companies and vendors; covers issues such as accuracy, automation rates, and response latency; and gives a glimpse into where this approach may head in the future.

C102 – Infotainment Through Conversational Apps: Possibilities and Challenges
11:15 a.m - 12:00 p.m
William Meisel, President - TMA Associates

In today’s conversational infotainment apps, users make choices by voice or text about what happens next, but content is rather limited. Offering more varied content allows for deeper consumer engagement, but presents a challenge of providing tools to allow nontechnical individuals to develop large amounts of additional content and, at the same time, specify the required technical information to create robust natural language dialogues. The talk discusses both the possibilities and challenges, including the form that tools for enabling conversational apps could take.

Lunch Break
12:00 p.m - 1:15 p.m
C103 – Mitigating Driver Distraction: Combining Speech With a Haptic Controller
1:15 p.m - 2:00 p.m
Tom Schalk, Vice President of Voice Technology - Sirius XM

With a greater quantity of infotainment flowing to the vehicle, the solution to reducing driver distraction likely resides in combining a variety of driver interfaces to fit specific tasks. This presentation describes a multimodal user interface that leverages speech and a haptic controller that is better suited for tasks requiring simple user interaction. We demonstrate an interface in which visual information is presented in the form of iconic tiles that enable content discovery and selection with minimal visual dependency.

C104 – Sirius: An Open End-to-End Voice and Vision Personal Assistant (Like Siri)
2:15 p.m - 3:00 p.m
Jason Mars, Assistant Professor, EECS Dept. - University of Michigan-Ann Arbor

As user demand scales for intelligent personal assistants (IPAs) such as Apple’s Siri, Google’s Google Now, and Microsoft’s Cortana, we are approaching the computational limits of current datacenter architectures. In this presentation, we present the design of Sirius, an open end-to-end IPA web-service application that accepts queries in the form of voice and images, and responds with natural language. We describe how we used Sirius to evaluate future accelerator-based server architectures.

C105 – Building Skills for a Conversational Robot
3:15 p.m - 4:00 p.m
Roberto Pieraccini, Director of Engineering - Google, Switzerland
Jonathan Ross, Director, SDK development - Jibo, Inc.

This presentation describes the issues related to building conversational applications for a social robot. Issues range from the classical speech recognition problems, including background noise and far-field microphones, to the integration with other interaction channels and modalities. Finally, we give a preview of a software development kit that will be available to general developers to build skills for a commercial robot.

C106 – Solution Sessions
4:15 p.m - 5:00 p.m

See 40-minute presentations about some of the latest technological developments in speech technology from companies at the conference. Solution Sessions are open to all full conference attendees and Discovery Pass holders.

Effortless Automation: How a Fortune 100 Telecom Looked to Traditional Channels as a Breakthrough Customer Experience Differentiator
Mark Leonard, EVP Strategic Accounts - Interactions LLC

Your next amazing customer service representative may be an automated virtual assistant. Attend this session to learn about a Fortune 100 telecom that recently unveiled a natural language, phone-based virtual assistant to replace a traditional IVR system. Through the power of a conversational virtual assistant, the company has seen a reduction in agent call volumes while improving the customer experience with self-service technology that consumers adopt quickly and effectively.

D101 – Cloud Translation
10:15 a.m - 11:00 a.m
Chester Anderson, VP Business Development - Translate Your World

Over 34 million Americans speak Spanish as their primary language; would your bottom line be improved by supporting Spanish (and other languages) in your call centers and IVR systems? This talk presents the level of voice translation that can be attained using Windows Speech Recognition, Google Speech, Dragon 13, and Nuance Mobile Speech Recognition coupled with an automatic translation system. This presentation describes the architecture that merges cloud-based ASR and machine translation and discusses the many possible uses of a cloud translation product.

D102 – Use Analytic Data to Detect Dialogue Hotspots
11:15 a.m - 12:00 p.m
Dominique Boucher, CTO - Nu Echo Inc.

Ever tried finding where users struggle in your speech services? Those dialogue hotspots are a main source of end-user dissatisfaction, and root-cause analysis and remediation are expensive, time-consuming processes. This talk describes a service-doctoring platform that detects those hotspots in a semi-automated way and suggests improvements. The platform employs techniques adapted from the speech analytics industry, such as age, gender, and emotion detection.
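A simple first pass at this idea (my own illustrative sketch, not the platform described in the talk) is to scan call logs for dialogue states whose failure rate, counting no-match and no-input events, is well above the service-wide average:

```python
from collections import defaultdict

def find_hotspots(events, min_visits=50, ratio=2.0):
    """`events` is an iterable of (state, outcome) pairs from call logs.
    Flags states with enough traffic whose failure rate exceeds
    `ratio` times the overall failure rate."""
    visits = defaultdict(int)
    failures = defaultdict(int)
    for state, outcome in events:
        visits[state] += 1
        if outcome in ("no_match", "no_input"):
            failures[state] += 1
    total_rate = sum(failures.values()) / max(sum(visits.values()), 1)
    return sorted(
        state
        for state in visits
        if visits[state] >= min_visits
        and failures[state] / visits[state] > ratio * total_rate
    )
```

A real platform would go further, clustering the failing utterances themselves to suggest grammar or prompt fixes, but even this crude frequency test surfaces the states worth a designer's attention first.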

Lunch Break
12:00 p.m - 1:15 p.m
D103 – Recent Deep Neural Net Techniques for Speech Recognition
1:15 p.m - 2:00 p.m
David L Thomson, VP Speech Technology - CaptionCall

Every major speech recognition vendor has been quietly replacing Gaussian mixture models and other ASR components with deep neural nets (DNNs). This presentation explains how DNNs work and why they are so much better than previous methods. We review the latest techniques from several research centers and how much accuracy is gained with each method. The presentation includes live demos and forecasts where the technology is likely to head in the next 3 years.
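In outline (a toy sketch of my own, with stand-in dimensions, not material from the talk), the DNN in such a hybrid system replaces the GMM as the acoustic scorer: each feature frame is pushed through a few nonlinear layers to produce a probability distribution over acoustic states.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: 39 MFCC-style features in, two hidden layers, and
# posteriors over 10 stand-in acoustic states (real systems use
# thousands of context-dependent senones and far wider layers).
sizes = [39, 64, 64, 10]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(frame):
    """Map one acoustic feature frame to a distribution over states."""
    h = frame
    for w, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ w + b)
    return softmax(h @ weights[-1] + biases[-1])

posteriors = forward(rng.standard_normal(39))
```

The accuracy gains the talk surveys come from training these weights on huge transcribed corpora; the decoder then consumes the per-frame posteriors much as it previously consumed GMM likelihoods.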

D104 – Driving Engagement With Your App via Voice
2:15 p.m - 3:00 p.m
Sunil Vemuri, Product Manager - Google

With voice interfaces, users can now perform web searches and app content searches and take actions by telling their phones what to do. Learn how you can leverage Google's advancements in language recognition and semantic understanding to enable voice interactions with your app. This session will cover System Voice Actions, which are specified by the Android system, and Custom Voice Actions, for which you define the language.

D105 – Remote-Free TV and Hands-Free In-Car Voice Interactions
3:15 p.m - 4:00 p.m
Q. Peter Li, President - Li Creative Technologies, Inc.

Today, a remote is necessary for controlling a TV from a distance. Similarly, a push-to-talk button is necessary for in-car voice recognition. Neither of these solutions is natural or intuitive. We discuss a new DSP solution that cancels TV/radio sound and background noise and sends clean speech to speech recognizers. Users speak directly to the TV or radio without muting the sound. This technology enables new interface designs and applications, as well as new markets.

D106 – Solution Sessions
4:15 p.m - 5:00 p.m

See 40-minute presentations about some of the latest technological developments in speech technology from companies at the conference. Solution Sessions are open to all full conference attendees and Discovery Pass holders.

GRAND OPENING RECEPTION in the Customer Solutions Expo
5:00 p.m - 7:00 p.m

Join your peers on Monday after the sessions for the opening of the Customer Solutions Expo - featuring the CRM Evolution, SpeechTEK, and Customer Service Experience Showcases. Visit with conference sponsors, exhibitors, speakers, and other attendees while enjoying light hors d'oeuvres and drinks.
