April 24-26, 2017 | Washington Marriott Wardman Park

Wednesday, April 26, 2017

SUNRISE DISCUSSIONS

SD301 - Grammar Tuning for Newbies

8:00 a.m. - 8:45 a.m.
Dr. Daniel C. Burnett, President, StandardsPlay

Just beginning with voice user interfaces and/or VoiceXML? Frustrated with advanced tutorials when your grammar experience consists of the words “yes” and “no”? This discussion goes old-school, attempting to hit the basics of voice grammar development that everyone else seems to know already.

SD302 - A Hike on the Slopes of Uncanny Valley

8:00 a.m. - 8:45 a.m.
David Attwater, Senior Scientist, Enterprise Integration Group

The IVR community has created seemingly natural interactions over a couple of turns—in short, faking it. Recent assistants, such as Siri, Alexa, and Google Voice, have tended toward one-shot question/answer approaches with little dialogue context and multimodal presentation of information—in short, a smart aleck with short-term memory problems in a box. This discussion session looks at these trends and asks what people really want from spoken agents and where spoken dialogue systems might be headed.

SD303 - Patent & IP Update

8:00 a.m. - 8:45 a.m.
Steven M. Hoffberg, Partner, Ostrolenk Faber LLP
Dr. Jordan Cohen, Technologist and CEO, Spelamode Consulting; Chief Scientist, Speech Morphing

The field of intellectual property (IP) is rapidly evolving, with changes both in the law and in speech technology. This interactive discussion provides an overview of the intellectual property issues of automated assistants, protections available for these technologies, IP risks in commercialization, and current best practices for protecting speech technologies and defending against assertion, as well as strategies for the future.

KEYNOTE

KEYNOTE PANEL - The Future of Conversational Robots

9:00 a.m. - 10:00 a.m.
Moderator: Leor Grebler, Co-founder & CEO, Unified Computer Intelligence Corporation
Sunil Vemuri, Product Manager, Google
Roberto Pieraccini, Director, Advanced Conversational Technologies, Jibo, Inc.

Amazon Echo, Google Home, and the Jibo social robot promise to let users perform many useful tasks, including controlling internet-connected devices such as home appliances and industrial robots; educating and training users with self-improvement activities; entertaining users with passive and active games and activities; performing transactions such as paying bills and shopping for goods and services; and solving problems such as diagnosing illnesses, debugging and repairing products, calculating taxes, mediating conflicts, and protecting and securing homes and businesses. This panel begins with short product demonstrations, followed by a discussion of questions such as these: What is a conversational robot, and how does it differ from other current interactive technologies? What capabilities do conversational robots have beyond searching the web, answering questions, and presenting information? How can negative perceptions of robots be replaced with positive ones? What technologies, tools, and standards are needed to enable widespread creation and distribution of content for conversational robots?

Break in the Customer Solutions Expo

10:00 a.m. - 10:45 a.m.

Track A - INNOVATIVE USES OF ASR

A301 - Speech Analysis Detects Early-Stage Diseases

10:45 a.m. - 11:30 a.m.
Jeff Adams, CEO, Cobalt Speech & Language

It is remarkably difficult to detect Alzheimer’s and other diseases early enough to do anything about them. Canary Speech and Cobalt Speech & Language have joined forces to develop speech recognition technologies for these applications. Learn how ASR is being developed to detect early signs of Alzheimer’s and other diseases, and about the unique business model the two companies have developed.

A302 - Speech Technology for Augmenting Language Learning Experiences

11:45 a.m. - 12:30 p.m.
Emily Soward, Speech Scientist, Rosetta Stone

Gaining oral language proficiency without an instructor can be difficult. We present some practical issues surrounding the creation of computer-assisted language learning software incorporating speech technology and describe how breaking down oral language instruction into machine-solvable problems allows speech interfaces to play the role of instructor. We also discuss how to provide computer-generated feedback for pronunciation training. A tight interplay between UI/UX design and core speech technology is key to creating immersive speech experiences for users.

Last Chance to visit the Customer Solutions Expo

12:30 p.m. - 1:00 p.m.

Track B - SELF-SERVICE TECHNOLOGIES

B301 - Blending Self-Service & Assisted Service

10:45 a.m. - 11:30 a.m.
Thomas Hebner, Sr. Director UI Design, Nuance Communications

When using an automated speech system, there is often a need for an “assist” from a human. This discussion identifies benefits of cooperation between virtual assistants and human agents to improve the customer experience and, ultimately, create a more informed self-service experience. We also explore the latest trends toward a blended approach and the latest systems designed to enable seamless interplay between virtual assistants and humans and discuss how human agents organically train automated machines.

B302 - PANEL: Adding Visuals to Voice

11:45 a.m. - 12:30 p.m.
Moderator: Crispin Reedy, Senior VUI Design Consultant, Versay
Thomas Wilson, Self Service Practice Manager, Arrow Systems
Chris du Toit, Chief Marketing Officer, Marketing, Jacada
Jo Roman, Patient Health Educator, Clinica Tepeyac

Traditional IVR systems limit users to speaking and listening. Enhancing voice-only communications with visual information, including menus, directories, photos, diagrams, fill-in forms, receipts, and tickets, adds new capabilities to self-help systems. Security may also be enhanced by combining voice speaker identification with face recognition. Developers who have built visual/voice systems relate their own experiences developing and using voice-with-visual systems and provide advice about adopting such a system in an organization.

Last Chance to visit the Customer Solutions Expo

12:30 p.m. - 1:00 p.m.

Track C - METRICS

C301 - PANEL: Explaining Tuning Data to Managers

10:45 a.m. - 11:30 a.m.
Moderator: Mark W Stallings, Managing Partner, VUI Design Practice, Forty 7 Ronin Inc.
David Claiborn, Director Voice Portal Technology, United Healthcare
Rich Garrett, Senior Solutions Consultant, [24]7
Becky Stallings, VUI Design Lead, Verizon Business

Tuning voice systems is necessary for maximum performance and user satisfaction. How is data used by experts to tune systems translated into meaningful data for business managers? What needs to be considered in a tuning effort?

C302 - Business Intelligence: The Most Meaningful Metrics

11:45 a.m. - 12:30 p.m.
Deborah Rapsinski, Chief Customer Experience Officer, Think Tank Partners

Contact centers of all sizes and across all verticals struggle to decide which metrics are the most meaningful for tracking trends and serving as indicators of customer satisfaction and omni-channel application performance. This presentation examines which metrics most clearly indicate the customer experience and the health and performance of applications, and how business leaders can incorporate these metrics into their long-term strategy.

Last Chance to visit the Customer Solutions Expo

12:30 p.m. - 1:00 p.m.

Track D - INFRASTRUCTURE

D301 - Now Trending: Voice Biometrics

10:45 a.m. - 11:30 a.m.
Advait Deshpande, Senior Product Director, Enterprise, Nuance Communications

An overview of voice biometrics, including how the technology uniquely balances security and convenience while bringing a new level of personalization to customer service. We compare the features and benefits of voice biometrics to other authentication technologies and explore use cases and real-world deployments at large financial institutions, telecom providers, government organizations, and more. We also discuss how consumer behavior and preferences affect adoption of voice biometrics.

D302 - Defining an Inter-IVA Communication Standard

11:45 a.m. - 12:30 p.m.
Brian Susko, Vice President, Software Engineering, True Image Interactive

There is a growing need for a standardized channel of communication between intelligent virtual assistants (IVAs). Without it, companies that implement IVAs isolate their implementations from other IVAs. We discuss the need for a gateway (or routing) IVA that can pass questions or commands to other IVAs, as well as the need for an IVA registry that can be searched for IVAs categorized by tags, such as the specialized domain of the questions they answer.
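As a hypothetical sketch of the registry idea described in this session (the class and method names are illustrative only and not part of any proposed standard), a tag-indexed IVA registry might look like this:

```python
# Hypothetical sketch of a tag-searchable IVA registry. The names and
# structure are illustrative, not part of any proposed standard.

class IVARegistry:
    def __init__(self):
        self._by_tag = {}  # tag -> set of registered IVA names

    def register(self, iva_name, tags):
        """Register an IVA under each of its descriptive tags."""
        for tag in tags:
            self._by_tag.setdefault(tag, set()).add(iva_name)

    def find(self, *tags):
        """Return the IVAs matching all of the given tags."""
        matches = [self._by_tag.get(t, set()) for t in tags]
        return set.intersection(*matches) if matches else set()

# A gateway (routing) IVA could use the registry to forward a question
# to a specialist IVA in the matching domain:
registry = IVARegistry()
registry.register("weather-iva", ["weather", "forecast"])
registry.register("banking-iva", ["banking", "payments"])
print(registry.find("weather"))  # {'weather-iva'}
```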

Last Chance to visit the Customer Solutions Expo

12:30 p.m. - 1:00 p.m.

SpeechTEK University - Wednesday, April 26, 2017

SpeechTEK University courses provide in-depth, three-hour seminars on compelling topics for speech technology and information technology professionals. Experienced speech technology practitioners teach each class in an intimate classroom setting to foster a structured learning experience. If you are considering deploying a speech application, looking to increase your knowledge base in one of the key areas below, or simply need a speech technology refresher, attend a SpeechTEK University course. SpeechTEK University seminars are separately priced or may be purchased as part of your conference registration.

STKU-4 - Using a Data-Driven Approach to Design, Build, & Tune Spoken Dialogue Systems

1:30 p.m. - 4:30 p.m.
David Attwater, Senior Scientist, Enterprise Integration Group

This workshop addresses the whole lifecycle of using data-driven approaches to design, train, and tune practical dialogue systems. The workshop focuses on natural language solutions in call center applications, but many of the techniques are equally applicable to building robust intelligent assistants. Topics covered in the workshop include using live Wizard-of-Oz techniques to test dialogue strategies and gather early customer language for semantic design; managing data collections; semantic annotation (including multi-dimensional semantics); training, testing, and tuning grammars; and data-driven approaches to optimizing dialogue and system performance.

STKU-5 - Deep Neural Networks in Speech Recognition

1:30 p.m. - 4:30 p.m.
David L Thomson, VP Speech Research, Interactions, LLC

Deep learning is setting new standards of accuracy for financial projections, image processing, advertising, translation, games, and virtually every field where we use massive databases to train systems for estimation, classification, and prediction. This tutorial reviews recent advances in machine learning with a focus on Deep Neural Nets (DNNs) for speech recognition and natural language processing. The session includes demonstrations and hands-on exercises. We recommend that participants bring a laptop. Attendees gain an understanding of DNN fundamentals, how they are used in acoustic and language modeling, and where technology appears to be headed.
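The kind of network this tutorial covers can be illustrated with a minimal forward pass: a toy sketch in plain Python with arbitrary illustrative weights, not tied to any toolkit used in the session.

```python
# Toy forward pass through a small fully connected network with a ReLU
# hidden layer and a softmax output, as used (at far larger scale) in
# DNN acoustic models. All weights here are arbitrary illustrative values.
import math

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    exps = [math.exp(x) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def dense(weights, bias, v):
    # weights: one row of input weights per output unit
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    # layers: list of (weights, bias); ReLU between layers, softmax at the end
    for i, (w, b) in enumerate(layers):
        x = dense(w, b, x)
        if i < len(layers) - 1:
            x = relu(x)
    return softmax(x)

# Two acoustic features in, three "phone class" probabilities out.
layers = [
    ([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]),           # hidden layer, 2 units
    ([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], [0, 0, 0]),  # output layer, 3 classes
]
probs = forward([1.0, 2.0], layers)
print(probs)  # a probability distribution over the three classes
```

Training such a network (backpropagation, minibatching, regularization) is what the hands-on portion of a session like this explores.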

STKU-6 - Developing Multimodal Applications for New Platforms

1:30 p.m. - 4:30 p.m.
Dr. Deborah Dahl, Principal, Conversational Technologies

Multimodal interfaces, combining speech, graphics, and sensor input, are becoming increasingly important for interaction with the rapidly expanding variety of nontraditional platforms, including mobile, wearables, robots, and devices in the Internet of Things. User interfaces on these platforms will need to be much more varied than traditional user interfaces. We demonstrate how to develop multimodal clients using standards such as WebRTC, WebAudio, and Web Sockets and the Open Web Platform, including open technologies such as HTML5, JavaScript, and CSS. We also discuss integration with cloud resources for technologies such as speech recognition and natural language understanding. Attendees should have access to a browser that supports the Open Web Platform standards, for example, the current versions of Chrome, Firefox, or Opera. Basic knowledge of HTML5 and JavaScript would be very helpful.

STKU-7 - Voice Experience Design for Alexa Skills

1:30 p.m. - 4:30 p.m.
David Bliss, Principal Design Technologist, Amazon
Phillip Hunter, Head of User Experience, Amazon Alexa Skills Kit

Join us to learn about creating within the Alexa ecosystem using the Alexa Skills Kit. We cover general capabilities and use real-world examples of skills to illustrate voice experience design best practices. Attendees try out prototyping techniques and work in groups to define and prototype a skill. Before coming, please sign up at developer.amazon.com, and be sure to bring your laptop!
