SpeechTEK University
SpeechTEK University courses provide in-depth, 3-hour seminars on compelling topics for speech technology and information technology professionals. Experienced speech technology practitioners teach each seminar in an intimate classroom setting to foster a structured learning experience. If you are considering deploying a speech application, looking to increase your knowledge base in one of the key areas below, or simply need a speech technology refresher, attend a SpeechTEK University course. These courses are separately priced or may be purchased as part of your conference registration.
SpeechTEK 2015 - Sunday, August 16, 2015
STKU-1 – Natural Language Processing
1:30 p.m. - 4:30 p.m.
Deborah Dahl, Principal - Conversational Technologies

Natural language processing (NLP) technology is becoming widely recognized as a key component of conversational systems. More speech and multimodal applications are using NLP. NLP resources are becoming increasingly available through web services. Online NLP services can supply natural language capabilities to a range of mashups for both speech and text-based applications. This session introduces participants to the possibilities for using NLP in their own applications. Following a brief overview, most of the session focuses on interacting with online NLP APIs that perform tasks such as classification, part-of-speech tagging, parsing, and named entity recognition. Attendees can try their hand at supplying their own training data for a simple statistical NLP classifier. For details, see conversational-technologies.com/SpeechTEKUniversity.html.
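
As a flavor of what a simple statistical NLP classifier involves, here is a minimal sketch of a multinomial naive Bayes intent classifier in TypeScript; the utterances and intent labels are invented for illustration and are not the course's exercise data.

```typescript
// A hand-rolled multinomial naive Bayes classifier: count words per label,
// then score new utterances with smoothed log probabilities.

type Example = { text: string; label: string };

const training: Example[] = [
  { text: "what is my account balance", label: "balance" },
  { text: "how much money do I have", label: "balance" },
  { text: "transfer fifty dollars to savings", label: "transfer" },
  { text: "move money between my accounts", label: "transfer" },
];

const tokenize = (s: string) => s.toLowerCase().split(/\s+/).filter(Boolean);

// Count word frequencies per label and label frequencies overall.
const wordCounts = new Map<string, Map<string, number>>();
const labelCounts = new Map<string, number>();
const vocab = new Set<string>();

for (const { text, label } of training) {
  labelCounts.set(label, (labelCounts.get(label) ?? 0) + 1);
  const counts = wordCounts.get(label) ?? new Map<string, number>();
  for (const w of tokenize(text)) {
    counts.set(w, (counts.get(w) ?? 0) + 1);
    vocab.add(w);
  }
  wordCounts.set(label, counts);
}

// Classify with log probabilities and add-one (Laplace) smoothing.
function classify(text: string): string {
  let best = { label: "", score: -Infinity };
  for (const [label, nDocs] of labelCounts) {
    const counts = wordCounts.get(label)!;
    let total = 0;
    for (const c of counts.values()) total += c;
    let score = Math.log(nDocs / training.length);
    for (const w of tokenize(text)) {
      score += Math.log(((counts.get(w) ?? 0) + 1) / (total + vocab.size));
    }
    if (score > best.score) best = { label, score };
  }
  return best.label;
}

console.log(classify("what's my balance")); // -> "balance"
```

With only four training utterances the probabilities are crude, but the same counting-and-smoothing structure underlies much larger statistical classifiers.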

STKU-2 – Introduction to Speech Technologies
1:30 p.m. - 4:30 p.m.
James A. Larson, Vice President - Larson Technical Services
Bruce Balentine, Chief Scientist Emeritus - Enterprise Integration Group
Crispin Reedy, Director, User Experience - Versay Solutions

Designed for attendees new to the speech technology arena, this tutorial provides an overview of the speech technologies, user interface design principles, and markets of this fast-evolving industry. It highlights the key speech technologies, including automatic speech recognition (ASR), text-to-speech synthesis, voice biometrics, and dialogue management. Learn how these technologies are used to develop interactive voice response systems; voice-enabled personal assistants; voice-enabled mobile applications; assistive applications; and voice applications for the car, TV, and other devices and appliances. Learn about the process of developing speech applications, including design principles, best practices, and the integration of speech with GUI and other UI modalities. Understand the key players in the speech technology ecosystem.
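
To make ASR and TTS concrete, here is a minimal browser sketch (not part of the course materials) that listens for one utterance and speaks it back, using the Web Speech API; browser support varies, and Chrome exposes the recognizer under a webkit prefix.

```typescript
// Pair the two core technologies: ASR (SpeechRecognition) and TTS
// (speechSynthesis), both from the browser's Web Speech API.

const Recognition =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognizer = new Recognition();
recognizer.lang = "en-US";

recognizer.onresult = (event: any) => {
  const transcript: string = event.results[0][0].transcript;
  console.log("Heard:", transcript);
  // Echo the recognized text back with speech synthesis.
  speechSynthesis.speak(new SpeechSynthesisUtterance(`You said: ${transcript}`));
};

recognizer.onerror = (event: any) => console.error("ASR error:", event.error);

recognizer.start(); // Prompts for microphone permission, then listens once.
```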

STKU-3 – Omnichannel Design: A Hands-On Workshop
1:30 p.m. - 4:30 p.m.
Samrat Baul, Senior Director, Application Design - [24]7

In this hands-on workshop, the presenter discusses experiences with channel-hopping issues based on live customer data. Attendees experiment with various design strategies for making cross-channel experiences more usable and seamless. In the first part of the session, attendees try live prototypes that illustrate omnichannel design strategies. These live experiences ground usability questions that are hard to judge in the abstract: Are proactive IVR prompts that reference online behavior creepy? How should online behavior influence the IVR experience? The latter half of the session is a guided forum based on the experiences gleaned from the prototypes. Attendees discuss their reactions and views on how best to design omnichannel solutions from a user-centered point of view.

SpeechTEK 2015 - Thursday, August 20, 2015
STKU-4 – Developing Multimodal Applications With the Open Web Platform
9:00 a.m. - 12:00 p.m.
Deborah Dahl, Principal - Conversational Technologies

Multimodal interfaces combining speech, graphics, and sensor input are becoming increasingly important for interaction with the rapidly expanding variety of nontraditional platforms, including mobile devices, the Internet of Things, wearables, and robots. While speech technologies on the Open Web Platform are still in their early stages, standards like WebRTC, WebAudio, and WebSockets are providing the audio handling support required by speech applications. This session demonstrates how to develop multimodal clients using the Open Web Platform and discusses integration with cloud resources for technologies such as speech recognition and natural language understanding. Attendees should have access to a browser that supports Open Web Platform standards, such as a current version of Chrome, Firefox, or Opera. We cover the following topics: motivation for multimodal applications (demos and sample applications); graphical multimodal tools (HTML5, JavaScript, and CSS); speech tools in the browser (audio standards and speech APIs); cloud processing (speech and natural language processing services); and building a multimodal application.
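
A minimal sketch of the kind of client-side audio pipeline involved, assuming a hypothetical cloud ASR endpoint at wss://example.com/speech and a made-up JSON result format:

```typescript
// Capture microphone audio with getUserMedia, chunk it with MediaRecorder,
// and stream it over a WebSocket to a cloud speech service.

async function streamMicrophoneToCloud(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const socket = new WebSocket("wss://example.com/speech"); // hypothetical endpoint

  socket.onopen = () => {
    // Emit a compressed audio chunk roughly every 250 ms.
    const recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });
    recorder.ondataavailable = (event) => {
      if (event.data.size > 0) socket.send(event.data);
    };
    recorder.start(250);
  };

  // Assume the service pushes back JSON transcription results.
  socket.onmessage = (event) => {
    console.log("Transcript:", JSON.parse(event.data).transcript);
  };
}

streamMicrophoneToCloud().catch(console.error);
```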

STKU-5 – Introduction to Application Tuning
9:00 a.m. - 12:00 p.m.
Judi Halperin, Principal Consultant and Team Lead, Global Speech Engineering - Avaya Inc.

Exploring the data your system generates can provide insight into the refinements needed to optimize application performance, thereby supercharging self-service to increase customer satisfaction and lower costs. This involves analyzing three key areas: recognizer performance, application performance, and the caller experience. The session begins with a high-level discussion of the application tuning process and its goals, then dives into the details. Tuning concepts are defined, and common application issues are examined using real-world data. The causes of these issues are discussed, and potential resolutions are then determined as a group.
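
As one illustration of the kind of analysis involved, the sketch below rolls raw recognition events up into per-dialogue-state metrics; the log record shape is an invented example, since every platform logs differently.

```typescript
// Summarize recognizer performance by dialogue state: attempts, match rate,
// and average confidence, which together flag the prompts worth tuning.

type RecognitionEvent = {
  state: string;                              // dialogue state / prompt name
  outcome: "match" | "nomatch" | "noinput";   // recognizer result
  confidence: number;                         // 0..1
};

function summarize(events: RecognitionEvent[]): void {
  const byState = new Map<string, { total: number; match: number; confSum: number }>();
  for (const e of events) {
    const s = byState.get(e.state) ?? { total: 0, match: 0, confSum: 0 };
    s.total += 1;
    if (e.outcome === "match") s.match += 1;
    s.confSum += e.confidence;
    byState.set(e.state, s);
  }
  for (const [state, s] of byState) {
    console.log(
      `${state}: ${s.total} attempts, ` +
      `${((100 * s.match) / s.total).toFixed(1)}% matched, ` +
      `avg confidence ${(s.confSum / s.total).toFixed(2)}`
    );
  }
}

summarize([
  { state: "GetAccountNumber", outcome: "match", confidence: 0.92 },
  { state: "GetAccountNumber", outcome: "nomatch", confidence: 0.31 },
]);
```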

STKU-6 – Best Practices for Natural Language Design
9:00 a.m. - 12:00 p.m.
David Attwater, Senior Scientist - Enterprise Integration Group

This in-depth tutorial focuses on best practices in natural language design for call center applications. After the course, attendees can expect a thorough understanding of the following core issues: What is natural language, and how does it differ from other spoken language technologies? What are the main business benefits of natural language, and when might it be better to use other approaches? How does natural language enable you to optimize and re-engineer the call center operating model? What are the best practices for prompting for natural language responses, including how to mix natural language with menus and touchtone? How do customers really speak and behave, and how does their language map to the tasks you may be trying to solve? What techniques can you use to manage ambiguity and low confidence to keep the conversation on track?
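
As a sketch of the last point, the snippet below routes a natural language result through illustrative confidence bands; the thresholds and actions are assumptions for demonstration, not a published best practice.

```typescript
// Confidence banding: accept high-confidence results silently, confirm
// mid-confidence results, and fall back to a directed menu otherwise.

type NLResult = { intent: string; confidence: number };

function nextAction(result: NLResult): string {
  if (result.confidence >= 0.8) {
    return `route:${result.intent}`;    // accept without confirmation
  } else if (result.confidence >= 0.45) {
    return `confirm:${result.intent}`;  // "Did you want ...?"
  } else {
    return "reprompt:menu";             // recover with a directed menu
  }
}

console.log(nextAction({ intent: "billing", confidence: 0.6 })); // -> "confirm:billing"
```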

STKU-7 – Using WebRTC
1:00 p.m. - 4:00 p.m.
Daniel C Burnett, President - StandardsPlay

WebRTC is the newest communications standard, bringing high-quality voice and video to web browsers, mobile devices, and beyond without the need for bloated plug-ins. Daniel Burnett, one of the authors of the standard and co-author of the leading WebRTC book, explains the concepts underlying WebRTC while demonstrating them in working, downloadable code. The workshop covers the basics of Voice over IP technology, how WebRTC extends that technology to web browsers and other compatible devices, and how WebRTC deals with the complexity of NAT traversal. It also shows how easy it is to use WebRTC's data channel, a fabulous new feature for low-latency messages such as web chats and peer-to-peer game status updates. Finally, for those who really want to deploy something quickly, the workshop wraps up with a sampling of the third-party tools available to developers and system integrators.
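
For a flavor of the data channel, here is a minimal sketch configuring one for low-latency messaging (unordered, no retransmits), with signaling, the offer/answer and ICE candidate exchange, omitted:

```typescript
// Create a peer connection and a data channel tuned for low latency:
// a stale game-state update is dropped rather than blocking newer ones.

const pc = new RTCPeerConnection();

const channel = pc.createDataChannel("game-state", {
  ordered: false,     // don't stall on out-of-order packets
  maxRetransmits: 0,  // never retransmit; newer updates supersede lost ones
});

channel.onopen = () => channel.send(JSON.stringify({ x: 10, y: 20 }));
channel.onmessage = (event) => console.log("Peer update:", event.data);
```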

STKU-8 – Advanced Dialogue Techniques for Intelligent Virtual Agents
1:00 p.m. - 4:00 p.m.
Emmett Coin, Speech Scientist - ejTalk

Intelligent virtual agents require advanced dialogue techniques. This workshop identifies and describes several interrelated dialogue techniques that enable virtual agents to act intelligently: dynamically adjusting the dialogue context based on recent user input; rephrasing the agent's responses so they don't sound repetitive; automatically generating conversational ellipsis (omitting wording redundant with previous parts of the conversation) in the agent's response for naturalness and efficiency; interpreting badly formed user utterances; and integrating information expressed by the user via multiple modalities. It presents examples illustrating each of these techniques using a simple graphic "blockly" notation. Emmett Coin also illustrates how to combine these techniques to form more powerful dialogue managers, using a layered approach to dialogue design. This layering strategy is essential for managing dialogue design and implementation complexity as agent interactions become more open. Bring your laptop for a hands-on experience. For instructions on downloading the browser-based, blockly-based editor, a dialogue engine server connection, and example code to be used in this course, visit ejtalk.com/v2/pres/STU2015.html.
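
As a plain-code illustration of one technique, rephrasing responses so they don't sound repetitive, the sketch below rotates through wording variants; the variant lists are invented, and this is not ejTalk's blockly notation.

```typescript
// Map each dialogue act to several wordings and rotate through them, so
// consecutive turns never reuse the same phrasing.

const variants: Record<string, string[]> = {
  ask_destination: [
    "Where would you like to go?",
    "What destination did you have in mind?",
    "Okay, and where to?",
  ],
};

const lastUsed = new Map<string, number>();

function phrase(act: string): string {
  const options = variants[act];
  // Advance past the variant used last time, wrapping around the list.
  const next = ((lastUsed.get(act) ?? -1) + 1) % options.length;
  lastUsed.set(act, next);
  return options[next];
}

console.log(phrase("ask_destination")); // first wording
console.log(phrase("ask_destination")); // a different wording
```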

STKU-9 – Writing Better Voice Prompts
1:00 p.m. - 4:00 p.m.
Kristie Goss-Flenord, Consultant, Human Factors - Convergys

Prompt writing is a critical skill for any voice user interface designer to master. This tutorial provides practical, in-depth instruction in prompt writing for anyone tasked with scripting prompts for voice-enabled systems. The session begins with a review of many of the best practices and guidelines behind writing prompts and helps you understand what’s truly important in writing successful voice prompts. We consider how individual word choice and word order can affect the success of an entire prompt, how prompts in sequence influence each other, and when to ignore your middle school grammar teacher to make your prompts sound like actual spoken language. Then get ready to put it all into practice: During the second part of the session, we break into small, interactive groups to tackle real-world prompt issues with lots of examples. We improve upon existing prompts, write some based on requirements, and look at pairs of prompts to analyze the differences and what makes one better than the other.



