SpeechTEK 2019 - Track A: Managers

Track A, aimed primarily at managers, covers conversational dialogue systems and personal assistants: how to design, build, and test them, and how to avoid the pitfalls and roadblocks of this new technology. It explains what you need to know to make informed decisions about building conversational agents.

 

Monday, Apr 29

Track A: Managers

 

A101. A Comprehensive Guide to Technologies for Conversational Systems

Monday, April 29: 10:30 a.m. - 11:15 a.m.

Choosing the right platform for a particular application is critical. We focus on the capabilities of current natural language understanding and dialog management technologies. We present evaluation criteria and compare and contrast a number of popular platforms. We consider technical and non-technical features such as cost, vendor commitment, support, and cloud vs. on-premise operation. We review the state of the art for these technologies and conclude with comments on advanced dialog features that are currently being researched.
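The kind of evaluation this session describes is often organized as a weighted-criteria scorecard. A minimal sketch follows; the criteria weights, platform names, and scores are hypothetical illustrations, not figures from the session:

```python
# Hypothetical weighted-criteria scorecard for comparing conversational platforms.
# Weights sum to 1.0; per-criterion scores run 1 (poor) to 5 (excellent).
CRITERIA = {"nlu_accuracy": 0.35, "dialog_tools": 0.25, "cost": 0.20, "support": 0.20}

PLATFORMS = {  # illustrative numbers only
    "Platform A": {"nlu_accuracy": 4, "dialog_tools": 3, "cost": 2, "support": 4},
    "Platform B": {"nlu_accuracy": 3, "dialog_tools": 4, "cost": 5, "support": 3},
}

def weighted_score(scores, weights):
    """Sum of per-criterion scores multiplied by their weights."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank platforms from highest to lowest weighted score.
ranking = sorted(PLATFORMS, key=lambda p: weighted_score(PLATFORMS[p], CRITERIA),
                 reverse=True)
```

Adjusting the weights to reflect your organization's priorities (for example, weighting on-premise support heavily) can change the ranking, which is why agreeing on the criteria first matters.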

Speakers:

, Principal, Conversational Technologies

, CEO, Nu Echo Inc.

 

A102. Building Customer Service Digital Assistants

Monday, April 29: 11:30 a.m. - 12:15 p.m.

An automated digital assistant—whether a text chatbot or a voice-interactive part of your telephone customer service—can help customers receive quick results and help agents focus on more complex tasks. Today’s natural language technology can make the experience fast and pleasant—when done properly. Like any evolving technology, it can also be done poorly. This talk discusses how to achieve effective solutions using automated natural language interaction.

Speaker:

, President, TMA Associates

 

Keynote Lunch

Monday, April 29: 12:15 p.m. - 1:15 p.m.

Check back for the latest details.

 

A103. Best Practices for Bootstrapping an NLU System With Generated Sentences

Monday, April 29: 1:15 p.m. - 2:00 p.m.

Good NLU accuracy requires a sizeable training corpus made up of sentences that represent expected responses from real users. How are new chatbots developed when little or no training data is available? We present best practices for generating an NLU training corpus that can train a fairly robust NLU system for a customer-facing chatbot, making it possible to quickly start collecting real sentences from real users.
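One common way to bootstrap such a corpus is template expansion: combining sentence templates with slot values to produce many plausible training utterances. A minimal sketch, assuming hypothetical templates and slot values (the session may cover other generation techniques):

```python
import itertools

# Hypothetical templates and slot values for one banking-chatbot intent.
TEMPLATES = [
    "I want to {verb} my {account}",
    "please {verb} my {account}",
    "can you {verb} my {account} account",
]
SLOTS = {
    "verb": ["check", "close", "open"],
    "account": ["savings", "checking"],
}

def generate_corpus(templates, slots):
    """Expand each template with every combination of slot values."""
    corpus = []
    keys = sorted(slots)
    for template in templates:
        for values in itertools.product(*(slots[k] for k in keys)):
            corpus.append(template.format(**dict(zip(keys, values))))
    return corpus

corpus = generate_corpus(TEMPLATES, SLOTS)  # 3 templates x 3 verbs x 2 accounts
```

Generated sentences are more uniform than real user language, so the usual practice is to replace or augment them with collected utterances as soon as real traffic arrives.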

Speaker:

, CEO, Nu Echo Inc.

 

A104. Creating Superior Conversations With Intelligent Assistants

Monday, April 29: 2:15 p.m. - 3:00 p.m.

We talk and type to our smartphones, smart speakers, and other devices in a conversational style, but how conversational are these interfaces? What happens when things go wrong—when the user or agent needs to ask for clarification, when something needs to be corrected, or when something has been misunderstood? Learn whether and how these “edge cases” are being handled using available tools and what new approaches are being developed in research labs.

Speaker:

, Professor, Ulster University

 

A105. Expert Perspectives

Monday, April 29: 3:15 p.m. - 4:00 p.m.

Learn more about the latest technological developments in speech technology from leading industry companies in Expert Perspectives sessions. These sessions are open to all full-conference attendees and Networking Pass holders. Check back for session details or follow #SpeechTEK on Twitter for updates.

 

A106. Marry Visuals With Bots for Twice the Customer Experience

Monday, April 29: 4:15 p.m. - 5:00 p.m.

Phonebots are especially useful when enhanced with visual information, such as an instantly viewed, tappable menu of options instead of a long sequence of questions or a spoken list of options. Adding a visual component—maps, photos, video snippets, menus, graphics, diagrams, short documents—to an ordinary phone call clarifies users’ requests and encourages customers to stay with the automated bot until they reach a satisfactory resolution.

Speaker:

, Strategic Sales Consultant, Radish Systems

 

Tuesday, Apr 30

Track A: Managers

 

A201. Expert Perspectives

Tuesday, April 30: 10:45 a.m. - 11:30 a.m.

Learn more about the latest technological developments in speech technology from leading industry companies in Expert Perspectives sessions. These sessions are open to all full-conference attendees and Networking Pass holders. Check back for session details or follow #SpeechTEK on Twitter for updates.

 

A202. PANEL: Problem Solving in the Age of Microservices

Tuesday, April 30: 11:45 a.m. - 12:30 p.m.

When all your technology resides in-house or with a single vendor, you can find all the data you need to monitor performance, resolve errors, and make improvements. If you rely on microservices from multiple vendors, however, you might not even notice errors without careful planning. This panel focuses on strategies and possible solutions for solving problems in a multi-vendor, microservices environment.

Moderator:

, President, Disaggregate Corporation


Speaker:

, CTO & Technical Lead, TEN DIGIT Communications

 

Keynote Lunch - sponsored by Google Cloud

Tuesday, April 30: 12:30 p.m. - 1:45 p.m.

Check back for the latest details.

 

A203. Delivering AI Directly Within the Telephony Fabric

Tuesday, April 30: 1:45 p.m. - 2:30 p.m.

This talk presents basic AI and ML architectures and discusses current AI limitations. Some environments, such as voice networks, require a different and unique AI approach to deliver value. We discuss ambient AI and how it differs from Siri or Alexa. Finally, we give the audience a few pointers to cloud-based tools that make AI accessible to any developer, along with a demo. This presentation is a collaboration between Phone.com and Second Mind.

Speaker:

, EVP & CTO, Phone.com, Inc.

 

A204. Conversational AI in a Disconnected World

Tuesday, April 30: 2:45 p.m. - 3:30 p.m.

Delivering a conversational experience when a device is disconnected, or only occasionally connected, to the internet has its own unique requirements. How do you deploy a large-vocabulary speech recognition engine? How do you update the content? In this session, we explore multiple options in disconnected and intermittently connected technologies and demonstrate capabilities from multiple vendors. Attendees come away with a better understanding of what is possible in a disconnected world and some of the architectures and technologies that can make this happen.

Speaker:

, Senior Creative Technologist, Ship Side Technologies, Virgin Voyages

 

A205. AI-Powered Customer Experience Testing for an AI-Powered World

Tuesday, April 30: 4:15 p.m. - 5:00 p.m.

Learn how to use AI to validate AI in the realm of outside-in CX testing. The strategy for synthesizing “virtual bot testers” from linguistic and machine learning algorithms is closer than you think. This session covers machine learning algorithms for scoring bot response accuracy and maintaining proper conversational context; conversational scenario generation to stretch the limits of NLU models; configuring bots to execute regression testing in an agile, iterative delivery cycle; and customer usage examples.
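The regression-testing idea described here can be sketched in miniature: run scripted scenarios through a bot and score each reply against an expected answer. A minimal illustration, assuming a hypothetical token-overlap (Jaccard) score in place of the session's ML-based scoring:

```python
def response_score(expected, actual):
    """Jaccard token overlap between expected and actual bot replies (0.0-1.0)."""
    a, b = set(expected.lower().split()), set(actual.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

def run_regression(cases, bot, threshold=0.5):
    """Run each (utterance, expected reply) pair through the bot;
    return the cases whose reply scored below the threshold."""
    failures = []
    for utterance, expected in cases:
        actual = bot(utterance)
        if response_score(expected, actual) < threshold:
            failures.append((utterance, expected, actual))
    return failures

# Toy bot for illustration: canned answers keyed by utterance.
demo_bot = {"what are your hours": "we are open 9 to 5 on weekdays"}.get

cases = [("what are your hours", "open 9 to 5 weekdays")]
failures = run_regression(cases, lambda u: demo_bot(u, ""), threshold=0.4)
```

A production tester would replace the overlap score with a learned similarity model and generate the scenarios automatically, as the session describes.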

Speaker:

, Vice President Product Management, Cyara

 

Wednesday, May 1

Track A: Managers

 

A301. Putting the Voice Assistants to the Test: Surprising Results in the Real World

Wednesday, May 1: 10:45 a.m. - 11:30 a.m.

Cognilytica recently tested voice assistants from Amazon, Google, Apple, Microsoft, and others and quickly realized just how un-intelligent these devices are. Many are not able to provide answers to very simple questions that require simple decision making or reasoning. These assistants provide inconsistent answers among platforms and can’t deal well with variable sentence structure and other issues. We identify where these voice assistants are failing, what sort of intelligence needs to be built into the devices to make them smarter and more useful, and the current pitfalls and opportunities for companies looking to build the next generation of voice assistant.

Speakers:

, Principal Analyst, Cognilytica

, Analyst, Cognilytica

 

A302. On Weakness Exploitation in Deep Neural Networks

Wednesday, May 1: 11:45 a.m. - 12:30 p.m.

During the past 10 years, deep neural networks have transformed the field of speech recognition. However, we are still discovering some peculiarities of these networks, such as how susceptible they are to attacks. By adding an extremely small but controlled noise that is imperceptible to humans, any ordinary speech or music recording can be modified so that the recognizer produces a transcript of the attacker’s choice. We give some theoretical background on this vulnerability and provide real examples of modified audio.

Speaker:

, CEO, GoVivace Inc.
