SpeechTEK 2019 - Track A: Managers

Track A, targeted primarily at managers, is devoted to conversational dialog systems and personal assistants: how to design, build, and test them, and how to avoid the pitfalls and roadblocks of this new technology. It explains what you need to know to make informed decisions about building conversational agents.


Monday, Apr 29

Track A: Managers


A101. A Comprehensive Guide to Technologies for Conversational Systems


Monday, April 29: 10:30 a.m. - 11:15 a.m.

Choosing the right platform for a particular application is critical. We focus on the capabilities of current natural language understanding and dialog management technologies. We present evaluation criteria and compare and contrast a number of popular platforms. We consider technical and non-technical features such as cost, vendor commitment, support, and cloud vs. on-premise operation. We review the state of the art for these technologies and conclude with comments on advanced dialog features that are currently being researched.


, Principal, Conversational Technologies

, CEO, Nu Echo Inc.


A102. Building Customer Service Digital Assistants


Monday, April 29: 11:30 a.m. - 12:15 p.m.

An automated digital assistant—whether a text chatbot or a voice-interactive part of your telephone customer service—can help customers receive quick results and help agents focus on more complex tasks. Today’s natural language technology can make the experience fast and pleasant—when done properly. Like any evolving technology, it can also be done poorly. This talk discusses how to achieve effective solutions using automated natural language interaction.


, President, TMA Associates


Keynote Lunch - The future is conversational, omnichannel, and in the cloud


Monday, April 29: 12:15 p.m. - 1:15 p.m.

Today, IVRs are treated as a containment strategy to keep calls from reaching contact center agents. They focus on operational efficiency instead of customer experience. No wonder most users hate IVRs: 60% try to bypass them as soon as possible! The irony is that focusing on a great customer experience is the more effective route to operational efficiency and cost savings, while also delivering high customer satisfaction scores.

We believe the future of customer engagement is conversational because conversations are at the heart of great customer experiences. Customers will interact with systems by speaking or texting naturally rather than pressing keys on their phones or reciting pre-determined commands. Conversational interfaces will allow businesses to route and handle hundreds of customer issues that wouldn't normally fit in a touch-tone IVR menu or even a mobile app. Customers won’t need to learn how to use conversational interfaces because they can just interact with them naturally.

In this talk, we will demonstrate how to build a conversational assistant, train it, and deploy it to an IVR and a web chatbot. We will address the biggest challenges: handling speech recognition inaccuracies, error handling, omnichannel deployments, and conversation state tracking. We will cover conversational UX best practices, as well as how to give your intelligent assistant a unique voice and tone. After this talk, you will be equipped to launch your IVR, chatbot, and Alexa skill with Twilio Autopilot.


, Director of Product and Engineering, Twilio AI


A103. Best Practices for Bootstrapping an NLU System With Generated Sentences


Monday, April 29: 1:15 p.m. - 2:00 p.m.

Good NLU accuracy requires a sizeable training corpus made of sentences that represent expected responses from real users. How are new chatbots developed when there is little or no training data available? We present best practices for generating an NLU training corpus that can train a fairly robust NLU system for a customer-facing chatbot, making it possible to quickly start collecting real sentences from real users.


, CEO, Nu Echo Inc.
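The bootstrapping approach described in the abstract above can be illustrated with a small, self-contained sketch. The intent name, templates, and slot values below are invented for illustration; the session's actual recommendations and tooling will differ:

```python
from itertools import product

# Hypothetical intent templates; {slot} placeholders are expanded
# combinatorially to generate a bootstrap NLU training corpus.
templates = {
    "check_balance": [
        "{greeting} what is my {account} balance",
        "how much is in my {account} account",
    ],
}
slots = {
    "greeting": ["hi", "hello", ""],
    "account": ["checking", "savings"],
}

def expand(template, slots):
    """Yield every sentence produced by filling each slot placeholder."""
    names = [n for n in slots if "{" + n + "}" in template]
    for values in product(*(slots[n] for n in names)):
        sentence = template
        for name, value in zip(names, values):
            sentence = sentence.replace("{" + name + "}", value)
        yield " ".join(sentence.split())  # collapse doubled spaces

corpus = [(intent, sentence)
          for intent, tpls in templates.items()
          for tpl in tpls
          for sentence in expand(tpl, slots)]

for intent, sentence in corpus:
    print(intent, "->", sentence)
```

Even a small template set multiplies out quickly (the two templates above yield eight labeled sentences), which can be enough to stand up a first customer-facing model and start collecting real user utterances to replace the generated ones.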


A104. Creating Socialbots with Human-Like Conversational Abilities


Monday, April 29: 2:15 p.m. - 3:00 p.m.

Human conversation is significantly more complex than what we see in automated systems. For the past two years, Amazon has sponsored the Alexa Prize competition with the goal of building a socialbot capable of sustaining a 20-minute conversation. This past year, eight teams competed, and while their approaches shared some common characteristics, they also included some novel (and interesting) ideas. This talk reviews progress in conversational AI and describes techniques developed for socialbots.


, Research Professor, School of Computer Science, Carnegie Mellon University


A105. Expert Perspectives: Interactive Media North America and TTEC


Monday, April 29: 3:15 p.m. - 4:00 p.m.

Learn more about the latest technological developments in speech technology from leading industry companies in Expert Perspectives sessions. These sessions are open to all full-conference attendees and Networking Pass holders.

Give your chatbot the gift of voice
3:15 p.m. - 3:35 p.m.

Millions of businesses implement chatbots, with mixed success. Chatting online is useful, but a voice conversation is often better: speaking is faster than typing, safer, and much more natural, and everyone can use a phone. In this session, we discuss how to add voice and telephony to your bot.


, President and CEO, Interactive Media North America

Utilizing AI in your Customer Service Channel Journey
3:40 p.m. - 4:00 p.m.

As part of their overall AI strategies, organizations are struggling with how best to leverage AI to provide a seamless customer experience throughout all their channels. In this session, we discuss how to securely empower your speech-enabled IVR with AI and seamlessly extend that AI experience to an enriched associate interaction when needed.


, Executive Director of the Speech Solutions Professional Services, TTEC


A106. Marry Visuals With Bots for Twice the Customer Experience


Monday, April 29: 4:15 p.m. - 5:00 p.m.

Phonebots are especially useful when enhanced with visual information, such as an instantly viewed, tappable menu of options instead of a long sequence of questions or a spoken list of options. Adding a visual component—maps, photos, video snippets, menus, graphics, diagrams, short documents—to an ordinary phone call clarifies the users’ requests and can encourage customers to stay on the automated bot to reach a satisfactory resolution.


, Strategic Sales Consultant, Radish Systems


Tuesday, Apr 30

Track A: Managers


A201. Expert Perspectives: GridSpace


Tuesday, April 30: 10:45 a.m. - 11:30 a.m.


The Connected Agent Journey: AI and AHT
10:45 a.m. - 11:30 a.m.

Nobody likes to hold for a new customer service agent, so last year Gridspace began testing a real-time machine coach to help agents quickly find the right answers. Now the results are in: come learn how Gridspace Relay reduces average handle times and callbacks in production.


, Co-Founder, Co-Head of Engineering, Gridspace

, Head of Sales and Marketing, Gridspace


A202. PANEL: Problem Solving in the Age of Microservices


Tuesday, April 30: 11:45 a.m. - 12:30 p.m.

When all your technology resides in-house or with a single vendor, you can find all the data you need to monitor performance, resolve errors, and make improvements. However, if you rely on microservices from multiple vendors, you might not even notice errors without careful planning. This talk focuses on strategies and possible solutions for solving problems in a multi-vendor, microservices environment.


, President, Disaggregate Corporation


, CTO & Technical Lead, TEN DIGIT Communications

, CEO, Telnyx


Keynote Lunch - The Intelligent Contact Center


Tuesday, April 30: 12:30 p.m. - 1:45 p.m.

AI can now help improve contact centers in ways that, until just a few years ago, were not possible. Google Cloud AI enables anyone to tap into AI built on Google technology that until recently was exclusive to Google employees. This includes our pretrained, ready-to-use models, including speech recognition that is now twice as accurate for phone calls, WaveNet-based neural network speech synthesis, conversational NLU, and conversational analytics. Together with partners, Google is now bringing this technology to contact centers via Contact Center AI solutions. Companies with contact centers of all sizes can now automate conversational experiences and improve the performance of human agents.


, Product Manager, Google Cloud AI


A203. Delivering AI Directly Within the Telephony Fabric


Tuesday, April 30: 1:45 p.m. - 2:30 p.m.

The talk shows basic AI and ML architectures and discusses current AI limitations. Some environments such as voice networks require a different and unique AI approach to deliver value. We discuss the topic of ambient AI and how it differs from Siri or Alexa. Finally, we give the audience a few pointers about cloud-based tools that make AI accessible to any developer, while showing a demo. This presentation is a collaboration between Phone.com and Second Mind.


, EVP & CTO, Phone.com, Inc.


A204. Conversational AI in a Disconnected World


Tuesday, April 30: 2:45 p.m. - 3:30 p.m.

Delivering a conversational experience when disconnected, or only occasionally connected, to the internet has its own unique requirements. How do you deploy a large-vocabulary speech recognition engine? How do you update the content? In this session, we explore multiple options in disconnected and sometimes-connected technologies and demonstrate capabilities from multiple vendors. Attendees come away with a better understanding of what is possible in a disconnected world and some of the architectures and technologies that can make it happen.


, Senior Creative Technologist, Ship Side Technologies, Virgin Voyages


A205. AI-Powered Customer Experience Testing for an AI-Powered World


Tuesday, April 30: 4:15 p.m. - 5:00 p.m.

Learn how to use AI to validate AI in the realm of outside-in CX testing. A strategy for synthesizing “virtual bot testers” from linguistic and machine learning algorithms is closer than you think. This session covers machine learning algorithms for scoring bot response accuracy and maintaining proper conversational context; conversational scenario generation to stretch the limits of NLU models; configuring bots to execute regression testing in an agile, iterative delivery cycle; and customer usage examples.


, Vice President Product Management, Cyara


Wednesday, May 1

Track A: Managers


A301. Putting the Voice Assistants to the Test: Surprising Results in the Real World


Wednesday, May 1: 10:45 a.m. - 11:30 a.m.

Cognilytica recently tested voice assistants from Amazon, Google, Apple, Microsoft, and others and quickly realized just how unintelligent these devices are. Many cannot answer very simple questions that require basic decision making or reasoning. These assistants give inconsistent answers across platforms and can’t deal well with variable sentence structure, among other issues. We identify where these voice assistants are failing, what sort of intelligence needs to be built into the devices to make them smarter and more useful, and the current pitfalls and opportunities for companies looking to build the next generation of voice assistants.


, Principal Analyst, Cognilytica

, Analyst, Cognilytica


A302. On Weakness Exploitation in Deep Neural Networks


Wednesday, May 1: 11:45 a.m. - 12:30 p.m.

Over the past 10 years, deep neural networks have transformed the field of speech recognition. However, we are still discovering peculiarities of these networks, such as how susceptible they are to attacks. By adding an extremely small but controlled noise that is imperceptible to humans, any ordinary speech or music recording can be modified to generate a transcript of the attacker's choice. We give some theoretical background on this vulnerability and provide real examples of modified audio.


, CEO, GoVivace Inc.
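The attack described in the abstract above relies on gradients of the network with respect to its input. As a toy, self-contained illustration (the "audio" signal and the linear model below are synthetic stand-ins, not a real speech recognizer), a gradient-sign perturbation that flips a classifier's decision while staying small per sample might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "audio" (1,000 samples) and a stand-in linear model: the
# classifier's decision is the sign of w @ x. A real recognizer is a deep
# network, but the attack idea is the same: follow the input gradient.
x = rng.normal(0.0, 0.1, 1000)
w = rng.normal(0.0, 1.0, 1000)

def decision(x):
    return np.sign(w @ x)

# For a linear score the gradient w.r.t. the input is just w, so the
# cheapest way to move the score (per unit of maximum sample amplitude)
# is a step of eps * sign(w). Pick eps just large enough to cross the
# decision boundary.
score = float(w @ x)
eps = abs(score) / np.sum(np.abs(w)) * 1.01
x_adv = x - np.sign(score) * eps * np.sign(w)

print(eps)                           # per-sample change, small vs. the signal
print(decision(x), decision(x_adv))  # the label flips
```

For a deep network the input gradient is obtained by backpropagation rather than read off directly, and targeted transcript attacks optimize over an entire utterance, but the principle is the same: many tiny, gradient-aligned per-sample changes add up to a large change in the model's output.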
