Conference Program

At SpeechTEK 2019, join other executives, engineers, developers, users, and adopters of speech technologies to learn, share, and celebrate the trends and technologies shaping the future of speech technology, artificial intelligence, and customer interactions. This year's conference features tracks for Managers, Voice Interface Experience Designers, and Implementers.

Final Program [PDF]

 

Sunday, Apr 28

SpeechTEK University - Preconference Workshops

 

STKU-1. Natural Language Understanding


Sunday, April 28: 1:30 p.m. - 4:30 p.m.

Natural language understanding (along with speech recognition) is one of the foundational technologies underlying the Voice-First revolution. When it works well, the user experience is natural, frictionless, and efficient. When it doesn’t work well, the results can be frustrating and irritating. This session brings attendees up to date on current natural language understanding technology, explaining how it works and what’s going wrong when it doesn’t. We cover current technologies, including both traditional rule-based approaches and machine learning technologies such as deep learning. We also review current proprietary natural language application tools, such as the Amazon Alexa Skills Kit, Google Dialogflow, and Microsoft LUIS, and discuss open source alternatives. Attendees come away from the session with an understanding of current natural language technology, its capabilities, and its future directions.
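To make the contrast concrete, here is a minimal sketch of the traditional rule-based approach: a toy grammar mapping regular-expression patterns to intents and slots. The intents, patterns, and utterances are invented for illustration and are not part of the workshop materials.

```python
import re

# Hypothetical toy grammar: each rule maps a regex pattern to an intent,
# with named groups acting as slots. Real rule-based NLU engines use far
# richer grammars; this only illustrates the idea.
RULES = [
    (re.compile(r"(?:set|create) an? alarm for (?P<time>\d{1,2}(?::\d{2})? ?(?:am|pm))",
                re.I),
     "SetAlarm"),
    (re.compile(r"what(?:'s| is) the weather in (?P<city>[a-z ]+)", re.I),
     "GetWeather"),
]

def parse(utterance):
    """Return (intent, slots) for the first matching rule, or a fallback."""
    for pattern, intent in RULES:
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return "Fallback", {}

intent, slots = parse("Set an alarm for 7:30 am")
print(intent, slots)  # SetAlarm {'time': '7:30 am'}
```

Machine learning approaches such as deep learning replace these hand-written rules with models trained on example utterances, which is why tools like Dialogflow and LUIS ask for sample phrases rather than grammars.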

Speaker:

, Principal, Conversational Technologies

 

STKU-2. Principles of Conversational Design


Sunday, April 28: 1:30 p.m. - 4:30 p.m.

This interactive session is suitable for anyone seeking deeper experience and understanding of conversational design, including anyone working with home automation devices (e.g., Alexa), chatbots, or conversational IVR. The session takes an in-depth look at the principles underlying conversational design, with particular emphasis on human-computer conversation. It will benefit designers and decision makers who would appreciate a deeper understanding of the different aspects of conversational design. Topics include the following: principles of speech recognition, including semantics, slots, and parsing; human psychology, including memory and learning; dialogue acts, grounding, discourse markers, and confirmation; language continuity, including anaphora; and narrative voice, persona, and social identity.

Speaker:

, Senior Scientist, Enterprise Integration Group

 

STKU-3. Cancelled - Learn How to Build Engaging Voice Experiences for Amazon Alexa


Sunday, April 28: 1:30 p.m. - 4:30 p.m.

Alexa is Amazon’s cloud-based voice service and the brain behind tens of millions of devices, including the Echo family of devices, FireTV, Fire Tablet, and third-party devices with Alexa built-in. You can build capabilities, or skills, that make Alexa smarter and make everyday tasks faster, easier, and more delightful for customers. Tens of thousands of developers have built skills using the Alexa Skills Kit (ASK), a collection of self-service APIs, tools, documentation, and code samples. This is a hands-on workshop where attendees learn how to create voice experiences for Amazon Alexa. We discuss voice design best practices, show how to leverage cloud services and APIs, discuss the latest Alexa features, and share code samples to get your project started. The final hour of the session is set aside for open hacking time, where you can get one-on-one support from an Amazon Alexa solutions architect. Developers who wish to follow along should create accounts on the following sites before attending: aws.amazon.com and developer.amazon.com.
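As a taste of what the workshop builds, here is a simplified sketch of the JSON envelope an Alexa skill's backend returns to the Alexa service. The intent name and response text are hypothetical; production skills would normally use the Alexa Skills Kit SDK rather than constructing the response dict by hand.

```python
def handler(event, context=None):
    """Minimal AWS Lambda-style handler for an Alexa skill request.

    Simplified sketch of the Alexa Skills Kit request/response JSON;
    the HelloIntent name and texts are invented for illustration.
    """
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        text = "Welcome! Ask me for a greeting."
    elif (request.get("type") == "IntentRequest"
          and request.get("intent", {}).get("name") == "HelloIntent"):
        text = "Hello from your first skill!"
    else:
        text = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Simulate the JSON Alexa would POST to the skill endpoint:
event = {"request": {"type": "IntentRequest", "intent": {"name": "HelloIntent"}}}
print(handler(event)["response"]["outputSpeech"]["text"])  # Hello from your first skill!
```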

Speaker:

, Solutions Architect, Amazon Alexa

Monday, Apr 29

Welcome & Opening Keynotes

 

Opening Keynote: Algorithms in, Humans out?


Monday, April 29: 9:00 a.m. - 10:00 a.m.

This presentation discusses the most important technological development of the coming years: artificial intelligence. What can companies expect? What should we do as more and more human skills, such as looking, listening, talking, reading, and reasoning, are taken over by these kinds of systems? And what will the future of artificial intelligence look like? What is definitely possible, and what isn't? How does this change the relationship with your customer?

The future also promises the autonomous "digital butler," which gives you answers before you know you need them and helps you before you know you need help. But does it have only advantages? What are the unintended, unforeseen disadvantages of this technological trend? Just as we should have done with the introduction of social media and the smartphone, shouldn't we be asking ourselves that one important question: what do we want the era of artificial intelligence to look like?

Speaker:

, Speaker, trendwatcher, author, Studio Overmorgen

 

Keynote - How Companies are Partnering with Conversational Machines


Monday, April 29: 10:00 a.m. - 10:15 a.m.

Many machine-human partnerships are starting to take shape in modern contact centers. Today, machines make it possible to query and classify vast numbers of conversational interactions. Soon, machines will become increasingly proactive, conversational, and helpful. In this session we will explore what real contact center tasks are best suited for machines today, and how agents and machines can work together most effectively.

Speaker:

, Co-Founder, Co-Head of Engineering, Gridspace

 

Monday, Apr 29

Track A: Managers

 

A101. A Comprehensive Guide to Technologies for Conversational Systems


Monday, April 29: 10:30 a.m. - 11:15 a.m.

Choosing the right platform for a particular application is critical. We focus on the capabilities of current natural language understanding and dialog management technologies. We present evaluation criteria and compare and contrast a number of popular platforms. We consider technical and non-technical features such as cost, vendor commitment, support, and cloud vs. on-premise operation. We review the state of the art for these technologies and conclude with comments on advanced dialog features that are currently being researched.
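One common way to operationalize such evaluation criteria is a simple weighted scoring matrix. The criteria weights, platform names, and ratings below are entirely hypothetical placeholders, not the session's actual comparison:

```python
# Hypothetical weights over evaluation criteria (must sum to 1.0) and
# invented 1-5 ratings for two made-up platforms.
WEIGHTS = {"nlu_accuracy": 0.4, "cost": 0.2, "vendor_support": 0.2, "on_premise": 0.2}

platforms = {
    "PlatformA": {"nlu_accuracy": 4, "cost": 3, "vendor_support": 5, "on_premise": 1},
    "PlatformB": {"nlu_accuracy": 3, "cost": 5, "vendor_support": 3, "on_premise": 5},
}

def score(ratings):
    """Weighted sum of 1-5 ratings across the evaluation criteria."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

ranked = sorted(platforms, key=lambda p: score(platforms[p]), reverse=True)
# PlatformB first (3.8), then PlatformA (3.4)
for name in ranked:
    print(f"{name}: {score(platforms[name]):.1f}")
```

Non-technical criteria such as vendor commitment are harder to rate than this sketch suggests, but making the weights explicit forces the trade-offs into the open.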

Speakers:

, Principal, Conversational Technologies

, CEO, Nu Echo Inc.

 

A102. Building Customer Service Digital Assistants


Monday, April 29: 11:30 a.m. - 12:15 p.m.

An automated digital assistant—whether a text chatbot or a voice-interactive part of your telephone customer service—can help customers receive quick results and help agents focus on more complex tasks. Today’s natural language technology can make the experience fast and pleasant—when done properly. Like any evolving technology, it can also be done poorly. This talk discusses how to achieve effective solutions using automated natural language interaction.

Speaker:

, President, TMA Associates

 

Keynote Lunch - The future is conversational, omnichannel, and in the cloud


Monday, April 29: 12:15 p.m. - 1:15 p.m.

Today, IVRs are treated as a containment strategy to avoid calls reaching contact center agents. They focus on operational efficiency instead of customer experience. No wonder most users hate IVRs — 60% of them try to bypass them as soon as possible! The irony is that focusing on a great customer experience is a more effective approach for operational efficiency and cost savings, while also delivering high customer satisfaction scores.

We believe the future of customer engagement is conversational because conversations are at the heart of great customer experiences. Customers will interact with systems by speaking or texting naturally rather than pressing keys on their phones or reciting pre-determined commands. Conversational interfaces will allow businesses to route and handle hundreds of customer issues that wouldn't normally fit in a touch-tone IVR menu or even a mobile app. Customers won’t need to learn how to use conversational interfaces because they can just interact with them naturally.

In this talk we will demonstrate how to build a conversational assistant, train it and deploy it to our IVR and web chatbot. We will address the biggest challenges such as handling speech recognition inaccuracies, error handling, omnichannel deployments, and conversation state tracking. We will cover the conversational UX best practices as well as how to give your intelligent assistant a unique voice and tone. After this talk you will be equipped to launch your IVR, chatbot, and Alexa skill with Twilio Autopilot.
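As a flavor of the IVR side of such a deployment, here is a dependency-free sketch of the TwiML a voice webhook might return to gather speech input. TwiML's Gather verb with input="speech" is documented Twilio markup, but the prompt and action URL here are hypothetical, and the twilio helper library would normally generate this XML:

```python
import xml.etree.ElementTree as ET

def speech_gather(prompt, action_url):
    """Build a TwiML <Response> that gathers speech input.

    Uses only the standard library; twilio.twiml.voice_response
    would normally generate this markup.
    """
    response = ET.Element("Response")
    gather = ET.SubElement(response, "Gather",
                           {"input": "speech", "action": action_url})
    ET.SubElement(gather, "Say").text = prompt
    return ET.tostring(response, encoding="unicode")

# Hypothetical webhook URL for illustration.
twiml = speech_gather("How can I help you today?", "/handle-speech")
print(twiml)
```

The transcribed speech is then posted to the action URL, where the NLU (Autopilot, in this talk's stack) classifies the caller's intent.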

Speaker:

, Director of Product and Engineering, Twilio AI

 

A103. Best Practices for Bootstrapping an NLU System With Generated Sentences


Monday, April 29: 1:15 p.m. - 2:00 p.m.

Good NLU accuracy requires a sizeable training corpus made of sentences that represent expected responses from real users. How are new chatbots developed when there is little or no training data available? We present best practices for generating an NLU training corpus that can easily train a fairly robust NLU system for use in a customer-facing chatbot, which makes it possible to quickly start collecting real sentences from real users.
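A common way to generate such a corpus is to expand sentence templates with lists of slot values. The templates and slot fillers below are hypothetical examples, not the specific practices presented in the session:

```python
from itertools import product

# Hypothetical templates and slot fillers for a banking chatbot.
TEMPLATES = [
    "I want to {verb} my {account}",
    "how do I {verb} my {account}",
]
SLOTS = {
    "verb": ["check", "close", "open"],
    "account": ["savings account", "credit card"],
}

def generate(templates, slots):
    """Expand every template with every combination of slot values."""
    sentences = []
    for template in templates:
        names = sorted(slots)
        for values in product(*(slots[n] for n in names)):
            sentences.append(template.format(**dict(zip(names, values))))
    return sentences

corpus = generate(TEMPLATES, SLOTS)
print(len(corpus))     # 12 generated training sentences
print(corpus[0])       # I want to check my savings account
```

Generated sentences like these can bootstrap a first model; they should be progressively replaced with real user utterances once the chatbot is live.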

Speaker:

, CEO, Nu Echo Inc.

 

A104. Creating Socialbots with Human-Like Conversational Abilities


Monday, April 29: 2:15 p.m. - 3:00 p.m.

Human conversation is significantly more complex than what we see in automated systems. For the past two years, Amazon has sponsored the Alexa Prize competition with the goal of building a socialbot capable of sustaining a twenty-minute conversation. This past year, eight teams competed, and while the approaches shared some common characteristics, they included some novel (and interesting) ideas. This talk reviews progress in conversational AI and describes techniques developed for socialbots.

Speaker:

, Research Professor, School of Computer Science, Carnegie Mellon University

 

A105. Expert Perspectives: Interactive Media North America and TTEC


Monday, April 29: 3:15 p.m. - 4:00 p.m.

Learn more about the latest technological developments in speech technology from industry-leading companies in Expert Perspectives sessions. These sessions are open to all full-conference attendees and Networking Pass holders.

Give your chatbot the gift of voice
3:15 p.m. - 3:35 p.m.

Millions of businesses implement chatbots, with mixed success. Chatting online is useful, but often a voice conversation is better. Speaking is faster than typing, safer, and much more natural, and everyone can use a phone. In this session, we discuss how to add voice and telephony to your bot.

Speaker:

, President and CEO, Interactive Media North America

Utilizing AI in your Customer Service Channel Journey
3:40 p.m. - 4:00 p.m.

As part of overall AI strategies, organizations are struggling with how best to leverage AI to provide a seamless customer experience across all their channels. In this session, we discuss how to securely empower your speech-enabled IVR with AI and seamlessly extend that AI experience to an enriched associate interaction when needed.

Speaker:

, Executive Director of the Speech Solutions Professional Services, TTEC

 

A106. Marry Visuals With Bots for Twice the Customer Experience


Monday, April 29: 4:15 p.m. - 5:00 p.m.

Phonebots are especially useful when enhanced with visual information, such as an instantly viewed, tappable menu of options instead of a long sequence of questions or a spoken list of options. Adding a visual component—maps, photos, video snippets, menus, graphics, diagrams, short documents—to an ordinary phone call clarifies the users’ requests and can encourage customers to stay on the automated bot to reach a satisfactory resolution.

Speaker:

, Strategic Sales Consultant, Radish Systems

 

Monday, Apr 29

Track B: Voice Interface Experience Designers

 

B101. Challenges of Implementing Voice Control for Space Applications


Monday, April 29: 10:30 a.m. - 11:15 a.m.

This presentation provides the audience with an overview of the challenges for implementation of voice control in space applications that include the hardware, software, environment, and, more importantly, the astronaut. Past voice control applications in space are given. Learn how to apply key learnings from these applications to applications here on Earth.

Speaker:

, Human-Computer Technical Discipline Lead, Avionics Systems, NASA/Johnson Space Center

 

B102. Augmenting United Way’s Help Center


Monday, April 29: 11:30 a.m. - 12:15 p.m.

Come hear how United Way implemented a visual IVR with Amazon Connect to augment its existing call center. The goal is to help those in need 24/7, so that more people quickly find assistance. The visual IVR provides smartphone callers with emergency shelter locations in an easy-to-understand visual format. The visual IVR shortens calls, improves call containment through visual self-service, decreases switching to live staff, and reduces follow-up calls.

Speaker:

, 2-1-1 Manager, Mile High United Way

 


 

B103. ‘Hera,’ the Avatar for Pregnant Women


Monday, April 29: 1:15 p.m. - 2:00 p.m.

See video clips of this fascinating avatar as a virtual personal assistant for pregnant women. Learn how focus groups revealed what women really want from this application and the importance of the user-avatar relationship to the success of the project. Learn the strategy behind its design, including technical and user interface considerations. Discover which features, including trust, reliability, and visuality, are most important to real users.

Speaker:

, Head of Multidisciplinary Studies, HIT Holon Institute of Technology, Israel

 

B104. Using Speech Technology to Understand Brain Injuries


Monday, April 29: 2:15 p.m. - 3:00 p.m.

Over the last few years, brain injuries have moved to the forefront of health issues and concerns, particularly highlighted by challenges in professional sports. With greater attention to the issue, the ability to diagnose injuries is evolving. Now, there is an opportunity to leverage speech technology to more immediately identify injuries. Working with MindSquare, AppTek is assisting in the diagnosis of concussions (or mild traumatic brain injuries) using a mobile device.

Speaker:

, CEO, MindSquare

 

B105. Working Voice Into the Newsroom Workflow


Monday, April 29: 3:15 p.m. - 4:00 p.m.

Newsrooms have complex workflows for producing content to the highest journalistic standards; Gannett does this efficiently and at massive scale across its 100-plus news properties. Newsroom workflows have changed from print, to web, to mobile, and now to social. The voice revolution calls for another metamorphosis of the newsroom workflow. Learn how Goff transformed the newsroom workflow for voice with lessons learned from the web, mobile, and social revolutions.

Speaker:

, Technical Product Director, Gannett

 

B106. Expert Perspectives: Best practices for designing a voice bot


Monday, April 29: 4:15 p.m. - 5:00 p.m.

Bots have been around for a few years now, but most aren't considered great user experiences. In this session, you'll learn about the right elements for building a bot that provides a great user experience using Dialogflow and Cloud Speech technologies. We show you how to build a simple bot and describe advanced techniques you can use to increase your completion rate.

Speaker:

, Technical Solutions Consultant, Google Cloud

 

Monday, Apr 29

Track C: Implementers

 

C101. PANEL: How AI Improves the Call Center


Monday, April 29: 10:30 a.m. - 11:15 a.m.

Learn how AI is used in a call center environment to train, assist, monitor, and advise human agents as they interact with customers, as well as how to predict employee departures and prescribe targeted interventions. The panel discusses how a graphical representation of the client interaction assists the human agent, and how a combination of words and non-verbal analysis can detect the emotional state of customers and agents and guide agents in the moment to adjust their behavior for improved outcomes.

Moderator:

, Department of Business Analytics & Information Systems, School of Management, University of San Francisco


Speakers:

, CEO, VoiceVibes, Inc.

, Co-Founder & CEO, RankMiner Predictive Voice Analytics

, Senior Analyst, Salelytics

, CTO & Co-Founder, Cogito

 

C102. The Distorted Crystal Ball and the Future of Ambient Assistance


Monday, April 29: 11:30 a.m. - 12:15 p.m.

AI. Voice. Big Data. We are standing at one of the most profound inflection points in the history of technology. More than just buzzwords, each of these topics contains the very real seeds of transformation and disruption. But where to begin? This talk explores the impact of China's 2030 AI initiatives. The staggering adoption of these emerging technologies at scale in China has uncovered key principles that the rest of the world can learn from today. Exploring these topics provides both cautionary tales and a reliable road map for short- and long-term applications.

Speaker:

, Chief Creative Officer, RAIN

 


 

C103. PANEL: Using Biometrics & AI to Establish Trust


Monday, April 29: 1:15 p.m. - 2:00 p.m.

How can voice and behavior biometrics seamlessly verify that users are who they claim to be in real time? How can fraudsters be detected based on their voice prints, behavior anomalies, reconnaissance tactics, and robotic dialing techniques? Explore use cases and real-world examples for establishing security, identity, and trust between your organization and your customers. We share best practices and bloopers to help you achieve a successful voice biometrics deployment.

Moderator:

, CTO, ImmunityAnalytics


Speakers:

, Senior Manager, Commercial Security Strategy, Nuance Communications

, Solution Delivery Manager, LumenVox

, Director of Product Marketing, Pindrop

, Vice President, ID R&D

 

C104. New AI Techniques for Call Analytics and Call Routing


Monday, April 29: 2:15 p.m. - 3:00 p.m.

Behavioral speech analytics identifies typical patterns in prosodic content (the non-content parts of speech: intonation, pace, and emphasis) that reflect common behavioral patterns. It can provide a fairly strong prediction of an individual’s anticipated behavior in various life situations. Improved call classification and routing is achieved by combining speech technology with robust natural language understanding and other artificial intelligence techniques.
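To illustrate the kind of non-content features involved, here is a toy sketch that derives pause and pace statistics from a sequence of per-frame energy values. The numbers and threshold are invented; real behavioral analytics engines extract far richer prosodic features directly from audio:

```python
def prosodic_features(frame_energies, frame_ms=10, threshold=0.1):
    """Very rough pause/pace features from per-frame energy values.

    Toy illustration only: real systems also extract pitch contours,
    emphasis, rhythm, and other prosodic cues.
    """
    speech = [e > threshold for e in frame_energies]
    total = len(speech)
    voiced = sum(speech)
    # Count pause runs (maximal stretches of consecutive low-energy frames).
    pauses = sum(1 for i, s in enumerate(speech)
                 if not s and (i == 0 or speech[i - 1]))
    return {
        "speech_ratio": voiced / total,
        "pause_count": pauses,
        "duration_ms": total * frame_ms,
    }

# Toy energy sequence: speech, a short pause, then speech again.
energies = [0.5, 0.6, 0.7, 0.02, 0.03, 0.8, 0.9, 0.4]
print(prosodic_features(energies))
```

Feature vectors like this one, computed over whole calls, are what downstream classifiers consume for behavioral prediction and routing.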

Speakers:

, CEO, VoiceSense

, Chief Scientist of Speech Technology and AI, The MITRE Corporation

 

C105. Innovative Applications of Speech Technology From Academia


Monday, April 29: 3:15 p.m. - 4:00 p.m.

This session features descriptions and demonstrations of two of the most innovative applications created at universities, selected from presentations at scientific conferences. Presentations include:

- Visual, Laughter, Applause and Spoken Expression Features for Predicting Engagement Within TED Talks—How camera angles, audience’s laughter and applause, and the presenter’s speech expressions help in automatic detection of user engagement. 

- Cues to Deception in Text and Speech—How machines detect deceptive behavior. We describe a corpus for researching deceptive natural language, features that are useful cues to deception, and the role of individual differences in deceptive behavior.

Moderator:

, VP Speech Technology, CaptionCall


Panelists:

, Research Fellow, Usher Institute, Edinburgh Medical School, the University of Edinburgh, UK

, Reader, Usher Institute of Population Health Sciences and Informatics, Edinburgh Medical School, the University of Edinburgh, UK

, NLP Scientist, Interactions LLC

 

C106. With One Voice: Unifying Conversational Interfaces


Monday, April 29: 4:15 p.m. - 5:00 p.m.

During this talk, conversation designers from Allstate share their experiences designing for a variety of interfaces with the goal of creating a unified experience for the audiences they serve. Get practical ideas for how your team can start sharing data, establishing common patterns, and iterating designs based on user research. Attendees also see a case study showing how designers working on separate voice and chat products find common ground when working on the same subject matter.

Speakers:

, Conversation Design Lead, Allstate

, Conversation Designer, Allstate

Tuesday, Apr 30

Sunrise Discussions

 

SD201. Patent Law Is a Moving Target


Tuesday, April 30: 8:00 a.m. - 8:45 a.m.

New definitions associated with the Alice decision and the America Invents Act continue to be adjusted by the patent office and affected by court decisions. This morning session discusses new guidance about the Alice definition of Abstract Ideas and impacts of recent court decisions on the America Invents Act.

Speakers:

, Technologist and CEO, Spelamode Consulting

, Of Counsel, Tully Rinckey, PLLC

 

SD202. Knowledge Café: Meet the Consultants


Tuesday, April 30: 8:00 a.m. - 8:45 a.m.

Participate in the interactive Knowledge Café, where you can share your speech technology questions and challenges with colleagues and practitioners.

Topics and Consultants:

 - User Interfaces: David Attwater, Senior Scientist, Enterprise Integration Group

 - User Interfaces: Bruce Balentine, Chief Scientist, Enterprise Integration Group

 - Natural Language: Deborah Dahl, Principal, Conversational Technologies

 - Speaker Identification: Judith Markowitz, President, J. Markowitz Consultants

 - Intelligent Assistants: Michael McTear, Professor, Ulster University

 - Multimodal Systems: Nava A. Shaked, Head of Multidisciplinary Studies, HIT Holon Institute of Technology

 - Speech Technology Business Strategies: William Meisel, President, TMA Associates

Speakers:

, Principal, Conversational Technologies

, Senior Scientist, Enterprise Integration Group

, Chief Scientist Emeritus, Enterprise Integration Group

, President, J. Markowitz Consultants

, Head of Multidisciplinary Studies, HIT Holon Institute of Technology, Israel

, President, TMA Associates

 

SD203. ACIxD Conversational Design Wiki Workshop 1: Brainstorm Content & Organization


Tuesday, April 30: 8:00 a.m. - 8:45 a.m.

The ACIxD organization (formerly AVIxD) created a wiki to share best practices in VUI design, but the industry is fast outgrowing the wiki. VUI professionals now design all types of conversational interactions, and it’s time to update the wiki to reflect this. We invite all conversational designers to this interactive whiteboard session where we work to expand the wiki. Come prepared to participate and lend your expertise to brainstorm where to take the wiki next!

Speaker:

, Sr. Consultant, Human Factors, Concentrix

Tuesday, Apr 30

Keynote

 

Keynote - Digital Transformation: Driving CX Excellence


Tuesday, April 30: 9:00 a.m. - 9:45 a.m.

89% of executives say digitization will disrupt their business this year. Yet fewer than one-third believe that their digital strategy is correct, and only 21% believe the right people are setting their strategy. Why the disconnect?

Using real-time case studies from global, best-in-class companies, Barton will illustrate how these companies are using digital transformation to enhance customer experience.

Hear how five components – CRM, Data & Analytics, Social Media Communities, Customer Engagement, and Emerging Technologies – form an integrated framework for successful digital transformation. Learn how to assemble these components in bite-size chunks by following a long-term roadmap that focuses on critical people and process issues, as well as technology.

Speaker:

, President, ISM, Inc.

 

Keynote - Breaking barriers with an integrated software suite


Tuesday, April 30: 9:45 a.m. - 10:00 a.m.

This session will discuss the ways that an integrated software system can provide you deeper insights into your business and help you provide a better customer experience.

Speaker:

, Professor in Residence, Zoho Corporation

 

Tuesday, Apr 30

Track A: Managers

 

A201. Expert Perspectives: GridSpace


Tuesday, April 30: 10:45 a.m. - 11:30 a.m.

Learn more about the latest technological developments in speech technology from industry-leading companies in Expert Perspectives sessions. These sessions are open to all full-conference attendees and Networking Pass holders.

The Connected Agent Journey: AI and AHT
10:45 a.m. - 11:30 a.m.

Nobody likes to hold for a new customer service agent, so last year Gridspace began testing a real-time machine coach to help agents quickly find the right answers. Now the results are in. Come learn how Gridspace Relay reduces average handle times and callbacks in production.

Speakers:

, Co-Founder, Co-Head of Engineering, Gridspace

, Head of Sales and Marketing, Gridspace

 

A202. PANEL: Problem Solving in the Age of Microservices


Tuesday, April 30: 11:45 a.m. - 12:30 p.m.

When all your technology resides in-house or with a single vendor, you can find all the data you need to monitor performance, resolve errors, and make improvements. However, if you rely on microservices from multiple vendors, you might not even notice errors without careful planning. This talk focuses on strategies and possible solutions for solving problems in a multi-vendor/microservices environment.

Moderator:

, President, Disaggregate Corporation


Speakers:

, CTO & Technical Lead, TEN DIGIT Communications

, CEO, Telnyx

 

Keynote Lunch - The Intelligent Contact Center


Tuesday, April 30: 12:30 p.m. - 1:45 p.m.

AI can now help improve contact centers in ways that, until just a few years ago, were not possible. Google Cloud AI enables anyone to tap into AI built on Google technology that until recently was exclusive to Google employees. This includes pre-trained, ready-to-use models, including speech recognition that is now twice as accurate for phone calls, WaveNet-based neural network speech synthesis, conversational NLU, and conversational analytics. Together with partners, Google is now bringing this technology to contact centers via Contact Center AI solutions. Companies with contact centers of all sizes can now automate conversational experiences and improve the performance of human agents.

Speaker:

, Product Manager, Google Cloud AI

 

A203. Delivering AI Directly Within the Telephony Fabric


Tuesday, April 30: 1:45 p.m. - 2:30 p.m.

The talk shows basic AI and ML architectures and discusses current AI limitations. Some environments such as voice networks require a different and unique AI approach to deliver value. We discuss the topic of ambient AI and how it differs from Siri or Alexa. Finally, we give the audience a few pointers about cloud-based tools that make AI accessible to any developer, while showing a demo. This presentation is a collaboration between Phone.com and Second Mind.

Speaker:

, EVP & CTO, Phone.com, Inc.

 

A204. Conversational AI in a Disconnected World


Tuesday, April 30: 2:45 p.m. - 3:30 p.m.

When disconnected or only occasionally connected to the internet, delivering a conversational experience has its own unique requirements. How do you deploy a large-vocabulary speech recognition engine? How do you update the content? In this session, we explore multiple options in disconnected and sometimes-connected technologies and demonstrate multiple capabilities from multiple vendors. Attendees come away with a better understanding of what is possible in a disconnected world and some of the architectures and technologies that can make this happen.

Speaker:

, Senior Creative Technologist, Ship Side Technologies, Virgin Voyages

 

A205. AI-Powered Customer Experience Testing for an AI-Powered World

Tuesday, April 30: 4:15 p.m. - 5:00 p.m.

Learn how to use AI to validate AI in the realm of outside-in CX testing. The strategy for synthesizing “virtual bot testers” from linguistic and machine learning algorithms is closer than you think. This session covers machine learning algorithms for scoring bot response accuracy and maintaining proper conversational context; conversational scenario generation to stretch the limits of NLU models; configuring bots to execute regression testing in an agile, iterative delivery cycle; and customer usage examples.

Speaker:

, Vice President Product Management, Cyara

 

Tuesday, Apr 30

Track B: Voice Interface Experience Designers

 

B201. Discoverability in Spoken User Interfaces

Tuesday, April 30: 10:45 a.m. - 11:30 a.m.

Non-trivial user interfaces—those that require multiple turns to accomplish complex tasks—benefit when user and machine adapt to each other. Champions of voice claim that speech uniquely exhibits this plasticity. But it doesn’t unless the interface is designed to be discoverable. Discoverability requires systemic characteristics including trust, user-initiated backup moves, rewards for experimentation, and internal transparency. This session discusses specific design techniques that allow and encourage user exploration with low risk and a likely early payoff.

Speaker:

, Chief Scientist Emeritus, Enterprise Integration Group

 

B202. Return of the User! Usability Principles for Designing Skills That Stick.

Tuesday, April 30: 11:45 a.m. - 12:30 p.m.

Javalagi uses the analogy of “bad first dates” and “failed engagements” to highlight, with examples, the common pitfalls designers must avoid in order to create voice experiences that people enjoy and, most importantly, come back to. We discuss why so many voice interactions don’t go beyond the “first date.” Using dating as a metaphor, Javalagi explores how the subtle conventions of human-human interaction suggest key guiding principles for designing delightful and meaningful voice interactions.

Speaker:

, Lead, UX Research and Design, Witlingo

 

Keynote Lunch - The Intelligent Contact Center

Tuesday, April 30: 12:30 p.m. - 1:45 p.m.

AI can now help improve contact centers in ways that, until just a few years ago, were not possible. Google Cloud AI enables anyone to tap into AI built on Google technology that until recently was exclusive to Google employees. This includes our pre-trained, ready-to-use models, including speech recognition that is now twice as accurate for phone calls, WaveNet-based neural network speech synthesis, conversational NLU, and conversational analytics. Together with partners, Google is now bringing this technology to contact centers via Contact Center AI solutions. Companies with contact centers of all sizes can now automate conversational experiences and improve the performance of human agents.

Speaker:

, Product Manager, Google Cloud AI

 

B203. Writing for Listenability

Tuesday, April 30: 1:45 p.m. - 2:30 p.m.

Spoken English is not the same as written English. This session reviews some of the academic research on the differences between spoken and written English and discusses how research results might be applied when writing material that is intended to be spoken aloud for a voice-enabled interface. Also, do these principles apply to casual text conversations such as chatbots? How might these principles be factored into a “listenability index”?

Speaker:

, Director, User Experience, Versay Solutions

 

B204. Voice-Enable All Things, Cloud-Free!

Tuesday, April 30: 2:45 p.m. - 3:30 p.m.

Learn how to create a cloud-free voice UI for your next project, including low-power sound detection, wake word recognition, small vocabulary speech recognition, natural language understanding, and biometric authenticators. Learn about the various building blocks that go into engineering a voice-enabled device, such as sourcing the right integrated circuit and voice input system hardware, accessing SDKs, building command sets, training voice models, and more. See a live demonstration of an on-device personal assistant that functions totally free of an internet connection.

Speaker:

, VP of Business Development, Sensory, Inc.

 

B205. Speech Recognition in Challenging Conditions

Tuesday, April 30: 4:15 p.m. - 5:00 p.m.

What do we do when users must shout above the din in a noisy factory or vehicle, when we have to deal with accents different from our own, or when we are trying to recognize casual speech between humans? In these circumstances, most speech recognizers will break down and give poor results. We discuss strategies for mitigating the problems for such challenging conditions.

Speaker:

, CEO & Founder, Cobalt Speech & Language

 

Tuesday, Apr 30

Track C: Implementers

 

C201. What Is Anthropomorphism & Why Do You Care?

Tuesday, April 30: 10:45 a.m. - 11:30 a.m.

Treating objects such as smart speakers, robots, and smart devices as human is called anthropomorphism. Some users may forget that these devices are not human and expect human-like responses and advice. This can lead to unfortunate situations with potential social and legal repercussions. Anthropomorphism can also lead to isolation and the loss of human connection. Designers need to understand the social and ethical issues surrounding anthropomorphism and take steps to minimize these problems.

Speaker:

, President, J. Markowitz Consultants

 

C202. PANEL: The Ethics of ASR Lie Detection: What Could, and Should, We Determine?

Tuesday, April 30: 11:45 a.m. - 12:30 p.m.

With the advent of machine learning, neural networks, and the proper amount of data, we can accurately infer identity, gender, language, perhaps age, and more. What are the ethics involved in creating a biometric-based lie detector, or possibly a sexual-preference detector? Where should—and how do—we draw the line?

Moderator:

, Senior Creative Technologist, Ship Side Technologies, Virgin Voyages


Speakers:

, CEO, GoVivace Inc.

, Of Counsel, Tully Rinckey, PLLC

, CEO and Founder, Voice Biometrics Group

 

Keynote Lunch - The Intelligent Contact Center

Tuesday, April 30: 12:30 p.m. - 1:45 p.m.

AI can now help improve contact centers in ways that, until just a few years ago, were not possible. Google Cloud AI enables anyone to tap into AI built on Google technology that until recently was exclusive to Google employees. This includes our pre-trained, ready-to-use models, including speech recognition that is now twice as accurate for phone calls, WaveNet-based neural network speech synthesis, conversational NLU, and conversational analytics. Together with partners, Google is now bringing this technology to contact centers via Contact Center AI solutions. Companies with contact centers of all sizes can now automate conversational experiences and improve the performance of human agents.

Speaker:

, Product Manager, Google Cloud AI

 

C203. Will AI Replace Humans in Customer Engagement?

Tuesday, April 30: 1:45 p.m. - 2:30 p.m.

If you listen to the scaremongers, the future of the human race is at the mercy of AI. Are we destined to become a sluggish race ruled by robots, or will our own emotional intelligence prevail? This presentation examines the constraints of conversational AI, looks at the differences in skill sets between man and machine, and discusses why humans will always have a job when it comes to customer engagement.

Speaker:

, CMSO, Artificial Solutions

 

C204. Say the Right Thing: VUI Design Ethics

Tuesday, April 30: 2:45 p.m. - 3:30 p.m.

Just because we can build something doesn’t mean we should. Voice is positioned at the forefront of technology, and as VUI designers, we are confronted with ethical decisions. This talk walks you through the kinds of ethical considerations to incorporate into your voice designs and presents tips on how to judge whether a design is ethical. Learn how to have the hard conversations with your clients and companies.

Speaker:

, Lead Designer, Grand Studio

 

C205. PANEL: Speech Technologies Inside the Enterprise

Tuesday, April 30: 4:15 p.m. - 5:00 p.m.

Voice is rapidly emerging as the main user interface for many apps and devices. Speech recognition and natural language understanding will change how knowledge workers interact with computers and applications, opening opportunities for innovation in human-computer interaction, including intelligent assistants in the meeting room and for team collaboration. How does speech add value to enterprise applications? What are the key opportunities and challenges for speech-enabled enterprise applications? What use cases are early adopters interested in?

Moderator:

, Senior Analyst, Workforce Productivity and Compliance, 451 Research


Speakers:

, CEO, Apprente, Inc.

, SVP Product & Solutions Marketing, RingCentral

, Chief Marketing Officer, Voicea

, Head of Product, Orion

Wednesday, May 1

Sunrise Discussions

 

SD301. Creative Strategies for Choosing a Name for Your Voice Application

Wednesday, May 1: 8:00 a.m. - 8:45 a.m.

Learn the key selection criteria for choosing the perfect invocation name. Choosing a name for your voice app is a strategic and creative process, and Javalagi highlights three key perspectives: brand identity, platform capability, and usability. With real-world examples and design exercises, this discussion equips you with best practices for naming a voice application that will be successful in the real world.

Speaker:

, Lead, UX Research and Design, Witlingo

 

SD302. Handling Undesirable Audio in Speech Systems

Wednesday, May 1: 8:00 a.m. - 8:45 a.m.

Speech recognizers are usually not equipped to deal with poor audio quality. Poor audio, with characteristics such as packet loss, significantly degrades recognition accuracy. However, this problem can be addressed in ways that make the user experience more human-like. We discuss the characteristics of poor audio, ways to detect these factors automatically, and how, armed with this knowledge, we can make our automated systems more intelligent, thus improving the user experience.
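
As a rough illustration of the kind of automatic detection the session describes, the sketch below flags two common symptoms of poor audio: long runs of exact silence (a typical sign of packet loss) and heavy clipping. The function name and thresholds are invented for this example, not taken from the talk.

```python
import numpy as np

def audio_quality_flags(samples, zero_run_ms=20, sample_rate=8000, clip_level=0.99):
    """Flag likely packet loss and clipping in a mono audio buffer (floats in [-1, 1])."""
    x = np.asarray(samples, dtype=float)
    # Packet loss often shows up as stretches of exact digital silence.
    run, longest = 0, 0
    for z in (x == 0.0):
        run = run + 1 if z else 0
        longest = max(longest, run)
    packet_loss = longest >= zero_run_ms * sample_rate / 1000
    # Clipping: more than 1% of samples pinned near full scale.
    clipping = np.mean(np.abs(x) >= clip_level) > 0.01
    return {"packet_loss": bool(packet_loss), "clipping": bool(clipping)}

# A clean 440 Hz tone passes; the same tone with 25 ms punched out does not.
t = np.arange(8000) / 8000
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
lossy = clean.copy()
lossy[1000:1200] = 0.0  # 200 samples = 25 ms of dropped audio
print(audio_quality_flags(clean))
print(audio_quality_flags(lossy))
```

A real front end would also look at signal-to-noise ratio, bandwidth, and codec artifacts, but the pattern is the same: score the audio before handing it to the recognizer.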

Speaker:

, Speech Scientist, Data Science Group, [24]7.ai

 

SD303. ACIxD Conversational Design Wiki Workshop 2: Assembling the Road Map

Wednesday, May 1: 8:00 a.m. - 8:45 a.m.

The ACIxD (formerly AVIxD) interactive whiteboard session continues for a second day. We discuss the direction of the wiki and its organization, and brainstorm new sections to be added. As a work product, we want to produce a set of next steps for continuing to support and maintain a wiki that addresses the needs of conversational interaction designers.

Speaker:

, Director, User Experience, Versay Solutions

Wednesday, May 1

Keynote

 

Just Like Talking to a Person: How to Get There From Here

Wednesday, May 1: 9:00 a.m. - 10:00 a.m.

Virtual assistants have been around for nearly 10 years, since Siri was introduced in 2010. Now is a good time to look at what they can currently do and to think about what they could do for us if only they were smarter. How close are today’s virtual assistants to human conversational abilities, and how much closer can they get? Is it important for future systems to just be able to do more things, or should they also be more emotional and sympathetic? How important is it for them to be able to socialize informally with people and have wide-ranging conversations? This talk reviews the state of the art of virtual assistants, goes over 10 important new capabilities, and discusses the technical challenges involved in improving their abilities. We also look at some interesting current academic research and talk about how it could be applied to future systems and applications. The talk concludes with some ideas about how the industry can help advance the state of the art.

Speaker:

, Principal, Conversational Technologies

 

Wednesday, May 1

Track A: Managers

 

A301. Putting the Voice Assistants to the Test: Surprising Results in the Real World

Wednesday, May 1: 10:45 a.m. - 11:30 a.m.

Cognilytica recently tested voice assistants from Amazon, Google, Apple, Microsoft, and others and quickly realized just how unintelligent these devices are. Many cannot answer very simple questions that require basic decision making or reasoning. These assistants give inconsistent answers across platforms and cope poorly with variable sentence structure, among other issues. We identify where these voice assistants are failing, what sort of intelligence needs to be built into the devices to make them smarter and more useful, and the current pitfalls and opportunities for companies looking to build the next generation of voice assistants.

Speakers:

, Principal Analyst, Cognilytica

, Analyst, Cognilytica

 

A302. On Weakness Exploitation in Deep Neural Networks

Wednesday, May 1: 11:45 a.m. - 12:30 p.m.

During the past 10 years, deep neural networks have transformed the field of speech recognition. However, we are still discovering peculiarities of these networks, such as how susceptible they are to attack. By adding an extremely small, controlled noise that is imperceptible to humans, any ordinary speech or music recording can be modified to generate a transcript of the attacker’s choice. We give some theoretical background on this vulnerability and provide real examples of modified audio.
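
The attack family the abstract describes can be sketched in miniature. In the toy below, a linear two-class scorer stands in for a real acoustic model, and a fast-gradient-sign-style perturbation, tiny per sample, flips the predicted class. Real attacks on speech recognizers optimize a similar perturbation against the full network and a target transcript; everything here (the model, sizes, and epsilon) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=160)          # weights of a toy two-class scorer
x = rng.normal(size=160) * 0.1    # one 10 ms "audio frame" at 16 kHz (160 samples)

def predict(frame):
    """Class 1 if the linear score is positive, else class 0."""
    return int(frame @ w > 0)

original = predict(x)

# FGSM-style step: nudge every sample a tiny amount in the direction
# that pushes the score toward the opposite class.
epsilon = 0.05
direction = -1.0 if original == 1 else 1.0
x_adv = x + direction * epsilon * np.sign(w)

print(predict(x), predict(x_adv))            # the predicted class flips...
print(np.max(np.abs(x_adv - x)) <= epsilon)  # ...yet no sample moved more than epsilon
```

The per-sample change is bounded by epsilon, but because every sample moves in a coordinated direction, the effect on the score is large: the same asymmetry is what makes imperceptible audio perturbations so effective against deep networks.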

Speaker:

, CEO, GoVivace Inc.

 

Wednesday, May 1

Track B: Voice Interface Experience Designers

 

B301. Conversational Interfaces in the Car

Wednesday, May 1: 10:45 a.m. - 11:30 a.m.

The conventional push-to-talk speech experience is being completely redesigned in an effort to achieve a conversational interface. With autonomous vehicles on the horizon, intelligent assistants can become multimodal and effectively leverage video as a presentation modality. Learn about the current intelligent assistants for the car, what to expect with vehicle integration, and how things will change with autonomous vehicles. Finally, we discuss the optimum speech experience for the driver and what’s required to achieve this optimum experience.

Speaker:

, Vice President of Voice Technology, Sirius XM

 

B302. From Screens to Scenes: Voice Control in the Digital Home

Wednesday, May 1: 11:45 a.m. - 12:30 p.m.

The X1 voice remote has revolutionized the TV viewing experience. Leveraging AI to transcribe and understand what users are saying, Xfinity uses direct voice controls to connect users to the content they are most interested in. The digital home voice experience will take on concierge-like capabilities, launching features like Phone Finder and Find My Tile and acting like a search engine to connect users with more information about available services.

Speaker:

, Director, Digital Home Product Management, Comcast

 

Wednesday, May 1

Track C: Implementers

 

C301. The Engineering of Emotion

Wednesday, May 1: 10:45 a.m. - 11:30 a.m.

Not only what, but how, a virtual assistant speaks will determine its success. We need to create a believable illusion that a bot concerns itself with the user’s situation. We need to turn engineers, designers, and content writers into emotion-aware wordsmiths who care deeply about every word and every pause, what to emphasize, and how to respond empathetically. This talk explores and demonstrates the possibilities of more personalized, contextual, and likeable customer engagement using affective computing technologies and emotion analytics.

Speaker:

, Principal Engineer, Technology Futures, Intuit and University of California, Irvine

 

C302. Unlocking the Puzzle of AI and Omni-Channel Integration

Wednesday, May 1: 11:45 a.m. - 12:30 p.m.

We address AI integration methods and draw practical roadmaps for migrating to digital omni-channel architectures by leveraging existing investments in IVR, chatbots, and backend database interactions. Specifically:

  • Incorporating Google and Amazon Lex AI into existing IVR investments
  • Leveraging existing IVR business logic, flow, and backend database interactions to create an AI-based chatbot in a fraction of the normal time
  • Standardizing a digital omni-channel approach across voice, chat, SMS, mobile, and intelligent virtual assistant channels
Speaker:

, Vice President, Speech-Soft Solutions, LLC

Wednesday, May 1

SpeechTEK University - Postconference Workshops

 

STKU-4. Evaluation, Testing Methodology, and Best Practices for Speech-Based Interaction Systems

Wednesday, May 1: 1:30 p.m. - 4:30 p.m.

Testing and evaluation processes are crucial to the success of any NLP conversational system, but testing IVR and multimodal systems presents unique challenges. Focusing on multimodal applications that involve speech and other modalities, we describe the multiple layers of testing and QA: engine quality, functional application, VUI, interfaces and infrastructure, load balancing, backup, and recovery. Learn how to set testing goals, targets, and success factors; specify and measure metrics; test and measure “soft” and “immeasurable” targets; test documentation in all stages; manage a testing project; and identify who should be on the testing team.

Speaker:

, Head of Multidisciplinary Studies, HIT Holon Institute of Technology, Israel

 

STKU-5. [Cancelled] Build a Conversational Chatbot for Google Assistant Using Dialogflow

Wednesday, May 1: 1:30 p.m. - 4:30 p.m.

This practical, hands-on workshop introduces attendees to the concepts, methods, and issues involved in the design and development of conversational chatbots using Google’s Dialogflow tool. Following a brief introduction to chatbots and conversational interfaces, the course explores relevant technologies and tools. The main part of the workshop is devoted to hands-on design and development of some sample conversational chatbots. Bring your laptops to learn how to develop conversational chatbots.

 

STKU-6. Natural Language Application Development

Wednesday, May 1: 1:30 p.m. - 4:30 p.m.

This workshop provides an in-depth overview of the process for developing a natural language application with current tools such as the Alexa Skills Kit and Microsoft LUIS. We start with requirements and then discuss design considerations, such as when and how to use multimodality and how to decide which intents and entities to use (and what to do if you change your mind). We address the use of nested and composite entities and the effect of the design on the machine learning process. Some platforms limit the number of entities allowed, and all platforms have some limitations on their natural language understanding capabilities; we discuss work-arounds for both issues. Finally, we review important post-development considerations, including testing, versioning, and maintenance.
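
To make the intent-and-entity vocabulary concrete, here is a deliberately tiny, rule-based sketch in plain Python. Production platforms such as the Alexa Skills Kit and LUIS use statistical models rather than regular expressions; the intent name, slot, and sample utterances below are invented purely for illustration.

```python
import re

# A toy "interaction model": one intent, one slot, two sample utterances.
INTENTS = {
    "BookFlight": {
        "slots": {"city": r"(?P<city>[A-Z][a-z]+)"},
        "samples": [
            "book a flight to {city}",
            "i want to fly to {city}",
        ],
    },
}

def match_intent(utterance):
    """Return (intent_name, slot_values) for the first matching sample, else (None, {})."""
    for name, spec in INTENTS.items():
        for sample in spec["samples"]:
            pattern = sample
            for slot, slot_re in spec["slots"].items():
                pattern = pattern.replace("{" + slot + "}", slot_re)
            m = re.fullmatch(pattern, utterance)
            if m:
                return name, m.groupdict()
    return None, {}

print(match_intent("book a flight to Boston"))
```

The design questions the workshop raises map directly onto this sketch: which intents to define, which spans become slots, and how many sample utterances you need before a statistical model can generalize beyond them.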

Speaker:

, Principal, Conversational Technologies

 

STKU-7. Identify Skills for the Far-Field, Voice-First Interface

Wednesday, May 1: 1:30 p.m. - 4:30 p.m.

What use cases lend themselves to delivering a great Alexa skill or Google Assistant action? How does one go about identifying such use cases? This workshop begins by diving deep into several Alexa skills and Google Assistant actions to identify which ones deliver value and which ones fall short of the mark. Then we walk through the basic characteristics and principles that help us methodically assess why some skills and actions are a good fit for the Voice-First, Far-Field interface and why some are not. Finally, the presenters work through several exercises with the workshop participants, applying those characteristics and principles to move systematically from a general use case to the experiences that are best delivered through the Voice-First interface.

Speakers:

, Product Manager, Voice Platforms, Gannett

, Lead, UX Research and Design, Witlingo

, Voice User Interface Designer, Witlingo

Don't Miss These Special Events