SpeechTEK 2020 - Track B: Voice Interface Experience Designers

Track B, targeted to voice interface experience designers, contains many customer case studies illustrating new and innovative applications of speech technologies, including speech in the home and in the car. Gain new insights into speech technologies and how they could be used in your enterprise. Track B also contains discussions of the latest best practices and techniques for voice user experience design.

 

Monday, Apr 27

Track B: Designers

 

B101. Emerging Alien Mind


Monday, April 27: 10:30 a.m. - 11:15 a.m.

At its best, spoken language connects two minds. Conscious and sentient humans have minds. Machines do not. In this session, Balentine describes a continuum of three conversational idioms—task-speech, thought-speech, and social-speech—that together define a newly emerging “alien mind,” which is useful and productive, unlike many human minds. Intelligent conversation with an alien mind bridges the gap from today’s early (but mindless) successes to tomorrow’s ubiquitous interactions with agents, robots, kiosks, and disembodied entities.

Speaker:

, Chief Scientist Emeritus, Enterprise Integration Group

 

B102. Warming Up the Cold-Start Problem for Conversational AI


Monday, April 27: 11:30 a.m. - 12:15 p.m.

Defining the intents, dialogue responses, and their variations can all increase the time needed to create a conversational assistant. Yet the content needed to power an assistant may already exist in places like public FAQ pages or knowledge management systems. This talk addresses how to leverage document understanding AI to extract valuable content, and how to use AI search to find that content and combine speech and visual interactions to quickly bootstrap an effective conversational interaction.

Speaker:

, Senior Offering Manager, IBM Watson

 

Keynote Lunch


Monday, April 27: 12:15 p.m. - 1:15 p.m.

 

B103. Ten Best Practices for High Quality Actions


Monday, April 27: 1:15 p.m. - 2:00 p.m.

High-quality Actions see high engagement from Google Assistant users. What makes them high quality? Drawing from real case studies, learn how to reduce development errors, enhance discovery of your Action, grow your user base, and avoid mistakes along the way when building high-quality, engaging Actions.

Speaker:

, Conversation Designer, Google

 

B104. Expert Perspectives: GridSpace


Monday, April 27: 2:15 p.m. - 3:00 p.m.

Speech Analytics 101: Hours of Research in less than an hour

We’re ready to help you evaluate the right speech analytics solution for YOUR organization. Join us to learn all you need to know about AI speech analytics from the bottom up. In this educational, interactive session, we’ll help you get a better understanding of: what you need to know as you research your options, including critical questions to ask to avoid speech analytics failure; how to establish a high-level business case and identify applicable use cases; and how to establish proof of value (POV) before you buy. Join Roger Lee, aka Professor CXi, Vice President, Customer Success with Gridspace, as he shares valuable insights that will help make you an informed buyer.

Speaker:

, Vice President, Customer Success, Gridspace

 

B105. Text Mining Millions of Recorded Customer Interactions for Actionable Insight


Monday, April 27: 3:15 p.m. - 4:00 p.m.

This talk reviews techniques for gaining insights through text mining of recorded customer interactions, including the following: scoping and sampling to ensure that a text mining solution is quick to market, economic to scale, and conformable to other processes or speech solutions in the enterprise; champion/challenger testing of multiple text mining platforms; understanding the benefits of sentiment analysis; and using unsupervised clustering to illustrate regularities in elements such as products, features, events, and responses.

Speaker:

, Senior Analyst, Salelytics

 

B106. Learn What Customers Really Want to Know


Monday, April 27: 4:15 p.m. - 5:00 p.m.

This talk discusses the benefits of performing a deep analysis of user turns in a conversation. As an example, we review a case where, through deep analysis of bot interactions, the U.S. Army learned that military personnel were frequently asking the bot about post-traumatic stress disorder (PTSD) symptoms and support. Consequently, the Defense Health Agency was informed that it needed to provide more anonymous self-service PTSD resources for soldiers.

Speaker:

, Chief Scientist, Verint-Next IT

 

Tuesday, Apr 28

Track B: Designers

 

B201. Ten Novel Applications of Speech Technology That Don’t Involve a Smart Speaker


Tuesday, April 28: 10:45 a.m. - 11:30 a.m.

With all of the hype around smart speakers, it’s easy to get the impression that there’s nothing else interesting happening in speech technology today. In this presentation, you’ll hear about some innovative applications of speech technology that don’t involve smart speakers.

Speaker:

, CEO & Founder, Cobalt Speech & Language

 

B202. What Is Holding Voice Back & How Do We Fix It?


Tuesday, April 28: 11:45 a.m. - 12:30 p.m.

Smart speakers have seen the fastest adoption of any new technology in human history—even faster than the smartphone. However, most brands have had very limited success with their Alexa Skills and similar voice experiences. Why is that? Join Tobias Dengel as he explores how some of the largest brands in the world, such as HBO, Fox, Regal Cinemas, and Synchrony Bank, are shifting their thinking from voice to multimodal to truly take advantage of the platform.

Speaker:

, CEO, WillowTree

 

Keynote Lunch


Tuesday, April 28: 12:30 p.m. - 1:45 p.m.

 

B203. Using Emotion to Enhance Movie Recommendation Systems


Tuesday, April 28: 1:45 p.m. - 2:30 p.m.

We analyzed the emotional speech outputs of films produced by eight renowned directors, characterizing their work on an emotional spectrum. We then developed a rich recommendation system, which is more objective because it takes into account the script’s and director’s intentions. The emotional charge of a movie can influence our preferences as it gives us one more piece of information to decide what we want to see next.

Speaker:

, CEO, Behavioral Signals

 

B204. A Speech-Driven Data Collection System in the Food Supply Chain


Tuesday, April 28: 2:45 p.m. - 3:30 p.m.

We describe the process of designing, implementing, and using an intelligent agent for collecting field data in the food supply chain. We present our key learnings about the process, methods, and acceptance of speech technology. We describe the advantages and limitations of speech interaction methods, as well as the value of a human-in-the-loop at key stages. Lastly, we make some predictions about the future of using speech in commercial and industrial use cases.

Speaker:

, Co-Founder & Chief Design Officer, AgVoice

 

B205. Predicting Health & Behavior Through Speech Patterns


Tuesday, April 28: 4:15 p.m. - 5:00 p.m.

Remote patient monitoring and screening overcome the shortage of mental health professionals and long waiting periods while significantly reducing medical costs. We use remote, objective, continuous patient monitoring and risk-group screening based on an evidence-based vocal biomarker. The technology tracks changes in medical states by automatically monitoring patient speech patterns captured in everyday mobile interactions. Voice analysis extracts unique prosodic (non-content) speech features such as intonation, rhythm, pace, and emphasis, yielding universal voice parameters that capture the physiological aspects of speech.

Speaker:

, CEO, VoiceSense

 

Wednesday, Apr 29

Track B: Designers

 

B301. The Revolution Is Coming: The Medium-Term Future of AI & ML


Wednesday, April 29: 10:45 a.m. - 11:30 a.m.

Current AI and machine learning (ML) technologies are beginning to change the way we build and innovate. However, the power of our current ML technologies is not fixed. This session explores where ML is at the moment and where it is heading in the next 10 years. How are its current use cases different from those we may see in the future? This session explores these themes and their impact on speech technology applications.

Speaker:

, Machine Learning Engineer, Speechmatics

 

B302. Expert Perspectives


Wednesday, April 29: 11:45 a.m. - 12:30 p.m.

Open to all full-conference attendees and Networking Pass holders. Check SpeechTEK.com for details or follow #SpeechTEK on Twitter.
