April 9-11, 2018 | Renaissance Washington DC Hotel

Tuesday, April 10, 2018

Sunrise Discussions

SD201 - Patents, Speech, & Mobile Devices

8:00 a.m. - 8:45 a.m.
Dr. Jordan Cohen, Technologist and CEO, Spelamode Consulting
Steven M. Hoffberg, Of Counsel, Tully Rinckey, PLLC

The field of intellectual property (IP) is rapidly evolving with changes in the law, regulation, and the rise of automated assistants. This interactive discussion provides an overview of the evolving impact of the America Invents Act, including post-grant reviews, the Alice Supreme Court decision, and some recent judicial and administrative decisions. We address protections available for these technologies, IP risks in commercialization, challenges against blocking patents, and current best practices for protecting and defending against infringement assertions of speech and language technologies.

SD202 - Tuesday Knowledge Café

8:00 a.m. - 8:45 a.m.
Dr. Deborah Dahl, Principal, Conversational Technologies
David L Thomson, VP Speech Technology, CaptionCall
David Attwater, Senior Scientist, Enterprise Integration Group
Bruce Balentine, Chief Scientist Emeritus, Enterprise Integration Group
Michael McTear, Professor, Ulster University
Dan Miller, Lead Analyst-Founder, Opus Research, Inc.
Dr. William Meisel, President, TMA Associates

Participate in the interactive Knowledge Café, where you can share your speech technology questions and challenges with colleagues and practitioners. Each table has a topic and one or more mentors: Natural Language, Debbie Dahl; Deep Neural Nets, Dave Thomson; User Interfaces, David Attwater and Bruce Balentine; Intelligent Assistants, Michael McTear and Dan Miller; and Speech Technology Business Strategies, Bill Meisel.

SD203 - Shred the Past: How to Create New Innovative Applications

8:00 a.m. - 8:45 a.m.
Dr. Moshe Yudkowsky, President, Disaggregate Corporation

There is a better way to think of new applications than just extending existing apps. We need to discard the ideas we’ve come to accept as normal over the past decades and think about our basic desires in order to formulate new goals. Once we’ve formulated goals, we need to generate and evaluate possible innovations. A few simple principles can help guide our quest.

SD204 - Association for Voice Interaction Design Meeting

8:00 a.m. - 8:45 a.m.
Crispin Reedy, Director, User Experience, Versay Solutions

The Association for Voice Interaction Design (AVIxD) brings together voice interaction design professionals into a group that crosses borders between companies, providers, customers, academia, and disciplines. AVIxD sponsors educational and networking opportunities such as annual workshops, Virtual Brown Bag seminars, the VUI Design Wiki, and the Voice Interaction Design journal. Members and nonmembers are welcome.

Keynote

Keynote: How Artificial Intelligence Is Changing the Contact Center

9:00 a.m. - 10:00 a.m.
Pasquale DeMaio, GM, Amazon Connect, AWS
Vikram Anbazhagan, Head, Product Management for Language Technologies, Amazon Machine Learning

Conversational interfaces, or intelligent “chatbots,” are defining entirely new categories of products and services. Deep learning technologies are empowering businesses to gain critical insights from unstructured text in transcribed calls, chats, and social media. Contact centers have been early adopters of these artificial intelligence (AI) technologies. Customers can seamlessly communicate across channels with automated assistants that delight them with faster, better service. AWS is making AI more accessible, available, and easy to use, enabling customers to accelerate their usage of AI in contact center workflows. Learn how AI-powered applications are providing business value by rethinking traditional applications and transforming customer experiences, and how to leverage these innovations in your workflows instantly.

Break in the Customer Solutions Expo

10:00 a.m. - 10:45 a.m.

Track A - Managers

The MANAGERS track discusses issues that managers face. Intelligent assistants are changing how users interact with businesses. Learn if your business needs them, how to build them, how to evaluate them, and how to deploy them. In this track you will also learn what is necessary to make your voice application work in a global environment. Presenters will also evaluate various approaches for using biometrics for fraud protection, an important topic given today’s concerns about hacking.

A201 - Training the Digital Virtual Assistant

10:45 a.m. - 11:30 a.m.
Brett Knight, Lead Digital Product Manager, USAA

Learn approaches for training digital systems based on capabilities that exist in the marketplace. Review some potential staffing models and processes that can be used to train the digital virtual agent. Become familiar with the training process so you can plan and prepare accordingly, ask the right questions of existing and potential technology partners, and create a strategy to leverage future advancements in training.

A202 - Case Study: Transformative Power of Conversational Customer Care

11:45 a.m. - 12:30 p.m.
Quinn Agen, Global Business Development, Omilia

Conversational speech in the contact center is not the next big thing—it’s already here! Learn how one of North America’s largest financial services companies moved from a high-performing touch-tone caller experience to a true artificial intelligence speech application that moves the conversation beyond the traditional, “How can I help you?” natural language application with directed speech. This session provides a real-world example of conversational speech, lessons learned, and actual contact center benefits.

Keynote Lunch - Voice & AI: Bringing Digital Transformation to the Masses

12:45 p.m. - 1:45 p.m.
Allyson Boudousquie, VP Market & Product Strategy, Concentrix

According to Forrester Research, nearly half of consumers already engage in automated conversations with intelligent assistants such as Alexa, Siri, and Cortana. Keyboards, screens, and remote controls are being replaced by more intuitive ways of interacting with devices—most notably, voice communications. This presentation provides fresh insights into how you can use voice and AI to deliver differentiated customer experiences, streamline operations, and take advantage of new revenue streams.

A203 - AI Listening to Understand, Not to Reply

2:00 p.m. - 2:45 p.m.
Frank Schneider, CEO, Speakeasy AI

The best customer experiences begin with understanding customer needs and tailoring an experience around those needs. Listening done right doesn’t produce a word list; it creates a picture in your head of what the speaker is saying, describing, and needing. Authentic listening in AI treats understanding as the primary goal above all else, draws on what is unsaid as well as what is said, and stands ready to serve without imposing “solutions.”

A204 - Multidimensional Localization: Talking, Texting, & Chatting in China

3:00 p.m. - 3:45 p.m.
Martha Senturia, Senior VUI Designer, Performance Technology Partners

The potential for accidental blunders skyrockets with the advent of new modes of customer interaction and the globalization of corporate-customer conversations. It’s not enough to speak the target language and translate; it’s essential to understand the cultural trends, the technological conditions, the channel-specific conversational conventions, and more in the user’s region. We review challenges with designing customer interactions in Mainland China, ranging from telecom expenses, character limitations, keyboard logistics, emojis, and text shorthands to visual design.

Break in the Customer Solutions Expo

3:45 p.m. - 4:30 p.m.

A205 - Creating Humanlike Conversational UIs for Global Use

4:30 p.m. - 5:15 p.m.
Andy Peart, CMSO, Artificial Solutions

How do you build intelligent, humanlike, and effective UIs without spending millions of dollars on technology and recruiting an army of developers to build and maintain them? This presentation takes you behind the scenes to explore the challenges and issues in developing conversational applications that scale globally. We examine problems such as a lack of training data, scarce developer resources, and the complexities of multilingual applications. Learn how to rapidly build sophisticated conversational UIs without the need for specialist skills.

Track B - Developers

The DEVELOPERS track discusses the problems with using voice in the Internet of Things, including the use of far-field microphones, controlling devices with voice commands, using voice for home assistants, and home automation. Learn about toolkits for developing advanced natural language understanding.

B201 - The Rise of Voice-Activated Assistants in the Workplace

10:45 a.m. - 11:30 a.m.
David Wiener, Chief Product Officer, Voicera

We’ve already mastered voice at home with the help of Alexa and Google Home, but what about in the enterprise? Voice-activated AI is the most important development to come to the workplace. Review the challenges of automatic speech recognition, natural language processing, and neural networks that make it difficult to create new kinds of voice-activated assistants for the workplace. Learn what the challenges are and how to overcome them.

B202 - Embedded Conversational AI + Avatar = Lifelike Chatbots

11:45 a.m. - 12:30 p.m.
Bernard Brafman, VP of Business Development, Sensory, Inc.

Conversational AI has become ubiquitous, and most implementations are in the cloud. However, cloud implementations are unreliable when no internet connection is available and don’t work broadly across devices. New breakthroughs such as embedded natural language processing have opened the door to on-device voice control, voice assistants, and chatbots. Combining these embedded conversational technologies with 3D avatars, lip-syncing animation, and high-quality text-to-speech has redefined the user experience in these applications.

Keynote Lunch - Voice & AI: Bringing Digital Transformation to the Masses

12:45 p.m. - 1:45 p.m.
Allyson Boudousquie, VP Market & Product Strategy, Concentrix

According to Forrester Research, nearly half of consumers already engage in automated conversations with intelligent assistants such as Alexa, Siri, and Cortana. Keyboards, screens, and remote controls are being replaced by more intuitive ways of interacting with devices—most notably, voice communications. This presentation provides fresh insights into how you can use voice and AI to deliver differentiated customer experiences, streamline operations, and take advantage of new revenue streams.

B203 - The Future of Far-Field Voice Capture Lies in the Fusion of Technologies

2:00 p.m. - 2:45 p.m.
Paul Neil, VP, Marketing & Product Management, XMOS

Capturing voice in the far field (3–5 meters from the microphone) is a challenge for products operating in environments with varying levels of ambient noise and multiple users who may move around during use. The best-quality voice capture will require combinations of tightly integrated technologies that allow the microphone to differentiate between human speech and other noise sources, and between individual people. This talk gives insights into approaches for delivering the best-quality voice capture for smart environments and the challenges of far-field voice interfaces.
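
One common building block behind this kind of far-field capture is delay-and-sum beamforming: delay each microphone channel so the talker’s wavefront lines up across channels, then sum, so on-axis speech adds coherently while off-axis noise does not. The minimal sketch below is illustrative only; the array geometry, signals, and parameters are invented and are not taken from the session.

```python
# Illustrative sketch only: geometry, signals, and parameters are invented.
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Align the microphone channels toward `direction` and average them.

    signals:       (n_mics, n_samples) array of synchronized recordings
    mic_positions: (n_mics, 3) microphone coordinates in meters
    direction:     (3,) unit vector pointing from the array toward the talker
    fs:            sample rate in Hz
    c:             speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # Mics closer to the talker hear the wavefront earlier; compute how much
    # each channel leads, then delay it by that amount so all channels line up.
    leads = mic_positions @ direction / c          # seconds, shape (n_mics,)
    leads -= leads.min()                           # keep delays non-negative
    shifts = np.round(leads * fs).astype(int)      # integer sample delays

    output = np.zeros(n_samples)
    for channel, shift in zip(signals, shifts):
        output[shift:] += channel[:n_samples - shift]
    return output / n_mics

# Example: a 4-mic linear array with 5 cm spacing, steered 45 degrees off axis.
fs = 16000
mics = np.array([[i * 0.05, 0.0, 0.0] for i in range(4)])
look = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])
captured = np.random.randn(4, fs)                  # stand-in for a real capture
enhanced = delay_and_sum(captured, mics, look, fs)
```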

B204 - PANEL: Innovative Applications of Speech Technology From Academia

3:00 p.m. - 3:45 p.m.
Moderator: David L Thomson, VP Speech Technology, CaptionCall
Saurabh Sahu, VoiceVibes, Inc.
Carla Agurto, Computational Biology Center, T.J. Watson IBM Research Laboratory
Prof. John H.L. Hansen, CRSS (Center for Robust Speech Systems), Univ. of Texas at Dallas

Some of the most innovative speech analysis applications are created at universities, as our panelists demonstrate. Presentations include "Phonological markers of Oxytocin and MDMA ingestion" (detecting by voice analysis whether a person is under the influence), "Multi-Channel Apollo Mission Speech Diarization" (speech recognition of Apollo flight recordings), and "An Affect Prediction Approach through Depression Severity Parameter Incorporation in Neural Networks" (predicting depression from audio and video signals).

Break in the Customer Solutions Expo

3:45 p.m. - 4:30 p.m.

B205 - Integrating Speech With Intelligent Services

4:30 p.m. - 5:15 p.m.
Dr. Deborah Dahl, Principal, Conversational Technologies

We are currently experiencing an explosion of sophisticated, cloud-based intelligent services. There are several well-known speech and natural language services, but other cognitive technologies are fast becoming available as cloud services. These include face recognition, emotion recognition, translation, and gesture recognition. This talk describes the available services, discusses how to integrate them with speech technologies, and reviews a number of ideas for new applications based on the combination of speech with other intelligent services.
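
A minimal sketch of what such an integration can look like in practice appears below. The service endpoints and response fields are hypothetical placeholders rather than any particular vendor’s API; the point is simply the pattern of chaining speech recognition with other cloud cognitive services.

```python
# Illustrative sketch only: endpoints and response fields are hypothetical.
import requests

SPEECH_URL    = "https://example.com/speech-to-text"   # hypothetical service
TRANSLATE_URL = "https://example.com/translate"        # hypothetical service
EMOTION_URL   = "https://example.com/emotion"          # hypothetical service

def transcribe(audio_path: str) -> str:
    """Send an audio file to a cloud speech-to-text service, return the transcript."""
    with open(audio_path, "rb") as f:
        resp = requests.post(SPEECH_URL, files={"audio": f}, timeout=30)
    resp.raise_for_status()
    return resp.json()["transcript"]                    # field name is assumed

def analyze_utterance(audio_path: str, target_lang: str = "en") -> dict:
    """Chain speech recognition with translation and emotion recognition."""
    text = transcribe(audio_path)
    translation = requests.post(
        TRANSLATE_URL, json={"text": text, "target": target_lang}, timeout=30
    ).json()["text"]                                    # field name is assumed
    emotion = requests.post(
        EMOTION_URL, json={"text": translation}, timeout=30
    ).json()["label"]                                   # field name is assumed
    return {"transcript": text, "translation": translation, "emotion": emotion}

# e.g. analyze_utterance("caller_turn.wav")
# -> {"transcript": "...", "translation": "...", "emotion": "frustrated"}
```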

Track C - User Experience Designers

The USER EXPERIENCE DESIGNERS track discusses techniques for developing and evaluating VUI designs. Compare approaches for modeling conversations for chatbots. Discover how to detect and take advantage of user emotion.

C201 - Modeling Conversation for Chatbots

10:45 a.m. - 11:30 a.m.
Michael McTear, Professor, Ulster University

The major intelligent agent platforms have converged on terminology involving intent identification and entity extraction, yet use different conversational models to describe behavior. Learn how these conversational models differ and which go beyond the hard-coded conversation flows of “happy path” dialogues to model conversational phenomena such as follow-up questions, changes of topic, out-of-scope utterances, and other types of edge cases. What are the desiderata for intelligent conversational agents?
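
To make the contrast with hard-coded flows concrete, here is a minimal sketch of a dialogue state tracker that tolerates follow-up questions, topic changes, and out-of-scope utterances. The intents, slots, and prompts are invented for illustration and do not come from any of the platforms discussed.

```python
# Illustrative sketch only: the intents, slots, and prompts are invented.
from dataclasses import dataclass, field
from typing import Optional

# Slots each intent needs before the agent can act (a stand-in domain model).
REQUIRED_SLOTS = {"book_flight": ["origin", "destination", "date"],
                  "check_balance": ["account"]}

@dataclass
class DialogueState:
    active_intent: Optional[str] = None
    slots: dict = field(default_factory=dict)

def handle_turn(state: DialogueState, intent: str, entities: dict) -> str:
    # Out-of-scope utterances: acknowledge and redirect instead of forcing the happy path.
    if intent == "out_of_scope":
        return "I can help with flights and account questions. What would you like to do?"

    # Topic change: switch the active intent but keep any slots already gathered.
    if intent != "follow_up":
        state.active_intent = intent

    # Follow-up questions ("what about Friday?") carry only new entities;
    # merge them rather than restarting the dialogue.
    state.slots.update(entities)

    missing = [s for s in REQUIRED_SLOTS.get(state.active_intent, []) if s not in state.slots]
    if missing:
        return f"Could you tell me the {missing[0]}?"
    return f"Confirming {state.active_intent} with {state.slots}."

# Example turns:
state = DialogueState()
print(handle_turn(state, "book_flight", {"destination": "Boston"}))   # asks for the origin
print(handle_turn(state, "follow_up", {"date": "Friday"}))            # keeps earlier slots
```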

C202 - Getting Chatbots Right–Assuring Quality Customer Experience

11:45 a.m. - 12:30 p.m.
Mike Monegan, Vice President Product Management, Cyara

There are many potential points of failure, both within the chatbot experience itself and in the context transfer to and from chatbots. Learn how to assemble a strategy for a robust bot quality assurance program, including taming the complexities of multichannel bot test automation, using bot training data to drive test script development, testing against a checklist of failures and degraded experiences, and coordinating developer and vendor unit testing with expansive outside-in CX assurance.
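
A minimal sketch of the outside-in testing idea is shown below. The bot endpoint, response field, and expected phrases are hypothetical placeholders for your own stack; a real suite would add channel, latency, and context-transfer checks on top of this.

```python
# Illustrative sketch only: the endpoint, response field, and expected phrases
# are placeholders for whatever bot platform and test tooling you actually use.
import requests

BOT_URL = "https://example.com/bot"    # hypothetical endpoint

def send_to_bot(session_id: str, utterance: str) -> str:
    resp = requests.post(BOT_URL, json={"session": session_id, "text": utterance}, timeout=10)
    resp.raise_for_status()
    return resp.json()["reply"]         # field name is assumed

# Each case pairs a user utterance with phrases, any one of which should appear
# in the reply -- covering garbage input and escalation, not just the happy path.
TEST_CASES = [
    ("I want to check my balance", ["balance"]),
    ("asdf qwerty zxcv",           ["didn't catch", "rephrase", "didn't understand"]),
    ("agent please",               ["agent", "transfer you"]),
]

def run_suite() -> None:
    failures = []
    for i, (utterance, expected) in enumerate(TEST_CASES):
        reply = send_to_bot(f"regression-{i}", utterance).lower()
        if not any(phrase in reply for phrase in expected):
            failures.append((utterance, reply))
    assert not failures, f"Bot regressions: {failures}"

if __name__ == "__main__":
    run_suite()
    print("All bot checks passed.")
```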

Keynote Lunch - Voice & AI: Bringing Digital Transformation to the Masses

12:45 p.m. - 1:45 p.m.
Allyson Boudousquie, VP Market & Product Strategy, Concentrix

According to Forrester Research, nearly half of consumers already engage in automated conversations with intelligent assistants such as Alexa, Siri, and Cortana. Keyboards, screens, and remote controls are being replaced by more intuitive ways of interacting with devices—most notably, voice communications. This presentation provides fresh insights into how you can use voice and AI to deliver differentiated customer experiences, streamline operations, and take advantage of new revenue streams.

C203 - How Can Digital Agents Make Use of User Emotion?

2:00 p.m. - 2:45 p.m.
Bruce Balentine, Chief Scientist Emeritus, Enterprise Integration Group

Machines that detect and classify human emotions should have a better read on a user’s mental state, while machines that display human emotions should be able to express more subtle machine states. A competent, bidirectional emotional interaction should therefore communicate with users more effectively. But how? This talk discusses Balentine’s research into emotional speech behavior and its effect on artificial conversation. The goal is to identify specific user-interface design methods that make positive use of emotion.
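
The session does not publish its methods, but for orientation, signal-based emotion detection typically extracts prosodic features per utterance and feeds them to a trained classifier. The toy sketch below illustrates only that shape; the features are crude and the thresholds are invented stand-ins for a real model.

```python
# Toy sketch only: crude features and invented thresholds stand in for a real
# trained classifier; nothing here comes from the session itself.
import numpy as np

def prosodic_features(samples: np.ndarray, fs: int) -> np.ndarray:
    """Rough per-utterance features: average loudness, loudness variation, pitch proxy."""
    frame = int(0.025 * fs)                                     # 25 ms frames
    usable = samples[: len(samples) // frame * frame]
    frames = usable.reshape(-1, frame)
    energy = np.sqrt((frames ** 2).mean(axis=1))                # RMS energy per frame
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)  # zero-crossing rate
    return np.array([energy.mean(), energy.std(), zcr.mean()])

def classify_emotion(features: np.ndarray) -> str:
    """Placeholder for a trained model; the thresholds are invented."""
    mean_energy, energy_var, mean_zcr = features
    if energy_var > 0.05 and mean_zcr > 0.10:
        return "aroused/agitated"
    return "calm/neutral"

fs = 16000
utterance = 0.1 * np.random.randn(fs * 2)                       # stand-in for real audio
print(classify_emotion(prosodic_features(utterance, fs)))
```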

C204 - PANEL: Your Mood, Your Opinion, & Your Impact

3:00 p.m. - 3:45 p.m.
Moderator: Jeff Adams, CEO & Founder, Cobalt Speech & Language
Debra Bond Cancro, CEO, VoiceVibes, Inc.
Rene Arvin, Co-Founder, OmniBot GmbH
Mike Dwyer, VP of Research, CallMiner

Speech can reveal the speaker’s emotional condition, sentiment, or attitude on a specific topic and the impact of a public speaker on an audience. We review different ways to measure the human mental state and how it is projected through speech. We cover both text-based analysis and signal-based evaluation. We explain the differences between emotion detection, sentiment analysis, and persona classification as well as the technology behind each view into a speaker’s psyche and how this information can deliver business returns.

Break in the Customer Solutions Expo

3:45 p.m. - 4:30 p.m.

C205 - AVIxD Presents … What Makes a Customer Experience Designer

4:30 p.m. - 5:15 p.m.
Kristie Flenord, Senior Consultant, Human Factors, Concentrix

Finding the perfect fit for an open position is not always easy. We touch on suggested educational backgrounds, as well as sample job postings, to help employers and interviewees alike fill a customer experience designer role. This is an interactive session where feedback from the audience is welcome, so come prepared to participate!

Track D - Technologists

The TECHNOLOGISTS track discusses the use of deep neural networks and how they have changed the ASR universe. Learn about currently available speech and NL APIs. Learn about new techniques for enhancing phone calls with visuals, multiple channels, and hybrid apps.

D201 - Hybrid App in Emerging Markets

10:45 a.m. - 11:30 a.m.
Dr. Inderpal Mumick, Founder, CEO & Chairman, Kirusa

Learn how a service combining cloud delivery with integration into multiple mobile networks (Hybrid-OTT) provides enhanced call completion for both smart- and non-smartphone users in Africa, where smartphones have limited, though growing, penetration. The service enables users to receive missed-call and voicemail notifications, and respond to them in the form of texts, voice, or rich media over data. The service has reached adoption of more than 25% in some countries, with 2.5 billion calls every month, and more than 100 billion calls in the aggregate. Learn about the challenges Kirusa discovered and how we overcame them.

D202 - How Your IVR Can Become the Virtual Assistant Your Customers Demand

11:45 a.m. - 12:30 p.m.
Allyson Boudousquie, VP Market & Product Strategy, Concentrix

That trusty old touch-tone application has been your traffic-cop workhorse for years. With the evolution of technology—and customer care expectations—it’s not enough to just contain calls anymore. Customer effort is a primary concern, as the cost of consumer churn takes a toll on businesses. Converting your IVR into a smart, conversational calling experience is easier than you think, and it doesn’t have to stop with the IVR channel. Let us show you how AI technology has evolved to holistically improve customer care for your contact center.

Keynote Lunch - Voice & AI: Bringing Digital Transformation to the Masses

12:45 p.m. - 1:45 p.m.
Allyson Boudousquie, VP Market & Product Strategy, Concentrix

According to Forrester Research, nearly half of consumers already engage in automated conversations with intelligent assistants such as Alexa, Siri, and Cortana. Keyboards, screens, and remote controls are being replaced by more intuitive ways of interacting with devices—most notably, voice communications. This presentation provides fresh insights into how you can use voice and AI to deliver differentiated customer experiences, streamline operations, and take advantage of new revenue streams.

D203 - Hearing, Seeing, & Doing: Cognitive Load in Interactive Multichannel Design

2:00 p.m. - 2:45 p.m.
Dawn Harpster, User Interface Designer, Customer Experience, Performance Technology Partners

Today’s multimodal interactive systems may require users to interact with traditional voice IVRs while simultaneously requiring them to interact with a text-based system (ITR) and possibly other technology as well. Designing a customer experience for more than one mode at a time presents unique challenges. Learn techniques for designing and testing a usable and user-friendly, multimodal interaction. We present findings from studies in which users were observed interacting with a text IVR, voice IVR, and other technology simultaneously.

D204 - Marry Visuals With Bots for Twice the Customer Experience

3:00 p.m. - 3:45 p.m.
Richard A. Davis, CTO, Radish Systems

We explore how bots can be effectively used by businesses to improve communications with their customers and reduce costs in the process. Phonebots often provide a poor user experience due to the limits of natural language processing (NLP). Adding a visual channel to an ordinary phone call takes the pressure off NLP to interpret what a user wants and brings the call up to the same data-rich level as other channels, thus encouraging the user to stay with the bot.

Break in the Customer Solutions Expo

3:45 p.m. - 4:30 p.m.

D205 - Speech-Driven, In-Queue Music and Messaging Slays the ‘On-Hold Problem’

4:30 p.m. - 5:15 p.m.
Marcus Graham, CEO, GM Voices

“On-hold” has been the black eye on the caller experience for years. The on-hold portion of the customer interaction has remained largely the same—REALLY BAD. While the user waits in the call center queue, give the caller control through a personalized call-waiting experience, which might include choosing licensed music (country, pop, rock, sports, etc.), presenting relevant messages based on the customer’s profile and history, providing an estimated wait time, and offering to call the user back. Let’s make the “on-hold” time enjoyable and useful.

Reception

Networking Reception

5:30 p.m. - 7:00 p.m.

Mix and mingle with other attendees as well as speakers and our conference sponsors during this networking reception.
