Track C, targeted at implementers, contains several panel discussions offering alternative viewpoints on critical issues, including improving support of call center agents; detecting fraudsters using voice biometrics; extracting business knowledge from voice and text data using analytics; using speech technologies within the enterprise; and implementing omnichannel strategies. Track C also offers several sessions dealing with the difficult issues of ethics and speech technologies.
Monday, April 29: 10:30 a.m. - 11:15 a.m.
Learn how AI is used in a call center environment to train, assist, monitor, and advise human agents as they interact with customers, as well as how to predict employee departures and prescribe targeted interventions. The speakers discuss how a graphical representation of the client interaction assists the human agent, and how a combination of word and non-verbal analysis can detect the emotional state of customers and agents and guide agents in the moment to adjust their behavior for improved outcomes.
Vijay Mehrotra, Department of Business Analytics & Information Systems, School of Management, University of San Francisco
Debra Bond Cancro, CEO, VoiceVibes, Inc.
Preston Faykus, Co-Founder & CEO, RankMiner Predictive Voice Analytics
Dan Coyer, Senior Analyst, Salelytics
Ali Azarbayejani, CTO & Co-Founder, Cogito
Monday, April 29: 11:30 a.m. - 12:15 p.m.
AI. Voice. Big Data. We are standing at one of the most profound inflection points in the history of technology. More than just buzzwords, each of these topics contains the very real seeds of transformation and disruption. But where to begin? This talk explores the impact of China's 2030 AI initiatives. The staggering adoption of these emerging technologies at scale in China has uncovered key principles that the rest of the world can learn from today. Exploring these topics provides both cautionary tales and a reliable road map for short- and long-term applications.
Will Hall, Chief Creative Officer, RAIN
Monday, April 29: 12:15 p.m. - 1:15 p.m.
Today, IVRs are treated as a containment strategy to avoid calls reaching contact center agents. They focus on operational efficiency instead of customer experience. No wonder most users hate IVRs: 60% of them try to bypass them as soon as possible! The irony is that focusing on a great customer experience is a more effective approach to operational efficiency and cost savings, while also delivering high customer satisfaction scores.
We believe the future of customer engagement is conversational because conversations are at the heart of great customer experiences. Customers will interact with systems by speaking or texting naturally rather than pressing keys on their phones or reciting pre-determined commands. Conversational interfaces will allow businesses to route and handle hundreds of customer issues that wouldn't normally fit in a touch-tone IVR menu or even a mobile app. Customers won’t need to learn how to use conversational interfaces because they can just interact with them naturally.
In this talk, we demonstrate how to build a conversational assistant, train it, and deploy it to an IVR and a web chatbot. We address the biggest challenges, such as handling speech recognition inaccuracies, error handling, omnichannel deployments, and conversation state tracking. We also cover conversational UX best practices and how to give your intelligent assistant a unique voice and tone. After this talk, you will be equipped to launch your IVR, chatbot, and Alexa skill with Twilio Autopilot.
Nico Acosta, Director of Product and Engineering, Twilio AI
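The intent-based routing this session describes can be illustrated with a minimal sketch. Everything here is hypothetical (the intent names, sample keywords, and keyword-overlap matching); a production assistant built with a platform such as Twilio Autopilot would use trained NLU models rather than keyword matching, but the route-or-fall-back shape is the same.

```python
# Minimal sketch of conversational intent routing for an IVR.
# Intent names and keyword sets are hypothetical; a real system
# would score intents with a trained NLU model.

INTENTS = {
    "check_balance": {"balance", "owe", "account"},
    "reset_password": {"password", "reset", "login"},
    "talk_to_agent": {"agent", "human", "representative"},
}

def route(utterance: str) -> str:
    """Return the best-matching intent, or a fallback for reprompting."""
    words = set(utterance.lower().split())
    scores = {name: len(words & kw) for name, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    # The fallback absorbs recognition errors: reprompt instead of guessing.
    return best if scores[best] > 0 else "fallback_reprompt"

print(route("I forgot my password"))     # reset_password
print(route("let me speak to a human"))  # talk_to_agent
print(route("blargh"))                   # fallback_reprompt
```

The explicit fallback branch is the part that matters for the error-handling challenges mentioned above: a conversational front end must reprompt gracefully when recognition fails rather than routing on a bad guess.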
Monday, April 29: 1:15 p.m. - 2:00 p.m.
How can voice and behavior biometrics seamlessly verify that users are who they claim to be in real time? How can fraudsters be detected based on their voice prints, behavior anomalies, reconnaissance tactics, and robotic dialing techniques? Explore use cases and real-world examples for establishing security, identity, and trust between your organization and your customers. We share best practices and bloopers to help you have a successful voice biometrics deployment.
Phil Shinn, CTO, ImmunityAnalytics
Roanne Levitt, Senior Manager, Commercial Security Strategy, Nuance Communications
Roy Bentley, Solution Delivery Manager, LumenVox
Ben Cunningham, Director of Product Marketing, Pindrop
John Amein, Vice President, ID R&D
Monday, April 29: 2:15 p.m. - 3:00 p.m.
Behavioral speech analytics identifies typical patterns in prosodic content (the non-content parts of speech: intonation, pace, and emphasis) that reflect common behavioral patterns. It can provide a fairly strong prediction of an individual's likely behavior in various life situations. Improved call classification and routing is achieved by combining speech technology with robust natural language understanding and other artificial intelligence techniques.
Yoav Degani, CEO, VoiceSense
Qian Hu, Chief Scientist of Speech Technology and AI, The MITRE Corporation
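As a rough illustration of the non-content features behavioral speech analytics works with, the sketch below summarizes intonation, pace, and emphasis from pre-computed pitch and word-timing values. The feature set is a hypothetical simplification; real systems derive far richer prosodic features directly from the audio signal.

```python
import statistics

# Hypothetical prosodic feature extraction from pre-computed
# pitch (Hz) and word-timing data for one call segment.

def prosodic_features(pitch_hz, word_times, duration_s):
    """Summarize intonation, pace, and emphasis for a segment."""
    return {
        # Intonation: where the pitch contour sits and how much it moves.
        "pitch_mean": statistics.mean(pitch_hz),
        "pitch_variation": statistics.stdev(pitch_hz),
        # Pace: words spoken per second over the segment.
        "pace_wps": len(word_times) / duration_s,
        # Emphasis proxy: total excursion of the pitch contour.
        "pitch_range": max(pitch_hz) - min(pitch_hz),
    }

feats = prosodic_features(
    pitch_hz=[180, 210, 190, 240, 200],
    word_times=[0.2, 0.7, 1.1, 1.6, 2.3, 2.9],
    duration_s=3.0,
)
print(feats["pace_wps"])  # 2.0
```

Feature vectors like this, aggregated over many calls, are what downstream classifiers use to predict behavioral tendencies independently of what was actually said.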
Monday, April 29: 3:15 p.m. - 4:00 p.m.
This session describes and demonstrates two of the most innovative applications created at universities, selected from presentations at scientific conferences. Presentations include:
- Visual, Laughter, Applause and Spoken Expression Features for Predicting Engagement Within TED Talks—How camera angles, the audience's laughter and applause, and the presenter's speech expressions help in the automatic detection of user engagement.
- Cues to Deception in Text and Speech—How machines detect deceptive behavior. We describe a corpus for researching deceptive natural language, features that are useful cues to deception, and the role of individual differences in deceptive behavior.
David L Thomson, VP Speech Technology, CaptionCall
Fasih Haider, Research Fellow, Usher Institute of Population Health Sciences and Informatics, Edinburgh Medical School, the University of Edinburgh, UK
Saturnino Luz, Reader, Usher Institute of Population Health Sciences and Informatics, Edinburgh Medical School, the University of Edinburgh, UK
Yocheved Levitan, NLP Scientist, Interactions LLC
Monday, April 29: 4:15 p.m. - 5:00 p.m.
During this talk, conversation designers from Allstate share their experiences designing for a variety of interfaces with the goal of creating a unified experience for the audiences they serve. Get practical ideas for how your team can start sharing data, establishing common patterns, and iterating designs based on user research. Attendees also see a case study showing how designers working on separate voice and chat products find common ground when working on the same subject matter.
Michael Metts, Conversation Design Lead, Allstate
Katie Lower, Conversation Designer, Allstate
Tuesday, April 30: 10:45 a.m. - 11:30 a.m.
Treating objects such as smart speakers, robots, and smart devices as human is called anthropomorphism. Some users may forget that these devices are not human and expect human-like responses and advice, which can lead to unfortunate situations with potential social and legal repercussions. Anthropomorphism can also lead to isolation and loss of human association. Designers need to understand the social and ethical issues surrounding anthropomorphism and take steps to minimize these problems.
Judith Markowitz, President, J. Markowitz Consultants
Tuesday, April 30: 11:45 a.m. - 12:30 p.m.
With the advent of machine learning and neural nets, and with the proper amount of data, we can accurately guess identity, gender, language, perhaps age, and more. What are the ethics involved in creating a biometric-based lie detector, or possibly a sexual-preference detector? Where should we draw the line, and how do we draw it?
Brian Garr, Senior Creative Technologist, Ship Side Technologies, Virgin Voyages
Nagendra Goel, CEO, GoVivace Inc.
Steven M. Hoffberg, Of Counsel, Tully Rinckey, PLLC
Peter Soufleris, CEO and Founder, Voice Biometrics Group
Tuesday, April 30: 12:30 p.m. - 1:45 p.m.
AI can now help improve contact centers in ways that were not possible until just a few years ago. Google Cloud AI enables anyone to tap into AI built on Google technology that until recently was exclusive to Google employees. This includes our pre-trained, ready-to-use models, including speech recognition that is now twice as accurate for phone calls, WaveNet-based neural network speech synthesis, conversational NLU, and conversational analytics. Together with partners, Google is now bringing this technology to contact centers via Contact Center AI solutions. Companies with contact centers of all sizes can now automate conversational experiences and improve the performance of human agents.
Dan Aharon, Product Manager, Google Cloud AI
Tuesday, April 30: 1:45 p.m. - 2:30 p.m.
If you listen to the scaremongers, the future of the human race is at the mercy of AI. Are we destined to become a sluggish race ruled by robots, or will our own emotional intelligence prevail? This presentation examines the constraints of conversational AI, looks at the differences in skill sets between man and machine, and discusses why humans will always have a job when it comes to customer engagement.
Andy Peart, CMSO, Artificial Solutions
Tuesday, April 30: 2:45 p.m. - 3:30 p.m.
Just because we can build something, doesn’t mean we should. Voice is positioned at the forefront of technology, and as VUI designers, we are confronted with ethical decisions. This talk walks you through the kinds of ethical considerations to incorporate into your voice designs and presents tips on how to judge whether a design is ethical. Learn how to have the hard conversations with your clients and companies.
Diana Deibel, Lead Designer, Grand Studio
Tuesday, April 30: 4:15 p.m. - 5:00 p.m.
Voice is rapidly emerging as the main user interface for many apps and devices. Speech recognition and natural language understanding will change how knowledge workers interact with computers and applications, opening opportunities for innovation in human-computer interaction, including intelligent assistants in the meeting room and for team collaboration. How does speech add value to enterprise applications? What are the key opportunities and challenges for speech-enabled enterprise applications? What use cases are early adopters interested in?
Raul Castanon, Senior Analyst, Workforce Productivity and Compliance, 451 Research
Itamar Arel, CEO, Apprente, Inc.
Amritesh Chaudhuri, SVP Product & Solutions Marketing, RingCentral
Cory Treffiletti, Chief Marketing Officer, Voicea
Ellen Juhlin, Head of Product, Orion
Wednesday, May 1: 10:45 a.m. - 11:30 a.m.
Not only what a virtual assistant says, but how it speaks, will determine its success. We need to create a believable illusion that a bot concerns itself with the user's situation. We need to turn engineers, designers, and content writers into emotion-aware wordsmiths who deeply care about every word and every pause, what to emphasize, and how to respond empathically. This talk explores and demonstrates the possibilities of more personalized, contextual, and likable customer engagement using affective computing technologies and emotions analytics.
Wolf Paulus, Principal Engineer, Technology Futures, Intuit and University of California, Irvine
Wednesday, May 1: 11:45 a.m. - 12:30 p.m.
We address AI integration methods and draw practical road maps for migrating to digital omnichannel architectures by leveraging existing investments in IVRs, chatbots, and backend database interactions.
Greg Stack, Vice President, Speech-Soft Solutions, LLC