April 9-11, 2018 | Renaissance Washington DC Hotel


Wednesday, April 11, 2018

Sunrise Discussions

SD301 - Is It Time to Throw Our IVR in the Trash Can?

8:00 a.m. - 8:45 a.m.
David Attwater, Senior Scientist, Enterprise Integration Group

We look at how digital transformation is affecting call centers and how speech technology is helping to meet the new needs that are emerging. We cover key issues such as the following: Who still calls enterprises and why? Is self-service in the IVR going away and if not, why not? Why are greater and greater demands being placed on natural language in the IVR?

SD302 - Future Speech-Enabled Applications

8:00 a.m. - 8:45 a.m.
Darla Tucker, Director, Strategic Customer Solutions, Convergys

This is a brainstorming session for creative and useful new speech applications. Attendees can suggest all kinds of applications, including speaking with IoT devices, speaking to your car, speaking within VR apps, and more. The list of possible apps could be useful for planning future products. Come and help define the future of our industry.

SD303 - A Radically New Approach to Self-Service

8:00 a.m. - 8:45 a.m.
Tobias Goebel, Senior Director Emerging Technologies, Product Management and Marketing, Aspect Software, Inc.

Suppose we switch our mindset on new self-service projects from IT-centric to people-centric and include virtual agents in our workforce management, training, mentoring, and planning efforts. What if we treated them as digital employees that are a new hire only once, are trained like their human counterparts, and thus perform consistently better over time, never get sick, work 24/7, and never leave? The ROI becomes tremendous, and the resulting customer experience, superior. This talk explores what this transformation would mean for businesses and vendors alike.


Keynote: Humans in the Loop: NLP, Ontology, & Active Learning Applied for Text/Speech/Video

9:00 a.m. - 10:00 a.m.
Paco Nathan, Director, Learning Group, O’Reilly Media; Amplify Partners

Recent work has applied advanced NLP, machine learning, graph algorithms, and ontology work to automate transcripts for audio/video content. Transcripts are parsed and indexed for search and recommendations and also automatically summarized for editorial use. In particular, a human-in-the-loop design pattern, based on Jupyter notebooks and other open source tools, was applied to improve results—blending AI apps with human expertise. Those results in turn have been used to develop conversational interfaces (Alexa, etc.) for speech applications and intelligent assistants that make learning materials more accessible. This talk presents how AI apps in media help resolve important issues for both editors and consumers.

Break in the Customer Solutions Expo

10:00 a.m. - 10:45 a.m.

Track A - Managers

The MANAGERS track discusses issues that managers face. Intelligent assistants are changing how users interact with businesses. Learn if your business needs them, how to build them, how to evaluate them, and how to deploy them. In this track you will also learn what is necessary to make your voice application work in a global environment. Presenters will also evaluate various approaches for using biometrics for fraud protection, an important topic given today’s concerns about hacking.

A301 - Security & Fraud Prevention Using Biometrics

10:45 a.m. - 11:30 a.m.
Mia Puzo, Fraud Prevention SME, Nuance

Learn how biometrics uniquely balances security and convenience while bringing a new level of personalization to customer service. Discover ways in which AI can help create biometric watchlists that spot abnormal activities as they happen. Compare the features and benefits of various types of biometrics. Explore use cases for biometrics and real-world examples.

A302 - Are You Protecting Your Customers? Trends in the Authentication Transformation

11:45 a.m. - 12:30 p.m.
Jessica Baeten, Senior Manager, Business Operations, Strategic Initiatives, Verizon

From data breaches to the latest phishing scam, are you prepared to ensure your customers’ data is protected? Baeten reviews methods used to authenticate callers in contact centers. Learn about popular methods (PINs and passwords), current recommendations (voice biometrics, using multiple factors …), and what the future of authentication will look like (behavioral biometrics, predictive analytics …). Are you prepared to protect your customer?

Last Chance to Visit the Customer Solutions Expo

12:30 p.m. - 1:00 p.m.

Track B - Developers

The DEVELOPERS track discusses the problems with using voice in the Internet of Things, including the use of far-field microphones, controlling devices with voice commands, using voice for home assistants, and home automation. Learn about toolkits for developing advanced natural language understanding.

B301 - Case Study: How Speech Technology Is Transforming Brick-and-Mortar Retail

10:45 a.m. - 11:30 a.m.
Jesse Montgomery, Senior Speech Technologist, Theatro Labs

Learn how Theatro is transforming retail by equipping tens of thousands of hourly retail associates with a voice-controlled personal assistant, enabling them to communicate with one another, with store management and with backend systems. Our mobile virtual assistant solution provides a software suite of productivity and communication applications through a SaaS (Software as a Service) offering designed to optimize employee, sales, and operational performance.

B302 - Case Study: From Concept to Reality— Conversational Agents in the Real World

11:45 a.m. - 12:30 p.m.
Kenneth Conroy, VP, Data Science, Finn.ai

Conroy discusses how Finn.ai developed a production-ready conversational agent for virtual banking in Facebook Messenger. He dives deep into the decisions around building conversational agents in-house vs. working with existing vendors: what to consider, what to look for, and the constraints on both ends. Learn what tooling is available and how to leverage natural language processing and big data in a functional capacity. Conroy makes recommendations about how to build such an environment.


Track C - User Experience Designers

The USER EXPERIENCE DESIGNERS track discusses techniques for developing and evaluating VUI designs.  Compare approaches for modeling conversations for chatbots. Discover how to detect and take advantage of user emotion. 

C301 - A Compass & a Life Preserver

10:45 a.m. - 11:30 a.m.
Brenda Gutierrez, Voice Interaction Designer, Digital Experience Delivery, USAA

As designers, we strive to discover the true problems and opportunities to solve, but often we don’t get the chance. Learn how we use techniques such as usability street intercepts, interviews, and mental modeling to seek out real user problems and rapidly gain directional clarity before starting to design solutions. Since early design research is grounded in real people, directional insights can also be used to alter business scope. Often, this results in a happier business partner and a satisfied customer.

C302 - Super-Human Augmented Reality? Yes, Please!

11:45 a.m. - 12:30 p.m.
Catelyn Orsini, Voice Interface Architect, Plantronics

Because the augmentation of reality will take many forms, it’s imperative that we integrate vision, touch, hearing, and speech into new sensing environments to create a truly natural, human experience. This session covers the relationship between technology and our human senses and offers insights to those working on AR projects or foundational IoT frameworks that may help AR emerge as a human senses-led technology that people will actually welcome and enjoy.


Track D - Technologists

The TECHNOLOGISTS track discusses the use of deep neural networks and how they have changed the ASR universe. Learn about currently available speech and NL APIs. Learn about new techniques for enhancing phone calls with visuals, multiple channels, and hybrid apps.

D301 - Deep Learning Improves Speaker Verification Accuracy & Trait Extraction

10:45 a.m. - 11:30 a.m.
Pete Soufleris, VoiceVerified Inc.

Voice Biometrics Group has developed a noise modeling system based on deep learning algorithms. Soufleris describes the challenges of using prior noise filtration techniques and why newer, learning-based algorithms are able to perform better, cutting EER by 40% or more. He discusses how these techniques can be applied to gender classification, emotion detection, and channel identification. Although highly technical in nature, this presentation is framed in common, everyday terms that business decision-makers can understand.

D302 - DNN Triplets Approach to Avoid Voice Biometric Spoofing

11:45 a.m. - 12:30 p.m.
Ciro Gracia, Voice Biometrics Research & Development Engineer, Verbio Technologies

Fraud countermeasures are one of the most relevant topics today. Many companies rely on voice biometrics to provide natural and usable identity verification services. However, security breaches can be caused by recordings, audio manipulation, and even synthetic voices. Learn about the latest developments in biometric countermeasure systems and the detection of advanced spoofing attacks using the most recent deep learning training schemes.


SpeechTEK University - PostConference Workshops

SpeechTEK University courses provide in-depth, 3-hour seminars on compelling topics for speech technology and information technology professionals. Experienced speech technology practitioners teach each class in an intimate classroom setting to foster an educational and structured learning experience. If you are considering deploying a speech application, looking to increase your knowledge base in one of the key areas below, or simply need a speech technology refresher, attend a SpeechTEK University course. These courses are separately priced or are included with the purchase of a Platinum Pass.

STKU-4 - Build Your Own Cortana Skill

1:30 p.m. - 4:30 p.m.
Dorrene Brown, Program Manager, Cortana, Microsoft

The Cortana Skills Kit is a collection of APIs and technologies to build conversationally interesting experiences for Cortana. We run through the skills kit, build a skill together, and discuss best practices for conversational AI. Skills built in this session work across Windows 10, Android, iOS, and headless devices such as the Harman Kardon Invoke. Following this session, attendees will be able to intelligently discuss the skills kit with partners and will know where to go for support and help building skills in the future.

Participants are expected to bring a laptop for use during this workshop. Laptops will not be provided.

STKU-5 - Strategizing Customer Experiences for Speech

1:30 p.m. - 4:30 p.m.
Crispin Reedy, Director, User Experience, Versay Solutions

Does your organization need a strategy around voice? (Here’s a hint: The answer is, “Yes.”) What should that strategy be? Should you build an Alexa Skill or Google Action, or do something else entirely? This hands-on overview workshop looks at the problem through three different lenses: technology, design thinking, and marketing. First, tech: What’s involved in a voice-enabled design, and what are companies doing today? Second, design thinking: Start with a user-centered view of the world—what does it look like when you throw voice into the mix? Third, marketing: What does search engine optimization look like in a world without “results” pages? How should your brand sound? This session gets you thinking about these questions and gives you the tools to make intelligent decisions about your organization and voice.

STKU-6 - Recent Deep Neural Net (DNN) Advances in Speech

1:30 p.m. - 4:30 p.m.
David L Thomson, VP Speech Technology, CaptionCall

Deep learning is systematically stomping out old algorithms in multiple fields, including speech recognition, by delivering breakthrough accuracy. Combined with big data, these methods are advancing research in finance, images, self-driving cars, advertising, spoken language, and games. This tutorial will cover new formulations of machine learning with a focus on DNNs for speech recognition and natural language processing. The session includes demonstrations and hands-on exercises. We recommend that participants bring a laptop. Attendees gain an understanding of DNN fundamentals, how they are used in acoustic and language modeling, the emergence of easy-to-use software tools, and where deep learning appears to be headed. Recent advances in deep learning, speech recognition fundamentals, how DNNs work, and why they are so accurate are all discussed.

STKU-7 - Build a Conversational Chatbot for Google Assistant

1:30 p.m. - 4:30 p.m.
Michael McTear, Professor, Ulster University

Google Assistant allows users to engage with information and services through devices such as Google Home, Android smartphones and iPhones, and, ultimately, a wide range of other smart devices. Creating a conversational chatbot brings certain challenges. This tutorial begins by exploring issues of conversation design and then outlines the various tools available, such as Actions on Google, which supports the building of apps for Google Assistant; API.AI (Dialogflow), a tool for defining the natural language understanding and dialogue components of a conversation; Google templates, an alternative way to quickly create a conversational app; and Actions Simulator, a tool for testing the chatbot. The main part of the tutorial provides a hands-on demonstration of the tools.

Participants are expected to bring a laptop for use during this workshop to develop a sample chatbot for Google Assistant. Laptops will not be provided.
