
Q&A with Rana el Kaliouby, CEO and Co-Founder of Affectiva, Author of Girl Decoded


We’re kicking off a new Q&A series profiling some of the leading voices, disruptors and decision makers in AI. Today, we talk to Rana el Kaliouby, the CEO and Co-Founder of Affectiva and author of the new book Girl Decoded. We chatted about the most important issues and challenges facing AI adoption today, what we get wrong about AI, how Affectiva created a new category in the AI industry (Emotion AI) and what drove her to write Girl Decoded in the first place. Check it out!

AI is often talked about as changing the world – for better or worse. What are your views on that?

I’ve staked my career on the belief that AI will make us more human, not less – and that this can change the world for the better. Oftentimes, people think of AI as the tool that will make people more productive and efficient. But I believe that the true potential for AI lies in its ability to increase our humanity and our empathy – it’s not just about changing the way we do our jobs or interact with technology, but about strengthening the way we interact with one another.

Here’s where I see the potential. Technology today has a lot of IQ, but it’s severely lacking in EQ, or emotional intelligence. As a result, our interactions in the digital world can be transactional, ineffective, or even dehumanizing – and that’s a problem, especially as we increasingly interact virtually with coworkers, loved ones, teachers, doctors and brands.

I believe we can fix that if we build technology with artificial emotional intelligence, or Emotion AI. AI can replicate the many ways we communicate and share our emotions when we interact with one another face-to-face, ensuring that the qualities that make us human – empathy, emotion and genuine connection – do not get lost in cyberspace. That has implications not only for how we interact with each other in a digital world, but also for how brands meaningfully connect with consumers, how teachers engage students, how physicians provide longitudinal care for patients, and so much more.

You defined a new category in the AI industry. Tell us about that, and what has the reception been?

Affectiva is the pioneer in artificial emotional intelligence, or Emotion AI: software that can understand nuanced human emotions and complex cognitive states.

Several years ago, we realized that if AI had emotional intelligence, it could interact with humans in the same way that people engage with one another. We also knew that this was an underserved area of AI, with applicability in industries ranging from automotive to mental health, to conversational interfaces and media analytics. We set out to define the space and seed the market – and Emotion AI was born! Today, there is a rich ecosystem of Emotion AI players, and it’s projected to become a multi-billion-dollar industry, recognized by the media, analysts and investor community.

I want to acknowledge that creating a category isn’t just about the technology; it’s about evangelizing the vision and storytelling around its promise. When Emotion AI was getting off the ground, we knew we had to spread the word, articulate the need and share the vision. This is where partnership – with networks, industry organizations, and PR agencies (like Walker Sands!) – becomes critical to help you connect with new audiences, grow your business, fuel recruitment, and build a brand.

What are the most important issues around AI today that we can’t ignore?

At the end of the day, AI systems that are designed to engage with humans will have a lot of data and “know” a lot about the people they interact with. That comes with a lot of responsibility, and crucial questions that we must address – namely, how do we ensure the ethical development and deployment of AI?

One of the most pressing issues is the potential for bias in AI. Unfortunately, over the last few years we’ve seen many instances in which AI has been biased against minority groups. This often stems from issues in how the data is collected and how AI algorithms are trained and validated, as well as decisions to deploy AI for use cases that many would consider unethical (for example, in lie detection or law enforcement). Data privacy, opt-in and consent are other crucial considerations. To build trust in the technology, there needs to be transparency from AI companies to their consumers. People need to understand what data the AI is collecting, how it is being collected, where it is being stored, how it is being used and by whom.

While we’ve started to see some consensus on ethical principles for AI, the next step needs to be thoughtful regulation to enforce and uphold those ethical standards. As AI industry leaders, we can’t wait or rely on regulators to make this happen – we need to be partners in providing the necessary expertise that will drive that movement forward.

What are the most meaningful ways AI can have an impact?

I’m really inspired by the applications of AI in healthcare and mental health. When I was pursuing my post-doc at the MIT Media Lab, the very first application that we explored for Emotion AI was for autism. We built smart glasses to help kids on the autism spectrum learn important social, emotional and cognitive skills. Today, Affectiva works with a company called Brain Power that is continuing this work, and I am incredibly proud to partner with them. Outside of that, there are several other physicians and healthcare researchers applying Emotion AI to assist with suicide prevention, facial palsy/facial reconstructive surgery and early diagnosis of Parkinson’s disease. The potential impact of AI in healthcare makes me optimistic about our future with the technology.

Another impactful use case that we focus on at Affectiva is applying Emotion AI to the automotive industry. With the ability to detect signs of dangerous driving behavior, such as drowsiness and distraction, Emotion AI can help make our roads safer – something I can definitely get behind as the mom of a teen and a pre-teen!

In general, when I think about my kids’ future with Emotion AI – across a myriad of use cases – I envision a world where they experience more empathy in their digital lives, ultimately extending to their interactions in the real world too. To me, that potential for impact is extremely powerful.

You recently wrote a book. Tell us a bit about it, and why you wrote it.

My memoir, Girl Decoded, was published by Penguin Random House in April 2020. The book follows my journey from “nice Egyptian girl” growing up in the Middle East, to scientist and entrepreneur in the U.S., as I’ve followed my calling to humanize technology with Emotion AI.

When I first set out to write Girl Decoded, my goal was to evangelize human-centric technology and advance the AI industry. But as I reflected and went through the writing process, I realized that my professional mission was so closely tied to my own personal journey, and that I had a more universal story to share about perseverance and embracing your emotions.

My background is unique, and at times it’s still hard for me to believe that a “nice Egyptian girl” can do all of this – I’m a single mom in the U.S. with two kids, and the CEO of an AI startup in a field that is overwhelmingly white and male. Sometimes I still hear that Debbie Downer voice in my head. But I have learned to reframe the message; it is now my advocate, not my adversary, challenging me to move forward out of my comfort zone.

I hope that by sharing my story, I might help other people find their own voice and forge their own path – even if it’s different from what others expect, or a departure from what they expect of themselves.
