Designing Intelligent Content Experiences with AI Summaries & Audio

Client

RapidSOS

Project Type

Generative AI,
UI/UX Design

Team

Design, Development, PM

Overview

Developed an AI-powered multimedia content experience across text and audio to increase accessibility and drive engagement.

The challenge

Increasing engagement by leveraging large language models and AI integrations

RapidSOS is an intelligent safety platform that connects real-time emergency data from devices like smartphones, vehicles, and IoT systems to emergency response services and personnel. With a robust website team launching new initiatives weekly, I was tasked with designing and building a new content experience for their resource hubs to increase user engagement and accessibility and to align with the company's AI-first business approach.

The challenge

Analyzing Microsoft Clarity results to identify user interactions and behavior to inform early ideation

Through user research with tools such as Hotjar and Microsoft Clarity, I found that over 70% of user engagement dropped off after the above-the-fold content, and over 92% dropped off within the first four screenfuls. With this in mind, I wanted to design multiple avenues for users to engage with content, each with a low interaction cost. Because the marketing team wanted to establish a more AI-first business position, this led to the idea of intelligent summaries and a text-to-speech feature.

My Work

Leveraging APIs and AI-powered voice synthesis to create a multimodal user experience

The key integrations in this project were OpenAI’s API, connected to our content management system to generate the summaries, and ElevenLabs, a generative AI audio platform, for voice synthesis. Using custom PHP and JavaScript, I linked user interactions on the page (a button click) to API calls whose output is displayed on the site in a pop-up modal. I also created custom fields in the back end of the site to dynamically detect and read content on the page. An interesting challenge in this process was detecting the different types of content on a given page, since content is entered in several ways – some through default CMS functionality and some through page builders.
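The button-to-summary flow described above can be sketched roughly as follows. This is a hedged illustration, not the production code: the helper name, the `/wp-json/ai/v1/summary` proxy endpoint, the character cap, and the model name are all assumptions, though the request body follows the general shape of OpenAI’s Chat Completions API.

```javascript
// Build the request payload sent to a server-side proxy (keeping the
// OpenAI key out of the browser). Helper name, cap, and model name are
// hypothetical placeholders.
function buildSummaryRequest(pageText) {
  const MAX_CHARS = 8000; // assumed cap to stay within the model's context
  return {
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      { role: "system", content: "Summarize the following article in 3-4 sentences." },
      { role: "user", content: pageText.slice(0, MAX_CHARS) },
    ],
  };
}

// On the page, the button click posts the payload and drops the result
// into the pop-up. Endpoint and modal selector are hypothetical.
async function onSummarizeClick(pageText) {
  const res = await fetch("/wp-json/ai/v1/summary", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSummaryRequest(pageText)),
  });
  const { summary } = await res.json();
  document.querySelector("#ai-summary-modal .body").textContent = summary;
}
```

Routing the call through a server-side proxy is the usual design choice here, since exposing the API key in client-side JavaScript would let anyone reuse it.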
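The content-detection challenge can be illustrated with a small sketch: the reader walks a list of candidate containers – one for default CMS rich text, one for page-builder blocks – and concatenates whatever readable text it finds. The selector names are hypothetical examples, not RapidSOS’s actual markup, and in the browser this logic would query the DOM rather than take pre-fetched fragments.

```javascript
// Hypothetical candidate containers for the different content sources.
const CONTENT_SELECTORS = [
  ".rich-text",         // default CMS rich-text field
  ".builder-section p", // page-builder paragraph blocks
];

// Strip tags and collapse whitespace so the text reads naturally aloud.
function toReadableText(html) {
  return html
    .replace(/<[^>]*>/g, " ") // drop markup
    .replace(/\s+/g, " ")     // collapse runs of whitespace
    .trim();
}

// Takes HTML fragments keyed by selector (so the logic is testable
// outside a browser) and merges them into one readable string.
function collectPageContent(fragmentsBySelector) {
  return CONTENT_SELECTORS
    .flatMap((sel) => fragmentsBySelector[sel] || [])
    .map(toReadableText)
    .filter((text) => text.length > 0)
    .join("\n\n");
}
```

The same merged string can then feed both the summary request and the text-to-speech call, so the two features stay in sync about what counts as the page’s content.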