Designing Intelligent Content Experiences with AI Summaries & Audio
Client
RapidSOS
Project Type
Generative AI,
UI/UX Design
Team
Design, Development, PM
Overview
Developed an AI-powered multimedia content experience across text and audio to increase accessibility and drive engagement.
Live Link

MY WORK
Analyzing Microsoft Clarity results to identify user interactions and behavior
Using Microsoft Clarity's user behavior analysis, I found that over 70% of user engagement dropped off below the fold, and over 92% dropped off within the first four screenfuls of content. With this in mind, I wanted to design multiple avenues for users to engage with content, each with a low interaction cost. Since the marketing team also wanted to establish a more AI-first business position, this led to the idea of intelligent summaries and a text-to-speech feature.

MY WORK
Early iteration and exploration of API integration
Based on the prior heat-mapping analysis, I ideated on different user journeys that nested media content modules within the hero section of the page. The key pain points I wanted to address were skimming behavior and the high drop-off rate after the first screenful of content. I considered two approaches. The first introduced a widget-based solution, creating a console-like experience that let users interact with audio and AI-generated summaries directly within the page. The second used a lightweight pop-up treatment, giving users access to this content while preserving the blog's featured imagery and overall layout. After concept testing with the marketing team and the other designer on our team, we found that the first approach, while exciting, pulled engagement away from the rest of the page.
MY WORK
Leveraging APIs and AI-powered voice synthesis to create a multimodal user experience
The key integrations in this project connected OpenAI's API to our content management system to generate the summaries, along with ElevenLabs, a generative AI audio platform, for voice synthesis. Using custom PHP and JavaScript, I linked user interactions on the page (a button click) to API calls whose output renders on our site in a pop-up module. I also created custom fields in the back end of the site that dynamically detect and read the content on the page. An interesting challenge in this process was detecting the different types of content on a given page, since content enters the site in multiple ways: some through default CMS functionality and some through page builders.
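The click-to-summary flow described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not the production code: the endpoint path, element IDs, and CSS selectors are assumptions, and the server-side PHP that actually calls OpenAI is omitted.

```javascript
// Normalize content gathered from mixed sources (default CMS fields
// and page-builder blocks) into a single payload for the summarizer.
function buildSummaryPayload(blocks) {
  const text = blocks
    .map((b) => b.trim())
    .filter(Boolean) // drop empty blocks
    .join('\n\n');
  return { content: text };
}

// On button click, send the page content to a server-side endpoint
// (hypothetical path) that proxies the OpenAI API, then render the
// returned summary inside the pop-up module.
async function showSummary() {
  const popup = document.getElementById('ai-summary-popup'); // assumed ID
  popup.hidden = false;
  popup.textContent = 'Summarizing…';

  // Assumed selectors covering both CMS and page-builder content.
  const blocks = Array.from(
    document.querySelectorAll('.entry-content p, .builder-block p'),
    (el) => el.textContent
  );

  try {
    const res = await fetch('/api/summarize', { // hypothetical PHP endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(buildSummaryPayload(blocks)),
    });
    const data = await res.json();
    popup.textContent = data.summary;
  } catch (err) {
    popup.textContent = 'Summary unavailable right now.';
  }
}

// Wire up the trigger only when running in a browser context.
if (typeof document !== 'undefined') {
  document
    .getElementById('ai-summary-btn') // assumed ID
    ?.addEventListener('click', showSummary);
}
```

Keeping the OpenAI call behind a server-side endpoint keeps the API key out of the browser; the front end only ever sees the finished summary.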










