Zihan Li, Yan (Jennifer) Zeng, Tingting Luo, Liuchuan Yu, Shuqi Liao
Float Mind
2025
Medium: Virtual Reality, AI LLM
Float Mind is an AI-powered mixed reality (MR) meditation tool designed to help users relieve stress and build mindfulness through an immersive, AI-driven experience. Built for Meta Quest, it integrates hand tracking, AI-generated emotional insights, and interactive meditation techniques into a personalized, engaging relaxation journey.
The experience begins with Flo, an empathetic AI companion that analyzes users’ emotions and visualizes them as interactive 3D bubbles. Using hand gestures, users engage with these emotional elements, dynamically shaping their virtual space. This interaction leads into a guided meditation phase featuring calming breathing exercises, soothing animations, and interactive natural environments where users grow auroras and nurture trees.
Float Mind leverages cutting-edge AI models, speech recognition, and real-time emotional analysis to redefine traditional mindfulness practices. By merging neuroscience-backed relaxation techniques with spatial computing, it provides a science-driven, accessible approach to mental wellness.
Aligned with the Fractured Horizons theme, Float Mind envisions a future where AI, automation, and interactive environments redefine emotional well-being, bridging the digital and physical realms to create deeply personalized meditation experiences.
Image: Float Mind, 2025 (Virtual Reality, AI LLM).
About the Artists
Yan (Jennifer) Zeng, Zihan Li, Tingting Luo, Liuchuan Yu, Shuqi Liao are UX/product designers, game designers, and XR developers with backgrounds in human-computer interaction, interactive media, and architecture from the University of Pennsylvania, the University of Virginia, Purdue University, and Carnegie Mellon University.
Their practice lies at the intersection of UX/product design, spatial computing, and AI interaction, exploring how space can be redefined in environments shaped by automation and non-human-centered systems. They are committed to integrating artificial intelligence and spatial computing to create adaptive, intelligent spatial systems that respond to dynamic environments.