A.L.B.E.R.T (the Awareness Lab's Big Enlightenment Realization Technology) is an experimental work and functioning prototype by the Awareness Lab for the application of contemplative training. It demonstrates a speculative future of artificial-intelligence-assisted learning, multi-modal biofeedback, real-time algorithmic cognitive assessment, generative audio/visual environments, and participant-specific learning.
This emergent and advanced technology may be crude compared to the elegance of thousands of years of tradition, tools, and technique from qualified teachers. Yet the ambition is the same: to help someone discover themselves, train their attention, and explore their ontology through instruction, interpretation, and feedback. While perhaps crude, this platform (and others emerging like it) is novel in the known herstory in that the media environment generates itself from the internal and external states of the participant, potentially illuminating unique emerging affordances for growth in this field.
A.L.B.E.R.T is built of three parts: a live simulation from a game engine; a virtual reality head-mounted display with biosensors and a machine learning algorithm to assess real-time cognitive load; and an artificial intelligence language model trained on Vajrayana Buddhism and secular mindfulness. *The Awareness Lab is neutral to the methodologies chosen.
The experience is narrative, experiential, generative, and responsive to the participant's biorhythms and attention. The environment, UI, mental maps, and guided feedback all adapt based on targets defined by the lab or the hardware.
Team:
Jesse Reding Fleming: Director + Concept + Game Engine Developer
Shane Bolan: Technical Director + Game Engine Developer
Trystan Nord: Game Engine Developer + Sound Design
Max Urbany: Machine Learning + Artificial Intelligence Design and Development
Sound: Miguel De Pedro AKA kid606
Maital Neta: Cognitive Scientist
Mike Dodd: Cognitive Scientist
Equipment Sponsor: HP Educause / HP Omnicept
Funded by the University of Nebraska's Office of Research and Economic Development (ORED)

At the base level, Albert is a PC-run VR application. Albert's environments and code were created in Unreal Engine 5, and it uses a number of plugins to communicate with external tools and software, tying them together into a single cohesive experience.
The first of the external tools that allow Albert to function is the HP Omnicept, a VR headset that gives us access to certain aspects of the user's biometric data. This includes real-time data such as eye tracking, pupil dilation, heart rate, and cognitive load. The data is used in a few separate ways. First, Albert uses it to change the virtual world around the user so that it abstractly mirrors the user's physical and mental state.
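For reference, here is a minimal sketch of the kind of per-frame reading the rest of this description assumes. The struct is hypothetical: the actual HP Omnicept / Glia plugin delivers these signals through its own message types, and the units shown are illustrative.

```cpp
#include "CoreMinimal.h"

// Hypothetical bundle of one frame of headset readings, named after the
// signals mentioned above. This struct exists only for the sketches below;
// it is not the Omnicept plugin's real data type.
struct FBiometricSample
{
	FVector GazeDirection = FVector::ForwardVector; // combined eye-gaze direction
	float   PupilDilationMm = 0.0f;                 // pupil diameter, millimetres
	float   HeartRateBpm = 0.0f;                    // heart rate, beats per minute
	float   CognitiveLoad01 = 0.0f;                 // cognitive load estimate, 0 (low) to 1 (high)
};
```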
In one part of the visualization, the user's cognitive load changes the color of the sky, ranging from blue to pink. Meanwhile, the user's gaze causes various parts of the surrounding environment to react both visually and audibly, and other parts of the environment pulse and move with an abstraction of the user's heart rate.
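A minimal sketch of two of these mappings, assuming cognitive load arrives normalized to 0–1; the endpoint colors and the pulse shape are illustrative choices, not the project's actual values.

```cpp
#include "CoreMinimal.h"

// Blend the sky tint between a calm blue and a high-load pink.
FLinearColor SkyTintFromCognitiveLoad(float CognitiveLoad01)
{
	const FLinearColor CalmBlue(0.20f, 0.45f, 0.95f);
	const FLinearColor LoadedPink(0.95f, 0.40f, 0.75f);
	const float T = FMath::Clamp(CognitiveLoad01, 0.0f, 1.0f);
	return FLinearColor::LerpUsingHSV(CalmBlue, LoadedPink, T);
}

// Turn heart rate into a 0..1 pulse that environment materials can sample
// (for example as a scalar parameter) to swell in time with the beat.
float HeartPulse(float HeartRateBpm, float TimeSeconds)
{
	const float BeatsPerSecond = FMath::Max(HeartRateBpm, 1.0f) / 60.0f;
	// Sine abstraction of the heartbeat, remapped from [-1, 1] to [0, 1].
	return 0.5f + 0.5f * FMath::Sin(2.0f * PI * BeatsPerSecond * TimeSeconds);
}
```

In practice, values like these would be pushed into the sky material and the environment meshes' dynamic material instances each tick.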
Another part of the visualization uses a modified version of the Apollonian gasket fractal as the set piece for the scene. Here we tie the user's biometric data to variables in the fractal: both the scale and the number of iterations are driven by cognitive load. Cognitive load is also mapped to a post-processing effect that shifts the position of the image's color channels.
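A sketch of how such a mapping could be wired up through dynamic material instances. The parameter names ("Iterations", "FractalScale", "ChannelOffset") and the numeric ranges are assumptions for illustration, not the project's actual setup.

```cpp
#include "CoreMinimal.h"
#include "Materials/MaterialInstanceDynamic.h"

// Drive the gasket material and the post-process material from cognitive load.
void ApplyLoadToFractal(UMaterialInstanceDynamic* FractalMID,
                        UMaterialInstanceDynamic* PostProcessMID,
                        float CognitiveLoad01)
{
	const float T = FMath::Clamp(CognitiveLoad01, 0.0f, 1.0f);

	// Higher load -> a denser, larger gasket (e.g. 4 to 12 iterations).
	FractalMID->SetScalarParameterValue(TEXT("Iterations"), FMath::Lerp(4.0f, 12.0f, T));
	FractalMID->SetScalarParameterValue(TEXT("FractalScale"), FMath::Lerp(1.0f, 2.5f, T));

	// Higher load -> stronger separation of the RGB channels in post.
	PostProcessMID->SetScalarParameterValue(TEXT("ChannelOffset"), T * 0.01f);
}
```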
In addition, we make use of a few different AI tools that allow the user to converse with the AI and ask for guidance about mindfulness practices. To start, we transcribe the user's voice input into text using Vosk and combine it with context about what the user's biometric data is telling us and what the visualization currently looks like. That is then sent to ChatGPT, which processes it and generates informed guidance to help the user in their mindfulness training. The guidance is then vocalized back to the user using Unreal's built-in text-to-speech.
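A sketch of the middle step of that pipeline, assuming the Vosk transcription and a biometric summary arrive as plain strings and the request goes to the OpenAI chat completions endpoint over Unreal's HTTP and Json modules. The model name, system prompt, and the wording of the biometric context are assumptions; parsing the reply and handing it to text-to-speech is left out.

```cpp
#include "CoreMinimal.h"
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"
#include "Dom/JsonObject.h"
#include "Dom/JsonValue.h"
#include "Serialization/JsonWriter.h"
#include "Serialization/JsonSerializer.h"

// Pair the transcribed speech with a short biometric summary and request guidance.
// Requires the "HTTP" and "Json" modules in the project's Build.cs.
void RequestGuidance(const FString& TranscribedSpeech,
                     const FString& BiometricSummary,
                     const FString& ApiKey)
{
	TSharedPtr<FJsonObject> SystemMsg = MakeShared<FJsonObject>();
	SystemMsg->SetStringField(TEXT("role"), TEXT("system"));
	SystemMsg->SetStringField(TEXT("content"),
		TEXT("You are a calm mindfulness guide. Use the participant's biometric context in your reply."));

	TSharedPtr<FJsonObject> UserMsg = MakeShared<FJsonObject>();
	UserMsg->SetStringField(TEXT("role"), TEXT("user"));
	UserMsg->SetStringField(TEXT("content"),
		FString::Printf(TEXT("Biometric context: %s\nParticipant said: %s"),
		                *BiometricSummary, *TranscribedSpeech));

	TArray<TSharedPtr<FJsonValue>> Messages;
	Messages.Add(MakeShared<FJsonValueObject>(SystemMsg));
	Messages.Add(MakeShared<FJsonValueObject>(UserMsg));

	TSharedPtr<FJsonObject> Body = MakeShared<FJsonObject>();
	Body->SetStringField(TEXT("model"), TEXT("gpt-4o-mini")); // assumed model name
	Body->SetArrayField(TEXT("messages"), Messages);

	FString BodyString;
	TSharedRef<TJsonWriter<>> Writer = TJsonWriterFactory<>::Create(&BodyString);
	FJsonSerializer::Serialize(Body.ToSharedRef(), Writer);

	TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
	Request->SetURL(TEXT("https://api.openai.com/v1/chat/completions"));
	Request->SetVerb(TEXT("POST"));
	Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
	Request->SetHeader(TEXT("Authorization"), FString::Printf(TEXT("Bearer %s"), *ApiKey));
	Request->SetContentAsString(BodyString);
	Request->OnProcessRequestComplete().BindLambda(
		[](FHttpRequestPtr, FHttpResponsePtr Response, bool bConnectedSuccessfully)
		{
			if (bConnectedSuccessfully && Response.IsValid())
			{
				// The guidance text is at choices[0].message.content in the JSON reply;
				// from here it would be handed to the text-to-speech step.
				UE_LOG(LogTemp, Log, TEXT("Guidance response: %s"), *Response->GetContentAsString());
			}
		});
	Request->ProcessRequest();
}
```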