Our team pursued six key objectives with NOVA: guiding astronauts through UIA egress procedures, displaying vital signs, aiding navigation (pathfinding, waypoint placement, and danger-zone avoidance), enabling geo sampling, issuing rover commands, and facilitating messaging between mission control and astronauts. To accomplish this, CLAWS introduced four vital features. The first, the HoloLens UI, lets the user interact with pop-ups, buttons, alerts, and other elements that guide them through a mission. The Voice Entity for Guiding Astronauts, or “VEGA”, is an AI voice assistant that listens to and performs basic user commands through speech-to-text. The Light Unit Navigation Aid, or LUNA, is an extension that allows the user to position displays outside their primary FOV. The Mission Control Center is a web application that allows users to communicate, monitor astronaut vitals, and keep track of mission progress.
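As a rough illustration of how a VEGA-style command flow could work, the sketch below maps a transcribed utterance to a UI action through simple keyword matching. The command phrases, handler functions, and the `handle_utterance` entry point are hypothetical assumptions for this example, not CLAWS's actual implementation, which runs on the HoloLens rather than in Python.

```python
# Minimal sketch of keyword-based voice command dispatch, assuming the
# speech-to-text engine has already produced a transcript string.
# All command phrases and handlers below are hypothetical examples.

from typing import Callable, Dict

def show_vitals() -> str:
    return "Displaying vital signs panel."

def place_waypoint() -> str:
    return "Waypoint placed at current gaze target."

def next_egress_step() -> str:
    return "Advancing to the next UIA egress step."

# Map trigger phrases to UI actions; a production assistant would use
# intent classification rather than exact substring matching.
COMMANDS: Dict[str, Callable[[], str]] = {
    "show vitals": show_vitals,
    "place waypoint": place_waypoint,
    "next step": next_egress_step,
}

def handle_utterance(transcript: str) -> str:
    """Route a transcribed utterance to the first matching command."""
    text = transcript.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action()
    return "Sorry, I didn't catch that."  # fallback prompt to the astronaut

if __name__ == "__main__":
    print(handle_utterance("Vega, please show vitals"))
    print(handle_utterance("drop a marker here"))
```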
User testing affirmed NOVA's accomplishments: a sleek, user-friendly interface, versatile input methods, context-responsive voice commands, and effective lighting. Based on user interviews, we applied usability insights by delivering concise information and prioritizing eye-gaze and voice commands. In the upcoming phases, CLAWS aims to enhance eye-gaze usability, broaden LUNA's field of vision, deepen MCC integration, and expand VEGA's corpus.
HOSHI is an AR-assistive application that prioritizes an efficient balance of astronaut safety and autonomy. This cutting-edge application serves as a companion for navigation, geological sampling and documentation, as well as search and rescue operations. At its core, our system interacts seamlessly with users through the AI-powered voice assistant "VEGA".
Through extensive research, we identified communication challenges between astronauts and Mission Control, chief among them an overreliance on audio instructions from the Mission Control Center and the constraints of conventional pen-and-paper methods. These hurdles impose a substantial cognitive burden on astronauts. HOSHI rises to the occasion as an intuitive, user-centric application. It emphasizes safety, autonomy, visibility, accessibility, functionality over learnability, minimal cognitive load, and actionable information. The primary mode of interaction with HOSHI centers on voice commands, executed through the AI voice assistant VEGA. Additionally, secondary interactions such as physical gestures (finger points, gentle hand motions) guide astronauts through their missions. Feedback is then given through visual and audio cues.
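To make this interaction model concrete, the sketch below shows one way voice intents and gesture events could be merged, with voice taking priority and every action returning a paired visual and audio cue. The event types, intent names, and cue strings are illustrative assumptions, not HOSHI's actual code.

```python
# Illustrative sketch of prioritizing voice input over gestures and
# returning paired visual/audio feedback. Event and cue names are
# hypothetical and do not reflect HOSHI's real implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class InputEvent:
    source: str   # "voice" or "gesture"
    intent: str   # e.g. "start_sampling", "mark_hazard"

@dataclass
class Feedback:
    visual_cue: str
    audio_cue: str

def select_event(voice: Optional[InputEvent], gesture: Optional[InputEvent]) -> Optional[InputEvent]:
    """Voice commands are the primary modality; gestures are a fallback."""
    return voice or gesture

def execute(event: InputEvent) -> Feedback:
    """Turn an intent into paired visual and audio feedback."""
    if event.intent == "start_sampling":
        return Feedback("Open geo-sampling checklist panel", "Starting geological sampling.")
    if event.intent == "mark_hazard":
        return Feedback("Highlight hazard zone in red", "Hazard zone marked.")
    return Feedback("Show unknown-command toast", "Command not recognized.")

if __name__ == "__main__":
    voice = InputEvent("voice", "start_sampling")
    gesture = InputEvent("gesture", "mark_hazard")
    chosen = select_event(voice, gesture)   # voice wins when both arrive
    if chosen:
        fb = execute(chosen)
        print(fb.visual_cue, "|", fb.audio_cue)
```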
Building on what we have learned with HOSHI, we plan to expand the scope of interactions in next year’s project. This might mean integrating a wider array of gestures, haptic feedback, eye-gaze tracking, and speech-to-text parsing; adapting to extreme visibility conditions; building a more flexible voice assistant; and conducting further user testing.
ATLAS, our first iteration of the SUITS challenge, is an AR system for the Microsoft HoloLens designed as an assistive HUD for astronauts on lunar expeditions. This modular system offers streamlined access to mission-critical information via protocols tailored to various EVA stages: mission planning, suit prep, sample collection, repairs, emergencies, and abort procedures.
While the astronaut conducts EVAs, they are given navigation assistance and access to vital information via the voice assistant “VEGA”, as well as tools to aid in geo-sample collection and rover repairs. In the face of emergencies, readily available warnings and a pre-configured abort protocol stand ready to ensure a swift and safe response. With color-coded information levels, redundancy measures using QR codes, and simple hand and voice interactions, astronauts can use the system intuitively without disrupting the flow of the mission. Additionally, the Mission Control Center (MCC) has the authority to update mission tasks at any point, monitor biometrics, and communicate with the astronaut. Beyond this, the MCC supports off-site mission planners and scientists, fulfilling the dual roles of mission command hub and data repository.
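As a loose sketch of the color-coding and QR-redundancy ideas, the snippet below maps information severity to display colors and falls back to task data scanned from a QR code when the live MCC link is unavailable. The severity levels, hex colors, and payload format are assumptions for illustration only, not ATLAS's actual design.

```python
# Hypothetical sketch of color-coded information levels with a QR-based
# fallback: severity levels, hex colors, and the cached-payload format
# are illustrative assumptions, not ATLAS's actual design.

from enum import Enum
from typing import List, Optional

class Severity(Enum):
    INFO = "info"
    CAUTION = "caution"
    WARNING = "warning"
    EMERGENCY = "emergency"

# Display color per information level (hex values chosen arbitrarily).
LEVEL_COLORS = {
    Severity.INFO: "#4FC3F7",       # blue
    Severity.CAUTION: "#FFEB3B",    # yellow
    Severity.WARNING: "#FF9800",    # orange
    Severity.EMERGENCY: "#F44336",  # red
}

def render_alert(message: str, level: Severity) -> str:
    """Format an alert with its level's color for the HUD."""
    return f"[{LEVEL_COLORS[level]}] {level.value.upper()}: {message}"

def load_task_list(mcc_payload: Optional[str], qr_payload: Optional[str]) -> List[str]:
    """Prefer the live MCC task list; fall back to data scanned from a QR code."""
    source = mcc_payload if mcc_payload is not None else qr_payload
    if source is None:
        return []
    return [task.strip() for task in source.split(";") if task.strip()]

if __name__ == "__main__":
    print(render_alert("Suit O2 pressure dropping", Severity.WARNING))
    # Simulate loss of the MCC link: tasks come from a previously scanned QR code.
    print(load_task_list(None, "Check tether; Photograph sample site; Collect core sample"))
```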
Despite the challenges posed by the ongoing COVID-19 pandemic, the team was able to fully design the system. At this point, the core software has been developed and the UI elements have been created. In the future, additional work will be needed to integrate the UI into the main AR application and expand VEGA's abilities.