
BEST OUTCOME
Smart Glasses for Brain Tumor Detection and Description from MRI Images
An AI-powered smart glasses system delivering real-time, hands-free brain tumor detection and interpretation during surgery.
DEMONSTRATED CAPABILITY
System-Level Perception Design
🎓 Graduate-level Systems Thinking
Designed and evaluated an interactive 3D reconstruction system by balancing neural rendering fidelity, geometric consistency, and hardware constraints to support scalable AR/VR experiences.
Analyzed and compared alternative reconstruction approaches to select methods that balance visual quality, computational efficiency, and interactivity under constrained resources.
Compared alternative technical approaches by analyzing trade-offs in accuracy, robustness, and resource constraints to inform system-level design decisions.
What This Project Achieves
This project explores how artificial intelligence and augmented reality can reduce cognitive burden during complex medical diagnoses. The team developed a smart glasses system that integrates real-time brain tumor detection from MRI images with hands-free visual and textual feedback. By combining computer vision and natural language descriptions in a lightweight wearable form, the project demonstrates how AI-assisted tools can enhance surgical precision, improve workflow efficiency, and support more patient-centered healthcare.
How This Was Built — Key Highlights
This project integrated computer vision, natural language processing, and augmented reality to deliver real-time tumor insights through a wearable smart glasses system. The workflow was designed to support hands-free operation while maintaining accuracy, speed, and usability in surgical settings; a minimal code sketch of the detection-and-description flow follows the list below.
Trained a YOLOv11 model on the Br35H MRI dataset to detect brain tumor regions with high accuracy.
Integrated the Grok3 API to generate concise, context-aware natural language descriptions of tumor characteristics such as size, location, and relative risk.
Designed a lightweight AR interface to overlay detection results directly into the user’s field of view.
Built a real-time processing pipeline to support image upload or live capture with immediate analysis and display.
Aligned hardware assumptions with real-world AR-glass specifications to ensure system feasibility and practicality.
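The following is a minimal sketch of how such a detection-and-description pipeline could be wired together, assuming the Ultralytics YOLO Python API and an OpenAI-compatible client pointed at the Grok endpoint. The weight file, dataset YAML, endpoint URL, model identifier, and prompt wording are illustrative placeholders, not the team's verbatim implementation.

```python
import cv2
from ultralytics import YOLO
from openai import OpenAI

# Placeholder weight file; fine-tuning on Br35H would look roughly like:
#   detector.train(data="br35h.yaml", epochs=50, imgsz=640)
detector = YOLO("yolo11n.pt")

# Assumed OpenAI-compatible Grok endpoint and model name.
grok = OpenAI(api_key="XAI_API_KEY", base_url="https://api.x.ai/v1")

def analyze_mri(image_path: str):
    """Detect tumor regions in an MRI slice, draw the boxes as the AR
    overlay would, and return a short natural-language summary."""
    frame = cv2.imread(image_path)
    result = detector(frame)[0]  # single-image inference

    findings = []
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        conf = float(box.conf[0])
        # Overlay the detection directly on the frame, as the AR view would.
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        findings.append(f"box=({x1},{y1},{x2},{y2}), confidence={conf:.2f}")

    # Ask the language model for a concise, clinician-oriented description.
    response = grok.chat.completions.create(
        model="grok-3",  # assumed model identifier
        messages=[{
            "role": "user",
            "content": "Summarize these MRI tumor detections (size, location, "
                       "relative risk) in two sentences: " + "; ".join(findings),
        }],
    )
    return frame, response.choices[0].message.content
```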
Challenges
Developing a real-time, wearable medical AI system introduced several technical and design challenges that required careful trade-offs between performance, usability, and feasibility.
Limited battery life constrained prolonged use, requiring model optimization through pruning and quantization to reduce power consumption (a small illustrative sketch follows this list).
Dependence on Wi-Fi connectivity for real-time analysis posed reliability concerns in hospital environments.
Balancing detection accuracy with hardware constraints required iterative refinement of both software and system assumptions.
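As a small illustration of the optimization mentioned above, the sketch below applies magnitude pruning and post-training dynamic quantization in plain PyTorch. It assumes access to the detector's underlying torch module and is not the team's specific recipe; dynamic quantization only converts Linear layers, so a convolution-heavy detector would more likely use static or hardware-specific INT8 export in practice.

```python
import torch
import torch.nn.utils.prune as prune
from ultralytics import YOLO

# Placeholder weight file; `.model` exposes the underlying torch module.
detector = YOLO("yolo11n.pt")
model = detector.model

# Prune 30% of the smallest-magnitude weights in every convolution layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Dynamic INT8 quantization of Linear layers for lighter-weight inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```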
Insights
Through building and testing the system, the project revealed important insights about deploying AI models in real-world medical and hardware-constrained environments.
Hands-free, AR-assisted interfaces can significantly reduce cognitive load for clinicians during high-pressure tasks.
Combining computer vision with natural language explanations improves interpretability and trust in AI-assisted diagnostics.
System-level design choices, including hardware specifications and power constraints, are as critical as model accuracy in healthcare applications.
Project Gallery
Academic Team Feedback
Feedback from the Project Lead—an MIT AI and hardware researcher with industry experience in developing wearable AR/VR AI systems—highlighted the team’s strong technical execution and clear communication throughout the project. Drawing on his background in integrated photonic devices, AI-driven healthcare, and real-world hardware deployment, he noted that a key strength of the work was its grounding in realistic AR-glass specifications, demonstrating strong system awareness and market feasibility. The final presentation was well structured and effectively conveyed the end-to-end development process, from model design to system integration. For future improvement, the Project Lead suggested deeper quantitative benchmarking of the YOLOv11 model against alternative approaches, as well as more detailed consideration of hardware constraints and usability in moving toward truly hands-free operation.
Project Reflection
This project showed me how advanced AI models can be translated into practical, wearable healthcare tools with real clinical impact. It reinforced the importance of system-level thinking, where model performance, hardware constraints, and user experience must work together to support patient-centered care.





