A team at Quinnipiac University has created software designed to help people with mobility challenges communicate and navigate their environments. The software, named AccessiMove, uses artificial intelligence (AI) to interpret facial gestures, allowing users to control devices such as computers and wheelchairs entirely hands-free.
The initiative began after Chetan Jaiswal, an associate professor of computer science, observed a young man with a motor impairment struggling to communicate with his parents at an occupational therapy conference in 2022. The experience motivated Jaiswal to explore how technology could better serve those in need. “We are computer scientists,” he stated. “We can do better. Technology should help people who actually need it.”
Collaborating with colleagues Karen Majeski, an associate professor of occupational therapy, and Brian O’Neill, also an associate professor of computer science, Jaiswal brought together students Michael Ruocco and Jack Duggan to develop the university’s first patented software for hands-free communication through facial gestures.
How AccessiMove Works
Using a standard webcam, AccessiMove combines head-tilt detection, wink recognition, and facial landmark tracking to turn facial movements into commands. This lets individuals with limited mobility control a computer screen or other devices with their faces alone. Jaiswal emphasized the technology’s impact: “This benefits a lot of people, especially those with disabilities and motor impairment. They can actually use their face to interact with a computer.”
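The article does not disclose AccessiMove’s implementation, but the same pipeline can be sketched with off-the-shelf tools. The snippet below uses OpenCV and Google’s open-source MediaPipe FaceMesh model (an assumption, not the team’s actual stack) to estimate head roll from the line between the outer eye corners and to flag a wink from the eyelid gap; the landmark indices and thresholds are commonly cited values, not AccessiMove’s.

```python
"""Minimal head-tilt and wink detector: a sketch, not AccessiMove's code."""
import math

import cv2                 # pip install opencv-python
import mediapipe as mp     # pip install mediapipe

# Commonly cited FaceMesh landmark indices (assumed, not AccessiMove's).
L_EYE_OUTER, R_EYE_OUTER = 33, 263   # outer eye corners, used for head roll
L_EYE_TOP, L_EYE_BOTTOM = 159, 145   # left eyelids, used for wink detection

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
cap = cv2.VideoCapture(0)            # the standard webcam the article mentions

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Head roll: angle of the line joining the two outer eye corners.
        dx = lm[R_EYE_OUTER].x - lm[L_EYE_OUTER].x
        dy = lm[R_EYE_OUTER].y - lm[L_EYE_OUTER].y
        roll_deg = math.degrees(math.atan2(dy, dx))
        # Wink: the vertical eyelid gap collapses when the eye closes.
        lid_gap = abs(lm[L_EYE_TOP].y - lm[L_EYE_BOTTOM].y)
        if abs(roll_deg) > 15:                       # assumed dead zone
            print(f"head tilt detected ({roll_deg:+.0f} degrees)")
        if lid_gap < 0.01:                           # assumed threshold
            print("left eye closed (possible wink)")
    cv2.imshow("gesture sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):            # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In practice the thresholds would need per-user calibration, since eye shape and typical head posture vary widely between users.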
The team is currently seeking partnerships and funding to expand AccessiMove’s applications in various fields, particularly in healthcare. Jaiswal noted the presence of numerous healthcare partners on the US East Coast, including Yale Hospital and Hartford Hospital, who share a common goal of improving patient care. “We are looking for partners who want to make a difference in the world for those who need it,” he added.
Broader Applications and Future Goals
Majeski explained how the software functions: users’ facial gestures act like a computer mouse. Tilting the head left or right can map to navigation actions, such as opening a web browser or restarting a device, while eye blinks can trigger mouse clicks or keyboard actions.
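As a rough illustration of that mapping (the thresholds and step sizes here are hypothetical; the article does not publish AccessiMove’s gesture vocabulary), a detector’s per-frame readings could be translated into cursor actions with a cross-platform automation library such as pyautogui:

```python
import pyautogui  # pip install pyautogui; moves the real OS cursor

TILT_DEADZONE_DEG = 15   # assumed: ignore small, unintentional head motion
CURSOR_STEP_PX = 25      # assumed: pixels moved per frame of sustained tilt

def apply_gesture(roll_deg: float, blink: bool) -> None:
    """Translate one frame's gesture reading into a mouse action."""
    if roll_deg > TILT_DEADZONE_DEG:
        pyautogui.moveRel(CURSOR_STEP_PX, 0)    # tilt right -> cursor right
    elif roll_deg < -TILT_DEADZONE_DEG:
        pyautogui.moveRel(-CURSOR_STEP_PX, 0)   # tilt left  -> cursor left
    if blink:
        pyautogui.click()                       # blink -> left mouse click

# Example: a sustained right tilt moves the cursor, then a blink clicks.
apply_gesture(roll_deg=20.0, blink=False)
apply_gesture(roll_deg=0.0, blink=True)
```

A production system would also need dwell timers and calibration so that involuntary blinks do not fire unintended clicks.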
The potential applications of AccessiMove extend beyond computers. Jaiswal pointed out that the software can also integrate with wheelchairs. “A person with a disability can sit in the chair and just use gestures to move the wheelchair,” he explained, noting that subtle movements can direct the chair forward or backward. This capability could prove invaluable for seniors in assisted living facilities, enhancing their mobility and independence.
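The article does not describe the wheelchair interface, so the following is purely illustrative: a deliberate forward or backward head tilt mapped to simple drive commands, written here to an in-memory buffer standing in for a real motor-controller link. The command strings and dead zone are hypothetical stand-ins.

```python
import io
from typing import BinaryIO

PITCH_DEADZONE_DEG = 10  # assumed: ignore small, unintentional head motion

def drive(pitch_deg: float, transport: BinaryIO) -> None:
    """Map a forward/backward head tilt to a simple drive command."""
    if pitch_deg > PITCH_DEADZONE_DEG:
        transport.write(b"FWD\n")    # nod forward  -> move forward
    elif pitch_deg < -PITCH_DEADZONE_DEG:
        transport.write(b"REV\n")    # tilt back    -> reverse
    else:
        transport.write(b"STOP\n")   # neutral head -> stop

# Demo with an in-memory buffer standing in for the controller link.
buf = io.BytesIO()
drive(12.0, buf)
drive(0.0, buf)
print(buf.getvalue())  # b"FWD\nSTOP\n"
```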
O’Neill stressed that AI is essential to the software’s operation, enabling real-time tracking of facial gestures. He also highlighted AccessiMove’s accessibility: it requires no specialized hardware, just a standard camera like those built into most tablets and smartphones.
Looking ahead, the team aims to make AccessiMove an everyday tool for those who need it, while also appealing to general users seeking convenience. “The technology is useful in hospital settings,” Jaiswal noted. “Patients can use facial gestures to communicate, especially for those who can’t speak.”
Through collaboration and innovation, the team at Quinnipiac University is striving to make a meaningful difference in the lives of individuals with mobility challenges, reinforcing the notion that technology can and should serve all members of society.
