Secure and Accurate Implicit Authentication and Continuous Monitoring of Everyday Object Usage for Individuals with Disabilities
Researchers will address accessibility gaps in authentication technologies, ensuring that individuals with disabilities can interact seamlessly and securely with everyday objects in their environments, supporting their independence and fostering inclusion in their interactions with the world.
Funded by the CCI Hub
Project Investigators
- Principal Investigator (PI): Lannan Lisa Luo, Department of Computer Science, George Mason University
- Co-PI: Qiang Zeng, Department of Computer Science, George Mason University
Rationale and Background
Everyday environments contain a variety of objects, many of which are security- or privacy-sensitive and thus require users to authenticate themselves; monitoring how these objects are used is also essential. While authentication is straightforward for many people, it can prove challenging for those with physical or mental disabilities. Existing products, designed for average users, often exhibit poor usability or weak security when used by individuals with disabilities.
Methodology
Researchers will use their AuthDumbObj system, which is built on a causal relationship: an object usually moves because a human moves it. Authentication and monitoring are thereby converted into a motion-data correlation problem.
Researchers will capture the owner’s hand or foot movement data via a wearable device and the object’s motion data via a sensor node. When an object is being used, motion data from the sensor node and the wearable device are collected, and a correlation score between the two data streams is calculated to determine whether the use is legitimate, as sketched below.
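The following minimal sketch illustrates the correlation check, assuming both devices report three-axis accelerometer samples that have already been time-synchronized and resampled to a common rate over the same window; the function names, window handling, and threshold are illustrative placeholders, not the project's actual design.

```python
import numpy as np

def motion_correlation_score(wearable_acc: np.ndarray, object_acc: np.ndarray) -> float:
    """Pearson correlation between the acceleration-magnitude streams of the
    wearable device and the object's sensor node.

    Both inputs are (N, 3) arrays of synchronized, equally sampled
    accelerometer readings covering the same time window.
    """
    # Use magnitudes so the score does not depend on device orientation.
    w = np.linalg.norm(wearable_acc, axis=1)
    o = np.linalg.norm(object_acc, axis=1)
    # Remove the DC component (gravity/offset) before correlating.
    w -= w.mean()
    o -= o.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(o)
    if denom == 0.0:
        # At least one stream is flat: no evidence of shared motion.
        return 0.0
    return float(np.dot(w, o) / denom)

def is_legitimate_use(wearable_acc: np.ndarray, object_acc: np.ndarray,
                      threshold: float = 0.8) -> bool:
    """Treat the object usage as legitimate when the owner's movement and the
    object's movement are sufficiently correlated (threshold is illustrative)."""
    return motion_correlation_score(wearable_acc, object_acc) >= threshold
```

In practice, a deployed system would likely use more robust comparisons (for example, windowed cross-correlation with lag search or a learned similarity model), but the sketch captures the core idea of turning authentication and monitoring into a similarity test over two motion streams.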
Projected Outcomes
Researchers will design and implement a secure and accurate implicit authentication and continuous monitoring system for everyday object usage, with a specific emphasis on meeting the needs of individuals with disabilities.
They will seek additional funding from sources such as the NSF Disability and Rehabilitation (DARE) program.
They will also make the source code, trained models, datasets, and technique details available on GitHub.