Research Summary


My primary research interests are in the Internet of Things (IoT), with a focus on Usable Privacy, Human-Computer Interaction (HCI), Wireless Sensing, mmWave Radar, and Signal Processing.

You can browse my projects below.


Research Projects


Developed a framework that facilitates near-optimal distribution of critical supplies during a public health crisis, and investigated how well a collaborative resource-sharing system would perform in the face of a pandemic such as COVID-19.

This project is in collaboration with Bryan Bednarski and William Jones.

This work investigates how reinforcement learning and deep learning models can facilitate the near-optimal redistribution of medical equipment in order to bolster public health responses to future crises similar to the COVID-19 pandemic. The system is simulated with disease impact statistics from the Institute for Health Metrics and Evaluation (IHME), the Centers for Disease Control and Prevention (CDC), and the U.S. Census Bureau. We present a robust pipeline for data preprocessing, future demand inference, and a redistribution algorithm that can be adopted across broad scales and applications.
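To make the pipeline concrete, here is a minimal sketch of the redistribution step, assuming per-region demand forecasts have already been inferred. All names here (Region, redistribute) are hypothetical illustrations rather than the project's actual API, and the greedy allocation stands in for the learned policy described above.

```python
# Minimal sketch: greedily move surplus units from over-supplied regions to
# under-supplied ones, given demand forecasts for the next time step.
# (Hypothetical illustration, not the project's actual API.)
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    supply: int            # units (e.g., ventilators) currently on hand
    forecast_demand: int   # predicted demand for the next time step

def redistribute(regions: list[Region]) -> list[tuple[str, str, int]]:
    """Return a list of (source, destination, units) transfers."""
    surplus = [r for r in regions if r.supply > r.forecast_demand]
    deficit = [r for r in regions if r.supply < r.forecast_demand]
    # Serve the largest shortfalls first.
    deficit.sort(key=lambda r: r.supply - r.forecast_demand)
    transfers = []
    for d in deficit:
        need = d.forecast_demand - d.supply
        for s in surplus:
            if need == 0:
                break
            extra = s.supply - s.forecast_demand
            if extra <= 0:
                continue
            moved = min(extra, need)
            s.supply -= moved
            d.supply += moved
            need -= moved
            transfers.append((s.name, d.name, moved))
    return transfers

if __name__ == "__main__":
    regions = [Region("A", supply=120, forecast_demand=60),
               Region("B", supply=30, forecast_demand=90),
               Region("C", supply=50, forecast_demand=45)]
    print(redistribute(regions))  # e.g. [('A', 'B', 60)]
```

In the actual system, the forecast feeding this step comes from the learned demand-inference models rather than a fixed number per region.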

Detecting and localizing hidden sensors in a space

Developed a framework that not only detects common Wi-Fi-based wireless sensors actively monitoring a user, but also classifies and localizes each device.

This project is in collaboration with my labmates Luis Garcia and Joseph Noor, and my advisor, Prof. Mani Srivastava.

The increasing ubiquity of low-cost wireless sensors has enabled users to easily deploy systems to remotely monitor and control their environments. However, this raises privacy concerns for third-party occupants, such as a hotel room guest who may be unaware of deployed clandestine sensors. Previous methods have focused on specific modalities, such as detecting cameras, but do not provide a generalized and comprehensive method to capture arbitrary sensors which may be "spying" on a user. In this work, we propose SnoopDog, a framework to not only detect common Wi-Fi-based wireless sensors that are actively monitoring a user, but also classify and localize each device. SnoopDog works by establishing causality between patterns in observable wireless traffic and a trusted sensor in the same space, e.g., an inertial measurement unit (IMU) that captures a user's movement. Once causality is established, SnoopDog performs packet inspection to inform the user about the monitoring device. Finally, SnoopDog localizes the clandestine device in a 2D plane using a novel trial-based localization technique. We evaluated SnoopDog across several devices and various modalities and were able to detect causality for snooping devices 95.2% of the time and localize devices to a sufficiently reduced subspace.
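As an illustration of the core causality idea, the sketch below correlates a candidate device's per-second packet rate (captured by a Wi-Fi sniffer) with motion energy derived from a trusted IMU while the user deliberately moves and then stays still. The function names and the correlation threshold are illustrative assumptions, not SnoopDog's exact algorithm.

```python
# Minimal sketch of the causality test: does a device's wireless traffic
# track user motion? (Illustrative assumption, not the paper's exact method.)
import numpy as np

def motion_energy(accel: np.ndarray, fs: int) -> np.ndarray:
    """Collapse an (N, 3) accelerometer trace sampled at fs Hz into one
    motion-energy value per second."""
    mag = np.linalg.norm(accel, axis=1)
    mag = mag - mag.mean()                      # remove the gravity offset
    n_sec = len(mag) // fs
    return np.square(mag[: n_sec * fs]).reshape(n_sec, fs).sum(axis=1)

def is_monitoring(packet_rate: np.ndarray, motion: np.ndarray,
                  threshold: float = 0.6) -> bool:
    """Flag the device if its per-second packet rate correlates with the
    injected motion pattern above a chosen threshold."""
    n = min(len(packet_rate), len(motion))
    r = np.corrcoef(packet_rate[:n], motion[:n])[0, 1]
    return r > threshold
```

A device that streams video or motion events tends to emit more (or larger) packets while the user moves, which is what this correlation is probing; the full system adds packet inspection and the trial-based localization step on top.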

Developed a framework that performs accurate Human Activity Recognition (HAR) using a mmWave radar.

This project is in collaboration with my labmates Sandeep Singh Sandha and Luis Garcia, and my advisor, Prof. Mani Srivastava.

Accurate human activity recognition (HAR) is key to enabling emerging context-aware applications that require an understanding and identification of human behavior, e.g., monitoring disabled or elderly people who live alone. Traditionally, HAR has been implemented either through ambient sensors, e.g., cameras, or through wearable devices, e.g., a smartwatch with an inertial measurement unit (IMU). The ambient sensing approach is typically more generalizable across environments, as it does not require every user to have a wearable device. However, utilizing a camera in privacy-sensitive areas such as a home may capture superfluous ambient information that a user may not feel comfortable sharing. Radars have been proposed as an alternative modality for coarse-grained activity recognition that captures a minimal subset of the ambient information using micro-Doppler spectrograms. However, training fine-grained, accurate activity classifiers is a challenge because low-cost millimeter-wave (mmWave) radar systems produce sparse and non-uniform point clouds. In this work, we propose RadHAR, a framework that performs accurate HAR using sparse and non-uniform point clouds. RadHAR utilizes a sliding time window to accumulate point clouds from a mmWave radar and generate a voxelized representation that acts as input to our classifiers. We evaluate RadHAR using a low-cost, commercial, off-the-shelf radar, which produces sparse point clouds that are less visually compromising. We demonstrate our system on a collected human activity dataset with 5 different activities, compare the accuracy of various classifiers, and find that the best-performing deep learning classifier achieves an accuracy of 90.47%. Our evaluation shows the efficacy of using mmWave radar for accurate HAR, and we enumerate future research directions in this space.
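To illustrate the preprocessing step, here is a minimal sketch of the voxelization idea: accumulate sparse point clouds over a time window and discretize them into a fixed-size occupancy grid that a classifier can consume. The grid resolution, spatial bounds, and window handling are illustrative assumptions, not RadHAR's exact parameters.

```python
# Minimal sketch: turn a window of sparse radar point clouds into a fixed-size
# voxel grid. (Illustrative assumptions, not RadHAR's exact parameters.)
import numpy as np

def voxelize(frames: list[np.ndarray],
             grid: tuple[int, int, int] = (10, 32, 32),
             bounds: tuple[float, float] = (-3.0, 3.0)) -> np.ndarray:
    """frames: list of (N_i, 3) xyz point clouds from consecutive radar
    frames in one time window. Returns an occupancy grid of shape `grid`."""
    lo, hi = bounds
    vox = np.zeros(grid, dtype=np.float32)
    pts = np.concatenate(frames, axis=0)
    # Keep only points inside the region of interest.
    mask = np.all((pts >= lo) & (pts < hi), axis=1)
    pts = pts[mask]
    # Map each coordinate to a voxel index.
    idx = ((pts - lo) / (hi - lo) * np.array(grid)).astype(int)
    idx = np.clip(idx, 0, np.array(grid) - 1)
    # np.add.at counts duplicate indices correctly, unlike fancy-index +=.
    np.add.at(vox, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return vox / max(float(vox.max()), 1.0)   # normalize occupancy counts

# A sliding window would call voxelize on, say, a couple of seconds of frames
# at a time, stepping forward a few frames per classification.
```

Accumulating frames before voxelizing is what compensates for the sparsity of each individual radar frame; the resulting grid then feeds the deep learning classifiers compared in the evaluation.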