PhD Research

Papers

I’m a computer scientist studying how humans and intelligent systems collaborate in uncertain, high-stakes environments. My work spans robotic traversal, human–machine interfaces, human-AI collaboration, and human-in-the-loop control. I focus on robots navigating without vision, embedding safety into autonomy, and designing interfaces that keep people engaged without overload. Broadly, I aim to make complex systems dependable, interpretable, and cooperative—whether it’s a robot moving through darkness or an operator guiding AI safely and intuitively.

Traversal by Touch: Tactile-Based Robotic Traversal with Artificial Skin in Complex Environments

Abstract

We study traversal in a standardized DHS figure-8 course using a two-way, repeated-measures design with factors Algorithm (tactile M1/M2/M3; camera baseline CB-V; tactile baseline T-VFH; optional T-D*Lite) and Lighting (Indoor, Outdoor, Dark). Our stack is tactile-first and does not rely on illumination or texture. Across 660 trials, the memory-augmented policy (M3) and the overall tactile stack are competitive with a classical monocular camera baseline (CB-V) on aggregate performance across lighting conditions, while maintaining stable policy latency (~21 ms p50 across tiers) and success rates in the mid-80% range. On speed, M3 is consistently slower than CB-V: by ~3–4% in Indoor and by ~13–16% in Outdoor and Dark conditions. A pre-specified two one-sided tests (TOST) analysis found no evidence for speed equivalence in any M3↔CB-V comparison. These results indicate that a tactile-first, memory-augmented stack can traverse confined courses without depending on illumination, trading a modest reduction in speed for robustness and sensing independence. We report full latency distributions, rate-of-advance, and success statistics, and release per-trial logs to support replication.
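
To make the equivalence analysis concrete, here is a minimal sketch of a paired TOST on per-trial speed differences, assuming matched M3/CB-V runs and a pre-specified equivalence margin `delta`; the function name and data layout are illustrative, not the paper's code.

```python
import numpy as np
from scipy import stats

def tost_paired(diff, delta, alpha=0.05):
    """Paired TOST: test whether mean(diff) lies within (-delta, +delta).

    diff  : per-trial speed differences, e.g. M3 minus CB-V rate-of-advance
    delta : pre-specified equivalence margin, in the same units as diff
    """
    diff = np.asarray(diff, dtype=float)
    n = diff.size
    se = diff.std(ddof=1) / np.sqrt(n)
    df = n - 1
    # One-sided test against the lower bound: H0 is mean <= -delta
    p_lo = stats.t.sf((diff.mean() + delta) / se, df)
    # One-sided test against the upper bound: H0 is mean >= +delta
    p_hi = stats.t.cdf((diff.mean() - delta) / se, df)
    equivalent = max(p_lo, p_hi) < alpha
    return p_lo, p_hi, equivalent
```

Equivalence is declared only when both one-sided tests reject, so a single failing side yields the "no evidence for speed equivalence" outcome reported above.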

Software-Only Safety Assurance for Tactile Navigation via Offline Log Replay and Synthetic Scenarios

Abstract

How can we provide software-only safety assurances for tactile navigation using prior logs and synthetic scenarios, without additional hardware tests? This paper answers that question by framing safety as an offline property of what the robot has already experienced or could plausibly encounter. Building on our earlier traversal-by-touch study (memory-augmented controller), we address the remaining gap: safety assurance in vision-denied, contact-rich settings. Our framework replays recorded runs, injects synthetic faults (sensor noise, dropouts, bias), checks lightweight formal properties (pressure/force thresholds; stall/collision predicates), and triggers a software-level fallback (halt or re-route) upon violation. Across 660 trials (99.09 h) spanning indoor, outdoor, and dark conditions, log-driven analyses, without any new sensors or field tests, detect >90% of unsafe events and reduce collisions/stalls by ~50% at low computational overhead. The method is practical (runs on commodity hardware) and accessible (replaces costly instrumentation and site time) while preserving baseline performance. By turning historical logs into a safety oracle and systematically exploring counterfactuals via targeted perturbations, the framework provides actionable, software-only assurances that scale to diverse platforms and terrains, with broader impact for robots operating in vision-denied or otherwise contact-intensive environments.
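
A minimal sketch of the replay-and-check loop, assuming logs expose per-tick pressure and progress values; the log schema, thresholds, and fault parameters below are illustrative assumptions, not the paper's implementation.

```python
import random
from dataclasses import dataclass

PRESSURE_MAX = 0.9   # illustrative normalized skin-pressure threshold
STALL_TICKS = 50     # consecutive no-progress ticks counted as a stall

@dataclass
class Tick:
    pressure: float   # peak artificial-skin pressure, normalized
    progress: float   # forward progress this tick (m)

def inject_faults(ticks, dropout_p=0.02, bias=0.05, noise=0.01, seed=0):
    """Yield a perturbed copy of a logged run: dropouts, bias, Gaussian noise."""
    rng = random.Random(seed)
    for t in ticks:
        if rng.random() < dropout_p:
            continue                      # sensor dropout: tick lost
        yield Tick(t.pressure + bias + rng.gauss(0.0, noise), t.progress)

def replay_with_monitor(ticks):
    """Evaluate safety predicates per tick; return the software fallback, if any."""
    stalled = 0
    for i, t in enumerate(ticks):
        stalled = stalled + 1 if t.progress <= 0 else 0
        if t.pressure > PRESSURE_MAX or stalled >= STALL_TICKS:
            return ("fallback_halt", i)   # violation detected at tick i
    return ("safe", None)
```

Running the same monitor over raw and fault-injected replays is what allows unsafe-event detection to be measured entirely offline.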

What Makes a Space Traversable? A Formal Definition and On-Policy Certificate for Contact-Rich Egress in Confined Environments

Abstract

When is an unknown, confined environment traversable for a specific ground robot using only touch? We answer by (i) giving an environment-anchored definition of traversability, written as $T(s \to g) = \max_{\pi \in \Pi(s,g)} \min_{\tau \in [0,1]} m(\pi(\tau))$: the maximum, over all start-to-goal paths $\pi$, of the minimum margin $m$ along the path. The bottleneck margin combines clearance, curvature relative to a minimum turning radius, slope or step limits, and friction constraints; and (ii) introducing an on-policy tactile certificate (TC) that maintains a conservative, monotone lower bound from partial contact histories. The TC fuses pessimistic free-space from contacts and the robot’s body envelope, the M3 decaying contact memory as a risk prior, and local bend/force-sensing resistor proxies. A certificate is issued when the lower bound is positive and the explored corridor graph connects the start to the goal.

Relative to Papers 1–2 (tactile traversal; offline software assurance), this work formalizes traversability itself and provides a tactile-only, online certificate computable during runs. In a retrospective analysis of 660 trials across indoor, outdoor, and dark conditions: (H1) early TC margin predicts success and traversal time better than contact or dwell heuristics (higher accuracy and R²); (H2) TC predictivity is lighting-invariant; (H3) speed-gating M3 by TC margin recovers part of the camera baseline speed gap without degrading success. Artifacts include an open-source implementation, explored-corridor graphs, and per-trial TC time-series added to the Paper-1 log bundle.
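
One way to see why the TC bound is monotone: restricting the max-min definition above to the explored corridor graph under-approximates the true value, and the bound can only grow as contacts add edges. A minimal sketch under that reading, with an illustrative cell/edge representation and scalar margins standing in for the fused clearance/curvature/slope/friction terms:

```python
import heapq
from collections import defaultdict

class TactileCertificate:
    """Conservative lower bound on traversability (illustrative sketch)."""

    def __init__(self, start, goal):
        self.start, self.goal = start, goal
        self.adj = defaultdict(list)   # explored corridor graph

    def observe(self, a, b, margin):
        """Record an explored corridor segment with its pessimistic margin."""
        self.adj[a].append((b, margin))
        self.adj[b].append((a, margin))

    def lower_bound(self):
        """Max-min margin from start to goal over explored edges (Dijkstra-style)."""
        best = {self.start: float("inf")}
        heap = [(-best[self.start], self.start)]
        while heap:
            neg_b, u = heapq.heappop(heap)
            if -neg_b < best[u]:
                continue                          # stale queue entry
            for v, m in self.adj[u]:
                cand = min(-neg_b, m)             # bottleneck along this path
                if cand > best.get(v, float("-inf")):
                    best[v] = cand
                    heapq.heappush(heap, (-cand, v))
        return best.get(self.goal, float("-inf"))

    def certified(self):
        """Issue a certificate iff the bound is positive (implies connectivity)."""
        return self.lower_bound() > 0
```

For example, observing edges s–a (margin 0.12) and a–g (margin 0.08) yields a bound of 0.08 and a certificate; adding edges never lowers the bound.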

Shared-Control HMI for Tactile-First Traversal: Offline Counterfactual Evaluation with Haptic Safety Projection

Abstract

Supervising tactile-first robotic traversal in confined, uncertain spaces poses a challenge: operators must intervene without incurring cognitive overload. We present a human–machine interface (HMI) that blends operator commands with safety-constrained autonomy and surfaces risk through predictive haptic alerts. Using offline, log-driven replay of 660 trials, we counterfactually evaluate this HMI without new user studies. Results show consistent improvements: predicted collisions decrease, minimum clearance increases, traversal time and path length improve, and the traversability certificate margin rises. Operator–autonomy disagreement is reduced, with smoother control and fewer heading reversals, particularly under algorithms M2 and M3. Importantly, haptic alerts anticipate safety-critical events with positive lead time, achieving high precision and recall as objective measures of informativeness. Together, these findings indicate that shared-control blending with tactile-first autonomy can enhance safety, efficiency, and assurance while reducing conflict between operator intent and autonomy. Contributions include the method (counterfactual shared control with safety projection), metrics for safety/efficiency/assurance/conflict, empirical results across 660 trials, and release of replay and haptic-synthesis artifacts. This positions tactile-first HMI as a practical pathway for safe, low-overhead operator supervision in vision-denied, contact-rich environments.
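
A minimal sketch of the blending-plus-projection idea, assuming velocity commands of the form [v, ω], a scalar risk signal, and a clearance estimate; the specific blending law, thresholds, and haptic lead-time logic here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def blend_with_safety(u_operator, u_autonomy, risk, clearance, min_clear=0.15):
    """Blend operator and autonomy commands, then apply a safety projection."""
    alpha = np.clip(risk, 0.0, 1.0)                     # high risk: defer to autonomy
    u = (1 - alpha) * np.asarray(u_operator, dtype=float) \
        + alpha * np.asarray(u_autonomy, dtype=float)
    # Safety projection: scale down forward speed as clearance shrinks.
    scale = np.clip((clearance - min_clear) / min_clear, 0.0, 1.0)
    u[0] *= scale                                       # u = [v_forward, omega]
    return u

def haptic_alerts(risk_trace, threshold=0.7, lead=10):
    """Raise an alert `lead` ticks before predicted risk crosses the threshold."""
    alerts = np.zeros(len(risk_trace), dtype=bool)
    hot = np.where(np.asarray(risk_trace) >= threshold)[0]
    for i in hot:
        alerts[max(0, i - lead)] = True
    return alerts
```

Lead time here is the gap between the alert tick and the first threshold crossing; scoring alerts against logged safety-critical events gives the precision and recall reported above.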
