PhD Research
Papers
I’m a computer scientist studying how humans and intelligent systems collaborate in uncertain, high-stakes environments. My work spans robotic traversal, human–machine interfaces, human-AI collaboration, and human-in-the-loop control. I focus on robots navigating without vision, embedding safety into autonomy, and designing interfaces that keep people engaged without overload. Broadly, I aim to make complex systems dependable, interpretable, and cooperative—whether it’s a robot moving through darkness or an operator guiding AI safely and intuitively.

Watch this video before reading the papers.
It shows Eleven, my eleventh robot, navigating dark, confined spaces using touch instead of vision. Eleven is featured throughout my papers and demonstrates how tactile sensing can guide safe, steady movement when light and vision fail.
Dissertation: Tactile Traversability in Confined Spaces
Abstract
Autonomous robots are increasingly required to operate in environments that defeat vision and range sensing—unlit corridors, smoke-filled ducts, or confined service tunnels—where geometry is human-scale and contact is inevitable. This dissertation reframes mobility in such settings as a tactile-first problem, asking whether touch alone can sustain progress and safety when perception collapses. The central claim is expressed in the thesis statement: “This dissertation shows that touch alone supports traversal in human-scale, dark, confined spaces. Our traversal policy uses a tactile traversability (TT) cost—a score from touch—to pick the next heading quickly and to choose between going faster or being more careful; we show this on our recorded runs.”
A scalar tactile traversability (TT) cost is introduced to summarize local contact patterns in real time. Evaluated over a finite steering fan, it provides a directionally meaningful measure of traversability from contact primitives alone. A reactive traversal policy minimizes this cost each control cycle to select heading and maps the minimum to a two-regime speed law that switches between go-faster and be-careful behaviors. To ensure operational safety without sacrificing progress, a software safety filter projects each command onto a bounded-risk set defined by measured and predicted contact rates, guaranteeing constant-time enforcement of curvature, speed, and compute limits. Finally, a shared human–machine interface (HMI) blends operator intent with the autonomous policy through deterministic, risk-capped arbitration, preserving both interpretability and bounded decision latency.
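To make the control loop concrete, here is a minimal Python sketch of the heading selection and two-regime speed law described above. The fan geometry, the form of the TT cost, the thresholds, and every name are illustrative assumptions, not the dissertation's implementation (the safety filter and HMI blend are omitted).

```python
import numpy as np

# Hypothetical sketch only: tt_cost(), the fan, and all thresholds are
# illustrative placeholders, not the dissertation's implementation.

FAN = np.linspace(-0.6, 0.6, 13)   # finite steering fan of candidate headings (rad)
CAREFUL_TT = 0.5                   # regime threshold on the minimum TT cost
V_FAST, V_CAREFUL = 0.8, 0.2       # speeds (m/s) for the two regimes

def tt_cost(contacts, heading):
    """Scalar tactile-traversability cost for one candidate heading:
    contact pressure weighted by alignment with that heading."""
    angles = np.array([c["angle"] for c in contacts])
    pressures = np.array([c["pressure"] for c in contacts])
    alignment = np.cos(angles - heading).clip(min=0.0)  # ignore contacts behind
    return float((pressures * alignment).sum())

def step(contacts):
    """One control cycle: minimize TT over the fan to pick a heading,
    then map the minimum cost to the two-regime speed law."""
    costs = [tt_cost(contacts, h) for h in FAN]
    i = int(np.argmin(costs))
    speed = V_FAST if costs[i] < CAREFUL_TT else V_CAREFUL
    return FAN[i], speed

# Example: two contacts on the left steer the policy to the right.
print(step([{"angle": 0.4, "pressure": 0.7}, {"angle": 0.6, "pressure": 0.3}]))
```

In the full stack, the selected command would then pass through the constant-time safety filter and the HMI arbitration described above before actuation.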
Evidence comes exclusively from recorded physical runs on standardized confined-space courses. Results show that touch-only traversal completes all courses with non-inferior rate-of-advance relative to vision-based baselines, maintains bounded compute overhead, and reduces contacts per meter under software control. The TT scalar predicts near-term risk and progress, while its decaying contact-memory term suppresses heading reversals. Shared-control experiments demonstrate stable, lighting-agnostic arbitration and intelligible disagreement patterns between operator and autonomy.
The findings establish that a single tactile scalar, evaluated at control rate and coupled with a constant-time safety filter and shared-control blend, is sufficient for reliable traversal in human-scale, dark, confined spaces. This work positions intentional contact as a first-class perceptual channel and provides an auditable, deployable fallback for degraded-sensing autonomy.
Traversal by Touch: Tactile-Based Robotic Traversal with Artificial Skin in Complex Environments
Abstract
We study traversal in a standardized DHS figure-8 course using a two-way, repeated-measures design with factors Algorithm (tactile M1/M2/M3; camera baseline CB-V; tactile baseline T-VFH; optional T-D*Lite) and Lighting (Indoor, Outdoor, Dark). Our stack is tactile-first and does not rely on illumination or texture. Across 660 trials, the memory-augmented policy (M3) and the overall tactile stack are competitive with a classical monocular camera baseline (CB-V) on aggregate performance across lighting conditions, while maintaining stable policy latency (~21 ms p50 across tiers) and success rates in the mid-80% range. On speed, M3 is consistently slower than CB-V: by ~3–4% in Indoor and by ~13–16% in Outdoor and Dark conditions. A pre-specified two one-sided tests (TOST) analysis found no evidence of speed equivalence in any M3 vs. CB-V comparison. These results indicate that a tactile-first, memory-augmented stack can traverse confined courses without depending on illumination, trading a modest reduction in speed for robustness and sensing independence. We report full latency distributions, rate-of-advance, and success statistics, and release per-trial logs to support replication.
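For readers unfamiliar with TOST, the following Python sketch shows the general shape of such an equivalence test on per-trial speed data. The margin, alpha, and the synthetic samples are invented for illustration; this is not the paper's pre-specified analysis.

```python
import numpy as np
from scipy import stats

# Illustrative TOST (two one-sided tests) for speed equivalence.
# Margin, alpha, and the synthetic data are invented for illustration.

rng = np.random.default_rng(0)
m3_speed = rng.normal(0.52, 0.05, 60)    # hypothetical rate-of-advance (m/s)
cbv_speed = rng.normal(0.60, 0.05, 60)

def tost_ind(x, y, margin, alpha=0.05):
    """Test equivalence of two independent means within +/- margin.

    Shifting x by +/- margin turns each one-sided t-test on
    (mean_x - mean_y) into a test against the equivalence bounds."""
    p_lo = stats.ttest_ind(x + margin, y, alternative="greater").pvalue
    p_hi = stats.ttest_ind(x - margin, y, alternative="less").pvalue
    p = max(p_lo, p_hi)
    return p, p < alpha  # equivalent only if BOTH one-sided tests reject

p, equivalent = tost_ind(m3_speed, cbv_speed, margin=0.03)
print(f"TOST p = {p:.3f}, equivalence claimed: {equivalent}")
```

Unlike a plain t-test, a non-significant difference is not evidence of equivalence; TOST claims equivalence only when both one-sided tests reject, which is the standard our comparisons had to meet.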
Software-Only Safety Assurance for Tactile Navigation via Offline Log Replay and Synthetic Scenarios
Abstract
How can we provide software-only safety assurances for tactile navigation using prior logs and synthetic scenarios, without additional hardware tests? This paper answers by framing safety as an offline property of what the robot has already experienced or could plausibly encounter. Building on our earlier traversal-by-touch study (memory-augmented controller), we address the remaining gap: safety assurance in vision-denied, contact-rich settings. Our framework replays recorded runs, injects synthetic faults (sensor noise, dropouts, bias), checks lightweight formal properties (pressure/force thresholds; stall/collision predicates), and triggers a software-level fallback (halt or re-route) upon violation. Across 660 trials (99.09 h) spanning indoor, outdoor, and dark conditions, log-driven analyses—without any new sensors or field tests—detect >90% of unsafe events and reduce collisions/stalls by ~50% at low computational overhead. The method is practical (runs on commodity hardware) and accessible (replaces costly instrumentation and site-time) while preserving baseline performance. By turning historical logs into a safety oracle and systematically exploring counterfactuals via targeted perturbations, the framework provides actionable, software-only assurances that scale to diverse platforms and terrains, with broader impact for robots operating in vision-denied or otherwise contact-intensive environments.
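A minimal Python sketch of the replay-and-check loop follows, assuming a simple list-of-records log format. The fault model, thresholds, predicates, and fallback below are placeholder stand-ins for illustration, not the framework's actual interfaces.

```python
import random

# Minimal sketch of offline replay with synthetic fault injection and
# lightweight property checks. The log schema, fault model, thresholds,
# and fallback are illustrative placeholders.

FORCE_LIMIT = 12.0   # hypothetical contact-force threshold (N)
STALL_TICKS = 20     # consecutive near-zero-progress steps count as a stall

def inject_faults(log, noise=0.5, dropout_p=0.02, bias=0.3, seed=0):
    """Perturb recorded force readings: Gaussian noise, dropouts, bias."""
    rng = random.Random(seed)
    out = []
    for rec in log:
        r = dict(rec)
        if rng.random() < dropout_p:
            r["force"] = 0.0                        # sensor dropout
        else:
            r["force"] += rng.gauss(0.0, noise) + bias
        out.append(r)
    return out

def check_properties(log):
    """Return the first violation as (step, kind), or None if safe."""
    stall = 0
    for i, rec in enumerate(log):
        if rec["force"] > FORCE_LIMIT:
            return i, "force_threshold"
        stall = stall + 1 if rec["progress"] < 1e-3 else 0
        if stall >= STALL_TICKS:
            return i, "stall"
    return None

def replay_with_fallback(log):
    violation = check_properties(inject_faults(log))
    if violation:
        i, kind = violation
        return f"fallback (halt) at step {i}: {kind}"
    return "run certified safe under this fault scenario"

# Example on a tiny synthetic log:
log = [{"force": 4.0 + 0.1 * t, "progress": 0.02} for t in range(100)]
print(replay_with_fallback(log))
```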
What Makes a Space Traversable? A Formal Definition and On-Policy Certificate for Contact-Rich Egress in Confined Environments
Abstract
When is an unknown, confined environment traversable for a specific ground robot using only touch? We answer by (i) giving an environment-anchored definition of traversability, written as: the traversability value equals the maximum, over all possible start-to-goal paths, of the minimum margin along the path. The bottleneck margin combines clearance, curvature relative to a minimum turning radius, slope or step limits, and friction constraints; and (ii) introducing an on-policy tactile certificate (TC) that maintains a conservative, monotone lower bound from partial contact histories. The TC fuses pessimistic free-space from contacts and the robot’s body envelope, the M3 decaying contact memory as a risk prior, and local bend/force-sensing resistor proxies. A certificate is issued when the lower bound is positive and the explored corridor graph connects the start to the goal.
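In symbols, the bottleneck definition above can be written as follows, where Π(s, g) is the set of feasible start-to-goal paths and m(x) is the combined clearance/curvature/slope/friction margin at a point x along a path (notation ours):

```latex
T(\mathcal{E}) \;=\; \max_{\pi \,\in\, \Pi(s,\,g)} \;\; \min_{x \,\in\, \pi} \; m(x),
\qquad
\mathcal{E} \text{ is traversable} \iff T(\mathcal{E}) > 0 .
```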
Relative to Papers 1–2 (tactile traversal; offline software assurance), this work formalizes traversability itself and provides a tactile-only, online certificate computable during runs. In a retrospective analysis of 660 trials across indoor, outdoor, and dark conditions: (H1) early TC margin predicts success and traversal time better than contact or dwell heuristics (higher accuracy and R²); (H2) TC predictivity is lighting-invariant; (H3) speed-gating M3 by TC margin recovers part of the camera baseline speed gap without degrading success. Artifacts include an open-source implementation, explored-corridor graphs, and per-trial TC time-series added to the Paper-1 log bundle.
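To illustrate the certificate's mechanics, here is a hypothetical Python sketch of the monotone lower-bound update and the issuance rule. The corridor graph, the pessimistic per-segment margin estimates, and all names are invented for illustration and stand in for the paper's actual estimator.

```python
import networkx as nx

# Hypothetical sketch: maintain a conservative, monotone lower bound on
# the bottleneck margin, and issue a certificate once it is positive and
# the explored corridor graph connects start to goal.

class TactileCertificate:
    def __init__(self, start, goal):
        self.graph = nx.Graph()           # explored corridor graph
        self.graph.add_node(start)
        self.start, self.goal = start, goal
        self.lower_bound = float("-inf")  # conservative bottleneck bound

    def observe(self, a, b, margin_estimate):
        """Fold one traversed segment (with a pessimistic margin estimate)
        into the graph; the bound only moves upward, so it is monotone."""
        self.graph.add_edge(a, b, margin=margin_estimate)
        if self.goal in self.graph and nx.has_path(self.graph, self.start, self.goal):
            # Bottleneck: max over known routes of the minimum edge margin.
            best = max(
                min(self.graph[u][v]["margin"] for u, v in zip(p, p[1:]))
                for p in nx.all_simple_paths(self.graph, self.start, self.goal)
            )
            self.lower_bound = max(self.lower_bound, best)

    def issued(self):
        return self.lower_bound > 0.0

tc = TactileCertificate("start", "goal")
tc.observe("start", "junction", margin_estimate=0.12)
tc.observe("junction", "goal", margin_estimate=0.05)
print(tc.issued(), tc.lower_bound)  # True 0.05 (the route's bottleneck)
```

Because each segment estimate is pessimistic and the bound only tightens upward, a positive value remains a sound lower bound on the max-min traversability defined above.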
Shared-Control HMI for Tactile-First Traversal: Offline Counterfactual Evaluation with Haptic Safety Projection
Abstract
Supervising tactile-first robotic traversal in confined, uncertain spaces poses a challenge: operators must intervene without incurring cognitive overload. We present a human–machine interface (HMI) that blends operator commands with safety-constrained autonomy and surfaces risk through predictive haptic alerts. Using offline, log-driven replay of 660 trials, we counterfactually evaluate this HMI without new user studies. Results show consistent improvements: predicted collisions decrease, minimum clearance increases, traversal time and path length improve, and the traversability certificate margin rises. Operator–autonomy disagreement is reduced, with smoother control and fewer heading reversals, particularly under algorithms M2 and M3. Importantly, haptic alerts anticipate safety-critical events with positive lead time, achieving high precision and recall as objective measures of informativeness. Together, these findings indicate that shared-control blending with tactile-first autonomy can enhance safety, efficiency, and assurance while reducing conflict between operator intent and autonomy. Contributions include the method (counterfactual shared control with safety projection), metrics for safety/efficiency/assurance/conflict, empirical results across 660 trials, and release of replay and haptic-synthesis artifacts. This positions tactile-first HMI as a practical pathway for safe, low-overhead operator supervision in vision-denied, contact-rich environments.
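The following Python sketch illustrates the general shape of the pipeline: a deterministic operator-autonomy blend projected onto a risk-capped safe set, plus lead-time-based precision/recall scoring of alerts. The blend weight, limits, risk model, and scoring window are assumptions for illustration, not the paper's arbitration scheme.

```python
import numpy as np

# Illustrative sketch of counterfactual shared control: blend operator
# and autonomy commands, then project onto a risk-capped safe set.
# Blend weight, limits, and the risk model are invented placeholders.

V_MAX, OMEGA_MAX = 0.8, 1.2   # hypothetical speed / yaw-rate limits

def blend(u_operator, u_autonomy, alpha):
    """Deterministic convex blend; alpha is the authority given to autonomy."""
    return (1.0 - alpha) * np.asarray(u_operator) + alpha * np.asarray(u_autonomy)

def safety_project(u, risk, risk_cap=0.6):
    """Clamp to actuator limits, then scale speed down whenever the
    predicted risk score exceeds the cap (a simple bounded-risk set)."""
    v = float(np.clip(u[0], 0.0, V_MAX))
    omega = float(np.clip(u[1], -OMEGA_MAX, OMEGA_MAX))
    if risk > risk_cap:
        v *= risk_cap / risk
    return np.array([v, omega])

# Counterfactual replay of one logged timestep: what the blended,
# projected command would have been.
u = blend(u_operator=[0.7, 0.9], u_autonomy=[0.3, -0.2], alpha=0.5)
print(safety_project(u, risk=0.9))

def alert_precision_recall(alerts, events, lead=2.0):
    """Score haptic alerts against logged safety events: an alert counts
    as a true positive if an event follows within `lead` seconds."""
    tp = sum(any(0.0 <= e - a <= lead for e in events) for a in alerts)
    hit = sum(any(0.0 <= e - a <= lead for a in alerts) for e in events)
    precision = tp / len(alerts) if alerts else 0.0
    recall = hit / len(events) if events else 0.0
    return precision, recall

print(alert_precision_recall(alerts=[1.0, 5.0, 9.0], events=[2.5, 9.5]))
```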
About
Adam Mazurick is a PhD candidate in Computer Science, expected to graduate in 2026. He also holds a Master’s in Human-Computer Interaction and an Honours degree in Fine Art. As design team lead at a top-10 firm, he delivers human-centered solutions that meet both user and organizational needs.
Links
