Uncovering the Mechanics of Vision AI Model Failures: Textures and Beyond
Blaine Hoak
University of Wisconsin-Madison
(hosted by Christof Paar)
16 Feb 2026, 10:00 am - 11:00 am
MPI-SP building, Bochum, room MB/1-84/90
CIS@MPG Colloquium
Artificial Intelligence (AI) models now serve as core components of a range of
mature applications, yet they remain vulnerable to a wide spectrum of attacks,
and the research community has not yet developed a systematic understanding of
model vulnerability. In this talk, I approach uncovering the mechanics of model
failure from two complementary perspectives: the design of attack techniques
and the features models exploit. First, I introduce The Space of Adversarial
Strategies, a robustness evaluation framework constructed by decomposing and
reformulating existing attacks. With this framework, I isolate the components
that drive attack success and provide insights for future defenses.
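To make the component view concrete, here is a minimal sketch (in PyTorch, and not the speaker's implementation) of how an iterative evasion attack can be assembled from interchangeable parts: a loss, a step rule, and a projection. All names below are illustrative assumptions, not terminology from the talk.

```python
# Sketch: a generic iterative attack as a composition of components.
# Not the framework from the talk; names and choices are assumptions.
import torch
import torch.nn.functional as F

def component_attack(model, x, y, loss_fn, step_fn, project_fn, steps=10):
    """Assemble an evasion attack from a loss, a step rule, and a projection."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Swapping step_fn or project_fn yields a different known attack.
            x_adv = project_fn(step_fn(x_adv, grad), x)
        x_adv.requires_grad_(True)
    return x_adv.detach()

# Example component choices recovering a PGD-style attack under an L-inf budget.
eps, alpha = 8 / 255, 2 / 255
loss_fn = F.cross_entropy
step_fn = lambda xa, g: xa + alpha * g.sign()          # signed-gradient ascent
project_fn = lambda xa, x: torch.clamp(                 # clip to the eps-ball
    torch.clamp(xa, x - eps, x + eps), 0, 1)
```

Under this kind of decomposition, each existing attack corresponds to one point in the space of component choices, which is what makes it possible to ask which components actually drive attack success.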
Motivated by the widespread failures observed, I then turn to the feature
space, where I uncover differences between visual processing in models and in
the human visual system that explain failures in AI systems. My work reveals
that textures, or repeated patterns, are a core mechanism driving model
generalization, yet are also a primary source of vulnerability. I present new
methodologies to quantify a model's bias toward texture, uncover learned
associations between textures and objects, and identify textures in images.
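As one illustration, texture bias is often quantified with cue-conflict images, where the texture of one class is imposed on the shape of another; the sketch below follows that common recipe and may differ from the speaker's own metrics. The `model` and the loader of (image, shape label, texture label) triples are assumed inputs.

```python
# Sketch: cue-conflict texture-bias score. One common recipe, not
# necessarily the methodology presented in the talk.
import torch

@torch.no_grad()
def texture_bias(model, cue_conflict_loader):
    """Fraction of cue-decisive predictions that follow texture over shape."""
    texture_hits, shape_hits = 0, 0
    for x, shape_y, texture_y in cue_conflict_loader:
        pred = model(x).argmax(dim=1)
        texture_hits += (pred == texture_y).sum().item()
        shape_hits += (pred == shape_y).sum().item()
    decisive = texture_hits + shape_hits  # predictions matching either cue
    return texture_hits / decisive if decisive else float("nan")
```

A score near 1 indicates the model resolves the cue conflict by texture, near 0 by shape; intermediate values indicate mixed reliance.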
Applying these methods, I find that up to 90% of failures can be explained by
mismatches in texture information, highlighting texture as an important yet
overlooked influence on model robustness. I conclude by outlining future work
on addressing trustworthiness issues in both classification and generative
settings, with particular attention to (mis)alignment between biological and
artificial intelligence.