Building Image Recognition systems for people with prosthetics

Context

Image recognition systems have improved enormously over the last couple of years. State-of-the-art models can recognise people almost flawlessly in general use cases.

However, not everyone's body is the same, and it is here that image recognition systems still have a lot to learn. State-of-the-art systems have numerous problems recognising people wearing prosthetics, often labelling these prosthetics with rather denigrating terms such as "chair" or "toilet".

This case study aims to make product and data science teams more aware of this bias and offers insights into making these systems more inclusive of people wearing prosthetics.

Approach

Together with the Amputee Care Center by Spronken in Genk, Belgium, we invited four people with prosthetics, one occupational therapist, and the coordinator of the center to explore the results of a state-of-the-art image recognition system on prosthetics.

The goal was to understand in which ways the system interprets their prosthetics, what the impact of these interpretations on participants can be, and how they would like the system to change to be more inclusive of them. Methods included group interviews, direct observations, and prototyping with Mask2Former.
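The prototyping step can be sketched roughly as follows. This is a minimal, hypothetical example using the Hugging Face transformers implementation of Mask2Former with the public facebook/mask2former-swin-base-coco-panoptic checkpoint; the study does not state which checkpoint or tooling was actually used.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation


def list_segment_labels(image_path: str,
                        checkpoint: str = "facebook/mask2former-swin-base-coco-panoptic"):
    """Run panoptic segmentation on one image and return the predicted labels.

    Returns the human-readable class name for each detected segment,
    which is where labels like "chair" or "bottle" surface.
    """
    processor = AutoImageProcessor.from_pretrained(checkpoint)
    model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)

    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Post-process into a panoptic map plus per-segment metadata.
    # target_sizes expects (height, width); PIL's .size is (width, height).
    result = processor.post_process_panoptic_segmentation(
        outputs, target_sizes=[image.size[::-1]]
    )[0]

    return [model.config.id2label[seg["label_id"]]
            for seg in result["segments_info"]]
```

In a session like the one described above, each participant's photo would be passed through such a function and the returned labels discussed with the group.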

Findings

Absurd labels for prosthetics

We quickly observed that the image recognition system often labels people wearing prosthetics in absurd ways — naming a prosthetic arm as a “bottle”, a prosthetic leg as a “chair”, or ignoring the prosthetic entirely by subsuming it under the label “person”.

Though participants differed on what exactly their prosthetics should be called, they all agreed that prosthetics need a label of their own. One participant suggested using vernacular names depending on region and culture.
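Part of the explanation is structural: a segmentation model trained on a fixed dataset such as COCO can only ever emit labels from its training vocabulary, and "prosthetic" is not among the COCO panoptic categories. The sketch below illustrates the constraint; the category set is a small hand-picked sample of real COCO labels, not the full list of 133.

```python
# A small sample of the COCO panoptic categories a model like
# Mask2Former is restricted to at prediction time.
COCO_PANOPTIC_SAMPLE = {
    "person", "chair", "bottle", "toilet", "bicycle", "backpack", "bed",
}


def is_expressible(label: str, vocabulary=COCO_PANOPTIC_SAMPLE) -> bool:
    """A segmentation model can only emit labels from its training vocabulary."""
    return label in vocabulary


# "prosthetic" is not in the vocabulary, so the model is forced to map
# those pixels onto whichever trained category fits best, e.g. "chair".
```

Giving prosthetics a label of their own therefore requires retraining or fine-tuning on data where that category exists, not merely adjusting the model's output.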

Societal ignorance

One participant noted that these misrepresentations are not just a technical problem, but also a wider societal issue. While social acceptance of prosthetics has increased enormously over the last decades, ignorance remains. It is not unexpected that an image recognition system built from that society reflects the same blind spots.

Diversity of prosthetics

There is a wide variety of prosthetics in terms of body part, material, texture, colour, and decoration. A prosthetic arm with a Caucasian skin-like sleeve produces very different recognition results than an exposed metal one — both from the same person. This diversity makes a one-size-fits-all labelling approach fundamentally inadequate.

Impact of mislabelling

For long-time prosthetic wearers, absurd labels were often met with humour and perspective. The centre’s coordinator, however, pointed out that people who have just received a prosthetic — for instance after an accident — have not yet made peace with their new reality. For them, labels like “chair” or “toilet” can be deeply confrontational and hinder the process of acceptance.

Someone is going to need to take a lot more pictures.

— A participant, on seeing his prosthetic leg labelled as "toilet"

Conclusion: what data scientists can learn

The need for diverse data science teams

The misrepresentations discussed here stem partly from a society that has not yet fully embraced prosthetics. Even diverse data science teams are formed from the fabric of that same society. Team members who have intimate familiarity with prosthetics therefore bring a significant and irreplaceable advantage.

Perhaps this issue is not solely one of knowledge, but more so one of empathy and familiarity.

The non-uniformity of people wearing prosthetics

People wearing prosthetics are not a uniform group. They have very different preferences for how they want to be represented. Data science teams should abandon the idea of building for a single, homogeneous group — and instead involve people from this community directly in the development process, accounting for the nuance within it.

The impact of misrepresenting prosthetics

Data science teams need to be aware of the emotional weight wrong labels can carry. As image recognition becomes more deeply integrated into products and services, the stakes of mislabelling will only increase. Building inclusively here is not a nice-to-have — it is a responsibility.
