Hi, welcome to Till's Newsletter, a weekly column all about AI for the AI-avoidant. Let's learn together.
5 Min Read
Happy Thursday folks :) In today's newsletter ...
Human-centered design: building for when shit hits the fan.
You're not a bonehead, some things are just hard to use.
Human-centered AI and the "black-box effect".
"When we collaborate with machines, it is people who must do all the accommodation. Why shouldn't the machine be more friendly?"
– Don Norman, The Design of Everyday Things
Last week, I wrote to you about how we talk to AI. We went into prompting techniques – specific ways to shape and reshape your requests so you can make yourself clear to the AI you're working with.
But that raises the question: is the onus of being understood really always on us?
We're told that in any good relationship, communication has to be a two-way street. So what are we doing to make sure that the technology built to work alongside humanity is built ... with humanity in mind?
Around this time last year, I picked up The Design of Everyday Things by Don Norman – a man I'd never heard of, but who would quickly make me rethink my relationship to the tech I use every day.
*Don Norman, by the way, is a design legend "now busily engaged in his 5th retirement," according to his LinkedIn profile. Like I said, a legend.
In his book, Norman advocates for Human-Centered Design (HCD), which I'll take a stab at defining in my own words here.
HCD is tailored to real life, taking into account how people actually do something vs. how they "should" be doing it, all in an effort to design things real people love to use.
It's design with the ever-present acknowledgment that there isn't always a correlation between the best solution in theory and the best solution in practice – often because people will always do what they do best: behave inconsistently, make mistakes, and dump the instruction manual.
A simple example: this cup with a quirky handle, made by a guy I met at a networking event recently. After deciding that mug handles weren't made with human hands in mind, he got to work and reimagined what the experience of holding a mug could be like.
Next time technology's got you stuck, try giving yourself the benefit of the doubt.
Can't figure out how to turn that new device on? Maybe the "on" button is just stupidly hard to find.
Did you have to watch 2 1/2 YouTube videos to figure out how to use that new feature? Well, maybe the feature wasn't intuitive.
Maybe you're not a (complete) bonehead. Maybe the thing you're struggling with just needed to be built more intuitively.
"The idea that a person is at fault when something goes wrong is deeply entrenched in society ... But in my experience, human error usually is a result of poor design: it should be called system error."
– Don Norman, The Design of Everyday Things
To understand how to work alongside AI, we've got to overcome "the black-box effect".
Human-Centered AI (HCAI) is the AI-world equivalent of HCD, and it's more important than ever today.
In his 2019 article on HCAI, author Wei Xu describes the incomprehensibility and mystery of AI to the general public as the "black-box effect".
Much like the rise of computers in the '80s, Xu writes, AI experts started off designing only with other experts in mind, creating a widespread accessibility problem for the technology.

Especially in the early days, a lack of transparency is a big issue for a technology as impactful as the computer, let alone AI.
It's in the early days that foundations are laid, and the answer to the question of who is laying those foundations – and with whom in mind – can have an outsized impact.
In the case of AI, that opaqueness can manifest as racial and gender biases in face-recognition technology or, as Xu illustrates, a medical-diagnostic AI indicating that a patient may be predisposed to cancer without being able to explain to either the patient or the doctor how it reached that concerning conclusion (imagine how frustrating that would be!).
AI can't just be compatible, it has to be companionable.
"I'm interested in when and where and if AI might just be more than compatible but maybe actively companionable; might extend and elevate creativity the way a pen, or a paintbrush, or a musical instrument can."
– Michele Elam, Associate Director at Stanford HAI
The challenge with HCAI is that AI can't just be compatible with humanity; it has to act as a well-meaning and effective companion as well. AI systems have the potential to automate many processes now carried out by humans and make their roles obsolete – something both Xu and IBM agree can't happen.
AI experts at IBM believe that HCAI-focused collaboration between AI and humans should free the human up for high-level creativity, goal setting, and steering the project, while AI augments those abilities by pitching in creatively, scaling potential solutions, and handling low-level detail work.
Like I wrote in my first newsletter, the hope for AI is that it becomes a collaborator, not a replacement. To get there, we've got to understand one another, and that road runs two ways.