Jan 19, 2025
[Header image generated by NovelAI]
A collection of my revelations as I study human-AI interaction.
My goal is to answer the question: "How can we design AI interaction to enhance human capability?"
I'm convinced machine learning can positively alter human experience. It isn't just something to optimize mathematically; that framing feels too brutally logical.
The way Josh Lovejoy (Principal UX @ Google PAIR Lab) puts it:
"The role of AI shouldn't be to find the needle in the haystack for us, but to show us how much hay it can clear so we can better see the needle ourselves."
Machine learning is a black box, and understanding how a user interacts with such an unbounded system is even more ambiguous. Here are my notes from trying to dissect this problem and propose solutions:
Interpretability is the most important problem in interaction. How does the model think? What are its inputs and outputs? How does it make decisions? Traditionally, all of this was stored neatly inside a black box. Here are some insights I've gathered on explainable AI:
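To make the "how does it make decisions?" question concrete, here's a minimal sketch of one common explainability technique: permutation feature importance. It treats the model as a pure black box, asking only how predictions degrade when each input is scrambled. The model and dataset here (a random forest on scikit-learn's built-in breast cancer data) are my own illustrative choices, not a specific recommendation:

```python
# A minimal sketch: probe a black-box model through its inputs and outputs
# alone, using permutation feature importance from scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train an opaque model; we only interact with it via fit/predict.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
# Features whose shuffling hurts most are the ones the model leans on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five features the model depends on most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Nothing here opens the box itself; it just maps which inputs drive the outputs, which is often enough to give a user a working mental model of the system.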
On the surface, HAII seems abstract. Progress isn't marked by finite achievements like launching a rocket, so it needs a new definition of success. Here are some analogies to help understand the problem: