notes on human-ai interaction

Jan 19, 2025

generated by NovelAI

overview

A collection of revelations from my study of human-AI interaction. My goal is to answer one question: "How can we design AI interaction to enhance human capability?" I'm convinced machine learning can positively alter human experience; it isn't just something to mathematically optimize, which feels too brutally logical.

The way Josh Lovejoy (Principal UX @ Google PAIR Lab) puts it:

The role of AI shouldn’t be to find the needle in the haystack for us, but to show us how much hay it can clear so we can better see the needle ourselves.

the ux

Machine learning is a black box, and understanding how a user interacts with such an unbounded system is inherently ambiguous. Here are some notes from my attempts to dissect this problem and propose solutions:

  • Experiential framework: familiarity is king. Attach the system to existing mental models so the user feels in control and owns the system, not the other way around.

ml interpretability

Interpretability is the most important problem in interaction. How does the model think? What are its inputs and outputs? How does it make decisions? Traditionally, all of this was hidden inside a black box. Here are some insights I've gathered on explainable AI:

  • Tilde Research is building interpreter models at every step of the training process instead of after it. For a sentence like "I love playing violin", this lets you ascribe the AI's internal features to each word.
  • Sparse autoencoders extract the most semantically relevant features from a huge amount of data and store them in a vector space.
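The sparse autoencoder idea above can be sketched in a few lines: project a model's activation vector into a larger, overcomplete feature space with a non-negativity constraint and a sparsity penalty, then reconstruct the original activation. This is a minimal illustrative sketch in NumPy, not Tilde's actual implementation; the dimensions, weights, and penalty coefficient are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete: more features than activation dimensions (assumed sizes).
d_model, d_features = 8, 32
W_enc = rng.normal(0, 0.1, (d_model, d_features))
W_dec = rng.normal(0, 0.1, (d_features, d_model))
b_enc = np.zeros(d_features)

def encode(x):
    # ReLU keeps features non-negative; many end up exactly zero (sparse).
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # Map sparse features back to the original activation space.
    return f @ W_dec

x = rng.normal(size=d_model)   # one token's activation vector
f = encode(x)                  # sparse, interpretable feature vector
x_hat = decode(f)              # reconstruction of the activation

# Training objective: reconstruction error plus an L1 sparsity penalty.
loss = np.sum((x - x_hat) ** 2) + 0.01 * np.sum(np.abs(f))
```

In a trained SAE, each of the 32 feature directions would ideally correspond to a human-readable concept, which is what makes the per-word attribution possible.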

misc & analogy

On the surface, HAII seems abstract. Its progress isn't marked by finite achievements like launching a rocket, so it requires a new definition of success. Here are some analogies that help frame the problem:

  • Frankenstein situation: the creator, obsessed with scientific advancement, never considered how to interact with his creation.
  • Computational orientation: a bias toward technological progress (e.g. inference time, cost-function optimization) that ignores the human experience.
    • People who can reason beyond pure logic often study the humanities and social sciences rather than pure engineering.