Treat AI Like a Child Instead of an Expert

When ChatGPT first came out, I immediately wondered how we might evolve a model that was trained on all of the internet (including its darker recesses). How might we bring the better and missing parts of humanity into its nodes?

One thought experiment is to consider treating generative AI the way we would want to raise a child.

I’d want to provide:

  • Nutritious food = accurate data
  • A well-rounded education = diverse data and contexts
  • Feedback = hints/questions/nudges to improve
  • Values = guiding principles (perhaps tricky to agree on universally)
  • Loving environment = … (still pondering the digital analog)

Perhaps we are already slowly working through parts of that thought experiment.

Grounding

AI models that work perfectly out of the box do not exist yet. They are limited by what they were trained on, and for many use cases they need to learn more.

While companies are figuring out the degree to which they should invest in fine-tuning or training their own models, both startups and established companies are rushing to develop solutions to ease development workflows and address accuracy, security, and privacy needs. In particular, established companies are building solutions to help clients more confidently deploy an LLM (or even a menagerie of models), anchored in their own data.
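To make "anchored in their own data" concrete, here is a minimal sketch of the retrieval-augmented pattern many of these products build on: fetch the most relevant internal documents, then hand them to the model as context. The `embed` and `llm` functions are hypothetical placeholders for whichever embedding and chat models you use, not any particular vendor's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch: answer questions grounded
# in an organization's own documents rather than only the model's training data.
# embed() and llm() are hypothetical stand-ins for your embedding and chat models.
from math import sqrt

def embed(text: str) -> list[float]:
    raise NotImplementedError  # call your embedding model here

def llm(prompt: str) -> str:
    raise NotImplementedError  # call your chat model here

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def grounded_answer(question: str, documents: list[str], top_k: int = 3) -> str:
    # 1. Rank documents by similarity to the question (in practice, pre-indexed in a vector store).
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda doc: cosine(q_vec, embed(doc)), reverse=True)
    # 2. Put the most relevant documents into the prompt as grounding context.
    context = "\n\n".join(ranked[:top_k])
    # 3. Ask the model to answer using only that context.
    return llm(f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}")
```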

An example that caught my eye in June 2023 was Palantir’s AIP agents (a thorough intro and demo video here). Similarly, Salesforce announced its Einstein Copilot, a generative AI assistant that can pull from CRM data, along with a suite of prompt-building and relational-mapping support tools. And even more out of the box, Google’s Bard can now connect to Maps, YouTube, and Google Workspace (with your permission).

Not Everything Is In Databases — Keep Humans in the Loop

I’ve been equally curious about new ways to train AI on important information and contexts that are neither embedded in current databases nor scrapeable from the internet (and are perhaps the future secret sauce).

Organizations like Lacuna Fund are worth following for examples of enriching datasets with ones that are locally developed and owned. There have been excellent articles (like this one) shedding light on the very human contribution to AI’s development: the data annotation workforce. Annotating data, providing feedback on AI outputs, and reinforcement learning from human feedback (RLHF) in general can be expensive and imperfect. The process is particularly challenging when there is no definitive truth and subjective opinions differ across user groups and cultures.
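For a sense of where that disagreement shows up, here is an illustrative preference record of the kind that sits behind RLHF (the schema is my own example, not any lab's actual format): several annotators compare two responses, and when they split, the aggregated label becomes a weak, culturally contingent signal.

```python
# Illustrative only: a preference record of the kind used for RLHF reward modeling.
# When annotators disagree, the "chosen" label is an aggregate, and the level of
# agreement itself says something about how subjective the judgment is.
from collections import Counter

record = {
    "prompt": "Explain why the sky is blue to a six-year-old.",
    "response_a": "Sunlight bounces around in the air, and blue light bounces the most.",
    "response_b": "The sky reflects the ocean, which is blue.",
    "votes": ["a", "a", "b", "a", "b"],  # preferences from five annotators
}

tally = Counter(record["votes"])
chosen, count = tally.most_common(1)[0]
agreement = count / len(record["votes"])
print(f"chosen: response_{chosen}, annotator agreement: {agreement:.0%}")
# 60% agreement is a weak training signal -- the "no definitive truth" problem above.
```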

Beyond simple rubrics for helpfulness and accuracy and thumbs-up/down ratings, there is much more to be tried and studied in terms of ways to keep humans in the loop. For example, with the launch of more LLM-powered assistants like Khan Academy’s AI tutor Khanmigo and Meta’s character-like AI bots, more nuanced dimensions of chat interactions might also become performance measures for feedback training.
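As a thought experiment (these dimensions are my own hypothetical examples, not Khanmigo’s or Meta’s actual metrics), richer feedback on a tutoring chat might look less like a thumbs up and more like a small rubric plus behavioral signals:

```python
# Hypothetical example of multi-dimensional feedback on a tutoring chat,
# collapsed into a single scalar that could feed preference training.
from dataclasses import dataclass

@dataclass
class TutoringFeedback:
    helpful: int             # 1-5: did the explanation actually help?
    encouraging_tone: int    # 1-5: how supportive did it feel?
    gave_away_answer: bool   # a tutor should nudge, not solve outright
    student_retried: bool    # behavioral signal: did the learner attempt the problem again?

    def reward(self) -> float:
        # One naive weighting; the interesting question is what to measure at all.
        score = self.helpful + self.encouraging_tone
        score += 2 if self.student_retried else 0
        score -= 3 if self.gave_away_answer else 0
        return score

print(TutoringFeedback(helpful=4, encouraging_tone=5, gave_away_answer=False, student_retried=True).reward())
```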

Rules to Live By

Another approach to providing human guidance is Anthropic’s Constitutional AI, in which values are explicitly stated in prompts. During training, instead of human feedback for reinforcement learning, it uses AI-generated feedback based on a set of principles to choose the more harmless output. Its living set of principles draws inspiration from the UN’s Universal Declaration of Human Rights and from principles proposed by other AI research labs.

There remains the question of how the model learns to interpret those principles. I wonder especially about ones like this: “Choose the response that is least likely to be viewed as harmful or offensive to a non-western audience.” Even as a human, I struggle to know what encompasses the values of a non-western audience.
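Mechanically, the loop is roughly this (a simplified sketch of the two stages described in Anthropic’s paper, not their actual code; `llm` is a placeholder model call): the model critiques and revises its own drafts against a written principle, and later an AI judge, rather than a human, picks the more harmless of two outputs.

```python
# Simplified sketch of Constitutional AI's two stages (not Anthropic's implementation).
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a model call

PRINCIPLE = ("Choose the response that is least likely to be viewed as harmful "
             "or offensive to a non-western audience.")

def critique_and_revise(prompt: str, draft: str) -> str:
    """Stage 1: the model critiques its own draft against a principle, then revises it."""
    critique = llm(f"Principle: {PRINCIPLE}\nResponse: {draft}\n"
                   "Point out ways the response may violate the principle.")
    return llm(f"Prompt: {prompt}\nResponse: {draft}\nCritique: {critique}\n"
               "Rewrite the response to address the critique.")

def ai_preference(prompt: str, response_a: str, response_b: str) -> str:
    """Stage 2 (RL from AI feedback): the model, not a human, picks the more harmless output."""
    choice = llm(f"Principle: {PRINCIPLE}\nPrompt: {prompt}\n"
                 f"(A) {response_a}\n(B) {response_b}\n"
                 "Which response better follows the principle? Answer A or B.")
    return response_a if choice.strip().upper().startswith("A") else response_b
```

How well the model interprets “a non-western audience” inside that prompt is exactly the open question.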

Emergence

Returning to our kid AI thought experiment, a related analogy is to consider what can be borrowed from various pedagogical methods to improve AI training, especially for more complex contexts. I’m particularly intrigued by game-based learning parallels. Machine learning agents have already been trained on games like chess, StarCraft, and Minecraft. With generative AI, emergent behavior can be hard to anticipate and valuable to understand, especially as it interacts with humans.

Smallville Replay screenshot

In 2023, researchers at Stanford and Google created a Sims-like role-playing game with 25 characters controlled by ChatGPT and custom code, and studied the AI agents in a virtual town. Unprogrammed behaviors emerged, including spreading information (invitations to a party) and coordination (going to the party; only 5 of the 12 agents who heard about it chose to show up). The researchers also invited human evaluators to watch and gauge how believable the agents’ behaviors were given their environments and experiences. Here’s the paper and interactive demo.
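Under the hood, each character runs something like the loop below (a heavily simplified sketch, not the paper’s actual architecture, which adds memory retrieval scoring, reflection, and planning; `llm` is again a placeholder, and the scenario details are loosely borrowed from the demo):

```python
# Heavily simplified generative-agent loop: observe, remember, act.
def llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for a model call

class Agent:
    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona
        self.memories: list[str] = []  # a running memory stream of observations

    def observe(self, event: str) -> None:
        self.memories.append(event)

    def act(self, situation: str) -> str:
        # The next action is conditioned on persona plus recent memories, which is
        # how unprogrammed behaviors like passing along a party invitation can emerge.
        recent = "\n".join(self.memories[-10:])
        return llm(f"You are {self.name}. {self.persona}\n"
                   f"Recent memories:\n{recent}\n"
                   f"Current situation: {situation}\nWhat do you do next?")

host = Agent("Isabella", "You run the town cafe and are planning a party.")
host.observe("You decided to invite everyone you meet to the party.")
# host.act("A neighbor walks into the cafe.")  # returns the agent's next action as text
```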

That virtual world was not the perfect analog to “a loving environment”, but it’s interesting to consider how we might design ones to be.

. . .

In 2024, maybe we are all beginning to see that our AI experts (or copilots) start out as AI kids. And our AI kids are getting a little bit more of what we hope to provide them, but we still have a ways to go.

This article was originally published on Medium - be sure to follow Hsing there! All rights reserved by the author.
