31 Comments
Alex Tolley:

The "AIs' that we use are primarily trained on a corpus of English language texts. I have read that because the training sets are very similar, the responses of various models is rather similar. I wonder how much of our culture becomes evident in the AI. If we train an AI on e.g., only Chinese texts, would such an AI reflect more of a Chinese cultural response? I would argue that as we are embedded in our culture, so must AIs be embedded in their training corpus. Could being more selective in the training set be used to tailor the responses of an AI to a desired cultural model?

Jessica Nordell:

Very interesting-- yes, they are in a sense "embedded" in culture insofar as they are trained on culture... but that's fascinating. If an AI was trained on an empathy-based cultural canon, how would it be different?

Jo Jern:

I clicked for the "cheese-powered baby" and earned a thought-provoking read. Thank you.

Jessica Nordell:

Ha, so glad you enjoyed it, Jo!

Sarah Ann Gilbert (she/her):

Thank you for this interview. It’s so interesting. I can’t wait to take this class and read this book. Plus the associated links and articles. I’m really interested in these questions.

Jessica Nordell:

Glad to hear, Sarah Ann!

Karen Salmansohn:

As someone who loves to write so I can ponder and think and feel the joy of creating (not just to be read), I cheered at the reminder that understanding isn’t always fast or flashy. Loved this interview!

Jessica Nordell:

Thank you, Karen. And YES to feeling the joy of creating. <3

Rafael Tesoro:

Many thanks for this interview!! I'm a fan of Melanie Mitchell's work and have particularly enjoyed, and learned a lot from, her book on AI and her podcast series on the topic.

Jessica's questions were great, covering many relevant aspects, and the answers did not disappoint :):).

Jessica Nordell:

Glad you enjoyed it, Rafael!

Benta Kamau:

This was such a thoughtful and grounded take, especially in how it frames intelligence not just as output accuracy but as experiential embodiment.

What stood out most for me was how clearly this piece surfaces something we often overlook in AI discussions: that intelligence is context-dependent, layered, and biologically bounded in ways models simply don't replicate - yet.

If anything, the piece made me reflect on a subtle next layer: while we're rightly awed by babies outperforming machines in certain areas, I think the more urgent question is how we embed that biological wisdom into the systems we build, not to mimic it, but to respect the limits of what tech should do.

That gap between imitation and alignment feels like a space we need more frameworks in, and I'm encouraged that conversations like this help open it up.

Jessica Nordell:

Thank you for your note, Benta! Yes- the fact that intelligence is context-dependent and biologically rooted is so fascinating. I also wonder how we might actually bring biological wisdom to bear on machine systems.

David Crouch:

Comprehensive, and yet said intelligibly without losing meaning or impact. Thanks!

Jessica Nordell:

You’re welcome, thanks for your note!

Thomas Hutt:

Thanks for this. I've been focusing on AI recently in my own Substack and really appreciated Melanie's insights. I hadn't thought about the open-source thing like that before. And like you, it strikes me as "bizarre" that people think human-level intelligence is possible without a human body to go with it. Yet it's seriously discussed and attracting many billions of dollars of investment. Has the world gone completely mad? BTW, as a writer, I also must say I appreciate your image of the "new houseguest, who was not exactly invited but..." Great description. It really does feel like that.

Jessica Nordell:

Thank you, Thomas! Glad you enjoyed it. :)

Stephen Schiff:

As a physicist and visual artist, I share the conflicting emotions of both groups. As a scientific tool, machine learning, which is in reality an extension of statistics, is focused in its application and involves a training and validation process that allows us to estimate errors. The LLMs of so-called AI are, and will continue to be, error-prone in unpredictable ways as long as the current paradigm is followed.

In the former case the motivation is to assist in the quest for knowledge, while in the latter it's all about money and the fragile egos of the "tech" oligarchy.
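The training/validation error estimation mentioned above can be sketched in a few lines. This is a hypothetical toy example (synthetic data, a least-squares slope fit), not anything from the interview: a model is fit on one portion of the data, and the held-out portion gives an honest estimate of its error.

```python
import random

random.seed(0)

# Synthetic data: y = 2x plus Gaussian noise
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in (i / 100 for i in range(100))]
random.shuffle(data)

# Split into 80% training and 20% held-out validation
train, val = data[:80], data[80:]

# Fit a slope through the origin by least squares (closed form)
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Mean squared error on the held-out set estimates generalization error
val_mse = sum((y - slope * x) ** 2 for x, y in val) / len(val)
print(f"fitted slope: {slope:.2f}, validation MSE: {val_mse:.4f}")
```

Because the validation points were never seen during fitting, the validation MSE is an unbiased estimate of how the model will behave on new data, which is exactly the kind of error bound that end-to-end LLM training does not readily provide per answer.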

Jessica Nordell:

Interesting, thank you for your comment, Stephen.

Kevin:

I feel like ChatGPT is already more empathetic than I am!

Often a coworker asks me a technical question and I'm just like, I'm busy, I'm tired, it's difficult, I can't help you right now. Try asking ChatGPT. And ChatGPT is always, "oh I'm eager to help."

Jessica Nordell:

Ha! I guess the deeper question is, is it real empathy if it's behavioral only?

Kevin:

As long as ChatGPT feels sorry for my coworker and helps fix the printer, instead of me having to do it, I'm happy to leave that question to the philosophers :P

jazzbox35:

Some theories of "understanding" treat prediction as a crude level of understanding, if you allow for definitions of understanding that are multifaceted.

Jessica Nordell:

Interesting! One dictionary definition of "understand" is to "perceive the significance, explanation, or cause of." And to "perceive" is "to become conscious of." So: can machines understand without consciousness?

jazzbox35:

This paper is really good. https://alumni.media.mit.edu/~kris/ftp/AGI16_understanding.pdf

As far as the connection to consciousness, personally I think a machine can understand without consciousness, at least in some sense.

jazzbox35:

A few years ago I did a survey of "understanding" for my meetup. Here's the link to a short summary. I'd recommend clicking on the link at bottom to the works of Dr Thorisson at the very end. He has some of the very few theories of understanding around. https://drive.google.com/file/d/1-tu91TRMa0ybsdNk3HSC82LugEFPGPnD/view?usp=drive_link

Jessica Nordell:

Thank you!

Joe Van Steen:

Thank you! It's always great to hear Melanie's intelligent analysis on a subject often filled with thoughtless babble. From my perspective, one major failing with the conversation on AI is linear thinking. Human intelligence is not linear. Pattern extrapolation is linear. Machines can do a lot of linear activities well beyond the capabilities of a person. But being human isn't a linear exercise. The real challenge for the future is learning to harness the linear competence of machines at solving abstract problems, as we have learned to use the linear competence of industrial machines to solve mechanical problems. The machines themselves can't and won't do that. Multidimensional-thinking humans can and hopefully will.

Jessica Nordell:

Joe, that's a fascinating point. Yes, the results of our human intelligence-related processes seem much more emergent than linear, especially when we consider all the ways our body signals influence our thinking. But will we fully appreciate the difference in order to make the best use of machines? I don't know. In response to Melanie's age-old question about whether we are just very complex machines of a kind, I believe we are qualitatively different from machines.

And I agree-- Melanie is such a voice of reason!

Georgia Patrick:

Terrific interview. Great lede and story. Thank you for introducing Melanie Mitchell. With so much coverage on AI and technology, I have to pause many writers to emphasize that there are more than 1,500 professions identified by the Bureau of Labor Statistics, and only a handful, as in 5, are technology professions. The rest of us may be AI users, but we are not programmers, nor do we know squat about the terminology in those articles. LLM? What's that? Claude? Wasn't he in a rock band the summer of 1969?

Peter Gaffney:

I'm intrigued by the insight that the AI we have today is very much a reflection of the capitalist system that produced it. Now I'm wondering what AI might look like if it came from a radically different human society.

Cyrus Dolph:

Ha, came across this just after reading a post about how it is now time to build and deliver AI at scale in the gov't. Good read. And Ted Chiang's essay was worth the click. I hope he gets back to writing some more awesome sci fi though :-)
