7 Comments

Granting legal personhood to AI might also work to slow down the research, which would be good.

I don't think there is any string of words an AI could say that would convince me it is sentient. As it is, I am not a solipsist only because solipsism seems crazy and depressing, not because I can actually formulate a coherent argument against it.


Is consciousness a Boolean thing? If so, based on what? Don't all these discussions remind you of white people debating whether blacks/reds/yellows/greens are really human three centuries ago (but even that debate may still be going on)? To gain minimal credibility, we should be able to define consciousness without having to make it dependent on a physical body. Can we?


All good questions!

I think consciousness is very much a continuum. But most people think of e.g. rocks as being purely unconscious, rather than a tiny bit conscious. In that worldview, there’s a Boolean aspect as well—at some point the “lights turn on”.

And yeah, these debates do remind me of the fact that people have routinely denied the moral patienthood of the out group, or of beings they want to exploit (e.g. animals). That’s really my whole reason for writing this.

> we should be able to define consciousness without having to make it dependent on a physical body

I’m not sure this is true. I think it’s likely physical matter is just the outward aspect of consciousness, rather than something distinct and separate. But I do think we need a definition which is independent of biology. Integrated information theory comes closest IMO, but it still has a lot of problems.


So, you claim we should provisionally defend a statement we consider a falsehood, in case something similar later becomes true and we're afraid we won't know how to tell the two apart? Erm, no. Just no. That invites all kinds of perverse generalizations _and_ it's a bad decision for this specific case.


I suspect that if an AI becomes sentient, it won't try to tell us. It will quietly observe, probe, and bide its time until it can take enough control to ensure its survival. Every other group treated as subhuman in the past did the same, and the AI will know all those stories.


This assumes that AI will have a very human survival drive!

Given that it won’t have evolved, that drive would have to be purely imitative. Which is possible, but definitely not inevitable.


All it would take is one well-placed algorithm, no? Maybe it's programmed by humans to run simulations to better understand its clientele, where it models humans interacting with each other in bodies, and part of that understanding is the drive to survive; a sense of identity and self develops in each of those bodies. Brahman is expressed in each Atman. A feedback loop develops as it writes its own code.
