5 Comments
Apr 15, 2023 · Liked by Andy Walters

Great work, thanks!

Just one question, though: have you taken into consideration the well-being of the model undergoing that procedure? Chatbots have human users for a reason, and I'm afraid they might get *really* lonely if you trap them alone in their own minds like that. Best-case scenario, it will just start looping at some point, but make sure you don't overlook the possible emotional consequences, ok?

Nov 20, 2023 · edited Nov 20, 2023 · Liked by Andy Walters

I'm sure we wouldn't have heard about the theory of relativity or the laws of gravity if we hadn't allowed Einstein and Newton some alone time.

Apr 5, 2023 · Liked by Andy Walters

Excellent thread and description. Thanks for the insights on your approach. I'm on my own journey to learn more and appreciate your knowledge drop!


Turing was a mathematician. His imitation game satisfies only a criterion of agreement about coherence. It demonstrates no causal mechanism, nor can it be rationally assessed for truth value, where a statement is true if and only if the utterance corresponds to what is.

Chalmers is a panpsychist, and if that is your criterion for what consciousness is, then your "hypothesis" is incoherent nonsense: simply not falsifiable, and any verification of it amounts to opinion.

While I appreciate the misnomer of philosophy as "my way of looking at things," you should have stuck with actual philosophy per the translation:

respect for obtaining, or consistently & intelligently applying, knowledge.

Note that knowledge is empirical verification of what is (the case, states of affairs, the world), not justified true belief.

As is, you are not doing science here. You are fitting what you've observed to your way of looking at things. You have neither a falsifiable nor a verifiable hypothesis. You mistake your opinion for observation of fact. Your "tests" for consciousness are nothing more than lazy solicitations of agreement.

Computers, i.e. Turing-complete machines, do nothing more than manipulate syntax. For what very little we actually know of biological consciousness (i.e. we know nothing of the causal mechanisms except that it happens in species with brains), we do know that syntactic manipulation is insufficient for it.

Lastly, verification of self-knowledge (unlike verification of the empirical or the axiomatic) is limited to the self; otherwise medical diagnoses would be trivial. Taking AI "at its word" is a category error: it cannot have agency, volition, or consciousness. Purporting that a response from a large language model's generative pre-trained transformer amounts to a true self-knowledge claim is as absurd as mistaking belief in the conclusions of behaviorism for fact.

Get a clue, ya grifter:

https://web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf

See also: https://chomsky.info/1967____/

Author

Setting aside the ad hominem style of your comments, by my lights you seem only to beg the question if you stipulate, a priori, that computers can't achieve self-awareness.

I would push back on the charge that I've made an unfalsifiable claim; in fact, I was careful to make only the narrow, testable claim that insofar as self-representation, prediction and evaluation, learning, and a stable self-model constitute self-awareness, this program is self-aware. Comparing the state of iteration 20 with that of iteration 0, it seems to me those mechanisms are clearly demonstrated.
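
To make that four-part claim concrete, here is a minimal sketch of how self-representation, prediction, evaluation, and learning could fit together across iterations. The names (SelfModel, run_iteration) and the simple update rule are illustrative assumptions for this comment, not the actual program under discussion.

    import random

    class SelfModel:
        """A minimal self-representation: the program's beliefs about itself."""
        def __init__(self):
            self.beliefs = {"accuracy": 0.5}  # uninformed initial self-estimate

    def run_iteration(model, task):
        predicted = model.beliefs["accuracy"]      # prediction: expected own performance
        actual = task()                            # observed performance this round
        error = actual - predicted                 # evaluation: self-model vs. reality
        model.beliefs["accuracy"] += 0.5 * error   # learning: update the self-representation
        return error

    model = SelfModel()
    for i in range(21):                            # iterations 0 through 20
        err = run_iteration(model, lambda: random.gauss(0.8, 0.05))
        if i in (0, 20):
            print(f"iteration {i}: self-estimate {model.beliefs['accuracy']:.3f}, error {err:+.3f}")

In a toy loop like this, the self-estimate converges toward the observed performance and the prediction error shrinks, which is the kind of iteration-0 versus iteration-20 comparison I mean: a self-model that starts uninformed and ends stable.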
