Re: Machine Consciousness & Newcomb's problem

From: Hal Finney <hal.domain.name.hidden>
Date: Tue, 18 Mar 2003 12:20:29 -0800

Saibal Mitra writes:
> This sounds very strange to me. Arguably, one could say that my brain
> is simulating me (where I identify myself with a particular algorithm).
> I would say that any physical process computing me has to have my
> consciousness. So, if someone starts to simulate me, there is a 50%
> probability that I will find myself in that situation. This assumption,
> which I consider reasonable, solves the causality paradox in Newcomb's
> problem.

I had made a similar argument last year on the Everything list, at
http://www.mail-archive.com/everything-list.domain.name.hidden/msg03775.html
(towards the end of the post).

An important point is that for Newcomb's problem to be a paradox,
you have to assume there is no causal link from your choice to the box
contents. Resolutions like this one find ways to re-introduce causality.
Once causality is granted, there is no longer any argument in favor of
choosing both boxes, and the paradox dissolves: you simply take the
one box.

You might still imagine a Newcomb problem where the omniscient being
makes his prediction without simulating you, or at least not in enough
detail for the simulation to be conscious. Perhaps he just runs a
crude simulation. Or maybe he has performed the experiment many times
with other people and has discovered a correlation between the decisions
people make in this situation and other psychological traits. It's
certainly conceivable that such techniques would allow the predictor to
be highly accurate without ever running a conscious simulation. In that
case the paradox still holds.
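
To see why the paradox survives, here is the standard expected-value
arithmetic as a short sketch, again under assumed numbers: the usual
$1,000 / $1,000,000 payoffs and an illustrative 99% predictor accuracy,
none of which appear in the original post.

    # Expected values with a merely statistical predictor of accuracy p.
    # Payoffs and the 99% accuracy figure are illustrative assumptions.

    p = 0.99
    ev_one_box = p * 1_000_000                 # opaque box filled iff predicted one-box
    ev_two_box = (1 - p) * 1_000_000 + 1_000   # opaque box filled iff mispredicted

    print(ev_one_box)   # 990000.0
    print(ev_two_box)   # 11000.0

Expected value overwhelmingly favors taking one box, while the
dominance argument, now valid because there genuinely is no causal
link, still says to take both. The two lines of reasoning conflict,
which is exactly the paradox.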

Hal