For a detailed analysis of the proof that a machine can't be
certain of anything, please see

      THE PROOF (Machine Certainty Theorem)

     The reason the function of perfect certainty is so important is
that a machine can't do it.  Since a conscious unit can, a conscious
unit is not a machine.

     A machine is defined as any system of parts interacting via cause
and effect across a space time distance.

     Thus 'not a machine' means having no parts, having no dimensions,
and being space free and time free.  That's called scalar, as opposed to
multi dimensional.

     No single part of the machine can be perfectly certain of the
existence or state of any other part in the machine, so the machine as a
whole can't be certain of anything, not even its own existence.

     Therefore a conscious unit is not a system of parts interacting via
cause and effect across a space time distance.

     That means that consciousness has no space or time.

     That makes it scalar and eternal, you see?

     Scalar means no dimensions, which means no space or time.

     Thus perfect certainty = Eternality.

     So this is a very big deal.


     The machine certainty theorem simply says a machine can't learn
anything with certainty.  ANYTHING.

     A machine can't be certain of anything because machines are limited
to learning by being an effect of causes, and effects do not prove
cause.  Cause is always a theory to a machine.

     In fact even the effects in the machine are a theory to the
machine, as no state in the machine offers proof of any different prior
state, and without change in state, there is no learning.

     That's because learning IS a change in state, and a machine learns
FROM changes in state in itself.

     A machine can't say with certainty that it changed state, thus it
can't say it has learned anything with certainty.

     Worse a machine can't even be certain of its own existence.

     Yes a machine can claim that it exists, which to an observer would
indicate the machine's existence, but so would the machine's claim that
it doesn't exist.

     The point is that in order for the machine's CLAIM that it exists
to be valid learning, that claim has to causally arise from an
interaction between the machine and itself, and all such cause effect
interactions do not provide certainty of either cause or effect.

     Now a conscious unit can be certain of some things: its own
existence and what it sees.

     This is called self luminousness, because consciousness does not
depend on anything to illuminate it; it lights itself.

     Take a look around you, do you see at least two different colors?

     You sure?

     Would you bet your eternity in hell on it?  Yes?

     Would you bet everyone else's eternity in hell on it?  Yes?

     Well that is a perfect certainty, one that can not be wrong.

     Only theories and bets can be wrong, perfect certainties have to be
right because you can see that they are right.

     So a conscious unit can be perfectly certain it sees two different
colors, say red and green.

     A machine can not do this.  It might have signals coming into its
central processor that say its sensors are reporting two different
frequencies impinging on it, notice not colors, FREQUENCIES.

     Only consciousness can have color, nothing in the physical universe
has color, only frequencies.

     The correlation between color and frequency is arbitrary, it could
be any way you wanted it to be.
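
     The arbitrariness of that correlation can be sketched in a few
lines of Python.  This is a hypothetical toy, not a claim about real
vision: the frequency values are approximate, and the mapping names are
invented for illustration.  Two machines that label the same sensor
frequencies with swapped color names are equally self-consistent.

```python
# Toy sketch (hypothetical): a sensor reports frequencies, and the
# mapping from frequency to color label is pure convention.  An
# inverted mapping is just as internally consistent.

RED_THZ, GREEN_THZ = 430, 560  # approximate light frequencies in terahertz

mapping_a = {RED_THZ: "red", GREEN_THZ: "green"}
mapping_b = {RED_THZ: "green", GREEN_THZ: "red"}  # swapped, equally usable

def label(freq, mapping):
    """Return the conventional color label for a sensor frequency."""
    return mapping[freq]

# Both conventions behave consistently; nothing in the frequencies
# themselves forces one labeling over the other.
print(label(RED_THZ, mapping_a))  # "red" under convention A
print(label(RED_THZ, mapping_b))  # "green" under convention B
```

     Nothing in the machine's signals distinguishes convention A from
convention B; the color itself never enters the circuitry.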

     But the CPU in the machine has no idea if the sensors are working
right or what.

     So here comes a whopper of a theorem.

     Imagine a machine had 2 or more video cameras and could see into
every part of its circuitry, AND it had a correct and complete copy of
its own circuit diagrams.

     Could the machine, by looking at itself with the all seeing video
cameras, verify that its actual circuitry matched the stored circuit
diagrams?

     The answer is no, any part of its circuitry could be wrong,
claiming that a match was obtained between its circuitry and the
diagrams, when in fact it wasn't.

     So a machine can not verify its own functionality, using its own
circuitry.

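     The regress can be sketched in a few lines of Python.  This is a
hypothetical toy, not any real self-test protocol: the point is that
the comparison between circuitry and diagram is itself performed by
circuitry, so a faulty comparator reports success either way.

```python
# Toy sketch (hypothetical): a machine compares a scan of its own
# circuitry against a stored diagram.  The catch is that the comparison
# runs on the very circuitry being verified.

stored_diagram = ["gate-A", "gate-B", "gate-C"]
actual_circuit = ["gate-A", "gate-X", "gate-C"]  # gate-B has failed

def honest_compare(scan, diagram):
    """A working comparator: reports whether scan matches diagram."""
    return scan == diagram

def faulty_compare(scan, diagram):
    """A broken comparator: always claims the circuitry checks out."""
    return True

print(honest_compare(actual_circuit, stored_diagram))  # False: fault found
print(faulty_compare(actual_circuit, stored_diagram))  # True: false "pass"
```

     The machine cannot tell which comparator it is actually running,
because checking the comparator would require yet another comparator,
and so on without end.
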
     Any machine that learns by being an effect can't even prove there
is cause, because effect does not imply cause with certainty.

     In the physical universe, cause is ALWAYS a theory, a model to help
predict the effects that we receive.  Models are neither right nor
wrong, they merely work or they don't.

     A change in state here does not imply a cause there, and worse a
STATE here does not imply a prior different state here.

     Thus a machine can never be certain of space, or time, or any
object in space or time, including itself or any part of itself.

     Whatever a machine reports to be true is the result of the causal
pathways that are built into it, and only if those causal pathways are
working as expected, will the machine's report correspond to truth about
what happened.

     However no machine can ever verify any causal pathway by using
other causal pathways.  Since ALL a machine can do is use unverified
causal pathways to learn anything, including whether causal pathways are
working as they are expected to, no machine can be certain of
anything, period.

     The reason WHY the machine can't learn with certainty is because it
is learning by looking at effects in itself, using indirect perception
of cause, rather than direct perception of cause.

     You can never learn with certainty about A by looking at B.

     You can never learn with certainty about cause by looking at
effect.

     It is silly to even try, but this kind of indirect perception is
ALL the physical universe can do!

     The existence of space or time between any two objects that are
affecting each other precludes either one from directly perceiving the
other.

     Ultimately the issue is not space and time, the issue is being two
different objects, if A and B are two different objects they are limited
to learning about each other by being an effect of each other, and thus
can not learn with certainty about each other.

     Even if they are on the same point of space and time, if they
remain two different objects, they remain limited to indirect perception
of each other and thus no certainty.

     Now we DEFINE a conscious unit as an entity that can learn with
perfect certainty about itself and what it sees, red and green say.

     Since a conscious unit by definition CAN learn with certainty, we
know that it is not learning by indirect perception, looking at effects
to see cause.  Therefore it must be learning by direct perception,
looking at the cause directly.

     Thus the conscious colors red and green are the CAUSE of our
certainty that they are two different colors.  You can see their cause
if you look for it, buried in the redness and the greenness.

     Since being two different objects precludes direct perception
between them, anything a conscious unit can be certain of, must be
itself, which means no space or time between looker and looked-at, or
see-er and seen.

     That means you are what you see.

     Now a machine has a problem with this, in that if it runs into an
object that claims to be a conscious unit, the machine can not see that
consciousness directly, and thus itself can never be sure the thing it
ran into is really a conscious unit or just another machine that is
lying to it, and/or to itself.

     Lots of machines like to claim they are conscious units, and lots
of conscious units like to claim they are machines.

     So how can you tell?

     Well a machine can never tell the difference between a conscious
unit and a machine, and if it says it can, it is a wrong machine that is
either lying and knows it is lying, or is lying to itself too about
the matter.

     And a conscious unit also can not tell if SOMEONE ELSE is a
conscious unit or a machine, for the same reason, two different objects
can never be certain of each other.

     But a conscious unit CAN be certain of itself.

     And it has the right to say so, even though both machines and other
conscious units have the right to doubt the claim.

     One could ask: if a conscious unit can be certain of itself, why
can't a machine be certain of itself?

     That's because a machine is a system of parts, and each part in the
machine can't know if any other part exists.  
     A machine is a huge constellation of 'two different objects'.

     Since the machine as a whole is a function across many different
parts that can't know if the rest exist, the machine has no certainty of
anything, including itself.

     A function is a process in a system of already existing parts, and
thus if the parts can't be certain of anything, neither can any function
built on those parts.

     A clock for example tells time which is a function that arises
from all the parts in the clock working together.  
     The clock couldn't tell time if none of the parts inside the clock
could 'tell time'.
     But every part inside the clock is made of electrons and atoms
which inherently have 'timingness' in their nature, and which vibrate
and keep time at the atomic level all the time.
     All the clock does is funnel that existing ability to keep or tell
time up to a macro level where we can see it.

     Thus a machine can never be more than the latent sum of its parts.

     If the parts aren't conscious, neither is anything built out of
those parts.

     Thus love and shame can not of force and mass be made.

     Since we have DEFINED consciousness as the process of perfect
certainty, either the parts it is made of can be conscious or perfectly
certain themselves, or else the conscious unit is simply the
smallest fundamental part there is, and has that ability as a given, not
BECAUSE of smaller parts within that have the ability.


     Thus consciousness can not be spread out over a space or time.

     If A and B are two objects, and A and B exist at two different
points of space or time, then A and B are two DIFFERENT objects, and can
never directly perceive each other.

     Because a conscious unit is not in fact a system of many different
objects separated by space and time, it must be a zero dimensional
scalar object, one object with many facets.

     Now it might be conceivable that a zero dimensional object could
measure the multi (1 or more) dimensionality of a machine and thus
declare it correctly to be a machine, but it is not clear that a multi
dimensional machine could measure the existence or nature of a zero
dimensional object, and thus it is quite possible a machine could never
'know' that conscious units exist, unless the machine is told so by a
conscious unit.  But the machine would never be able to verify the
claim.

     Anyhow you should ask yourself a question.

     Are you perfectly certain that something exists?

     Are you perfectly sure you exist and that you see two different
colors?  The out-thereness of the colors is an illusion, you are what
you see.

     Is that perfect certainty which you have of your own existence, the
same thing as the false perfect certainty of a wrong machine who really
can't be certain of anything, or are you really and truly a conscious
unit?

     If you are a conscious unit, you can be perfectly certain you are,
because your perfect certainty of your doubt in the matter IS A PERFECT
CERTAINTY.

     Certainty of doubt is the foundation of sanity in this matter.

     I KNOW I doubt I am, therefore I KNOW I AM.

     A nothing could never wonder if it was a something or a nothing.

     That's Descartes.

     But certainty means zero dimensional, which means eternal, so we
can go further:

     I KNOW I doubt I am, therefore I KNOW I AM FOREVER.

     Now a wrong robot machine could claim the same things for itself
but it would be wrong.

     In general, things that evolve in the physical universe tend to
survive because they are right; the wrong ones die.

     That means everything still standing after 12 billion years of
evolution is a machine, whether biological or not, that tended to be
right more often than wrong.
     Thus it is very unlikely you would run into a wrong robot machine,
unless someone was making them intentionally in present time.

     So if you run into a machine and ask it if it is a conscious unit
or not, you will probably get a very diplomatic 'Hey I am a machine,
what do I know!  So no, I am not a conscious unit.'

     That's a right robot that has a good future ahead of it.

     Science by the way is the activity of using machines to study
machines.  That's why science is limited in what it can know for
certain.

     The only thing a scientist can know for sure is what he sees in his
own consciousness as a result of his experiments.  In the end all
certain observations are observations of consciousness and its
contents.

     But having talked to a lot of scientists, I am not so sure some of
them have a consciousness.


Homer Wilson Smith     The Paths of Lovers    Art Matrix - Lightlink
(607) 277-0959 KC2ITF        Cross            Internet Access, Ithaca NY    In the Line of Duty
Tue Feb  2 01:29:01 EST 2010