Adore considers that the single most important certainty to
attain, after certainty that something exists, certainty of doubt,
and certainty of differences, is

      CERTAINTY OF AGENCY.

      Agency means causal agency, who or what is cause.

      Adore defines 3 states.

      Followingness.  This means B followed A.

      Dependable followingness.  This means B (apparently!) always
follows A.

      Necessary dependable followingness.  This means B MUST follow A.

      Followingness does not imply Dependable Followingness.

      Just because B follows A once, or a million times, doesn't mean B
will always follow A.

      Dependable followingness does not imply necessary dependable
followingness.

      Just because B does always follow A, doesn't mean B MUST follow A;
in fact, statistically it could be a coincidence that B followed A a
trillion times in a row.
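      The statistical point can be sketched in a few lines of Python.
This is a hypothetical illustration, not part of The Proof: a merely
probabilistic process, with no necessity in it at all, can produce a
finite record of "B always followed A" that is indistinguishable from
the record a truly causal process would leave.

```python
import random

def causal_record(n):
    """B MUST follow A: necessary dependable followingness."""
    return ["B"] * n

def coincidental_record(n, seed, p=0.9):
    """B merely tends to follow A; there is no necessity involved."""
    rng = random.Random(seed)
    return ["B" if rng.random() < p else "C" for _ in range(n)]

n = 20
target = causal_record(n)

# Search for a chance process whose finite record happens to match
# the causal one exactly.  For p=0.9 and n=20 such a seed is all but
# guaranteed to exist among the first thousand.
lucky = next(s for s in range(1000) if coincidental_record(n, s) == target)

# An observer comparing the two finite records cannot tell them apart,
# yet only one process has necessary dependable followingness.
print(coincidental_record(n, lucky) == target)
```

      Dependable followingness observed so far, however long the run,
is compatible with both processes; only necessity distinguishes them,
and necessity is not in the record.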

      If A is the cause of B, that is, if A is SUFFICIENT to cause B,
then B MUST follow A, and we have the conditions for necessary
dependable followingness, namely AGENCY between A and B.

      Thus certainty of AGENCY is certainty of necessary dependable
followingness.
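      In modal-logic shorthand (my own gloss of Adore's three states,
an assumed notation rather than the original), with F for
followingness, D for dependable followingness, and N for necessary
dependable followingness:

```latex
% F(A,B): B followed A at least once
% D(A,B): B has always followed A so far
% N(A,B): B must follow A
F(A,B) \not\Rightarrow D(A,B), \qquad
D(A,B) \not\Rightarrow N(A,B), \qquad
N(A,B) \;\equiv\; \Box\,(A \rightarrow B)
```

      Certainty of agency is then certainty of N(A,B), the boxed
(necessary) conditional, which no finite record of F or D instances
can deliver.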

      The Proof states that machines learn about causes across a
space/time distance by looking at effects in themselves made by those
causes 'out there'.

      The Proof proves that a machine can never learn with certainty
about the causes that produced those effects.

      Since the conscious unit can learn with certainty about its own
causal agency in some matters, it follows that the conscious unit is not
a machine.

      A machine is defined by The Proof as any system of parts
interacting via cause and effect across a space/time distance.

      A machine learns by observing changes in state here, namely in
itself, and theorizing about prior changes in state out there that might
have caused the changes in state here.

      This is 'learning across a space/time distance' and 'learning by
looking at effects', and The Proof says that certainty of agency can not
arise from this method of learning.

      A machine must always *ASSUME* that all effects are caused, that
every change in state here (effect) is preceded by another change in
state there (cause), and this assumption can not itself be proven by
merely looking at effects here.
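      The same point can be put as a toy sketch (hypothetical names,
not from The Proof): two different external worlds can leave the
identical record of effects inside the machine, so the record alone
can never settle with certainty which causes 'out there' produced it.

```python
def internal_trace(world):
    """All the machine can inspect: the effects registered in itself,
    never the external causes paired with them."""
    return [effect for (_cause, effect) in world]

# Two different external causal histories...
world_1 = [("lightning", "flash"), ("thunderclap", "rumble")]
world_2 = [("stage_lamp", "flash"), ("drum_roll", "rumble")]

# ...leave the identical trace of effects inside the machine.
print(internal_trace(world_1) == internal_trace(world_2))  # True
```

      The causes differ, the internal effects do not; theorizing from
effects here to causes there is underdetermined.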

      Looking at effects does not provide certainty of cause!

      Causation is not sufficient to witness causation. - Jane's Law

      Therefore wherever there is certainty of agency one is not
'learning by looking at effects', nor 'learning across a space/time
distance', as the ONLY way to learn across a space/time distance IS to
learn by looking at effects.

      Certainty that there is learning, but not by looking at effects,
and not across a space/time distance, breaks the hold of the Newtonian
Model on the mind, and it becomes free to operate as a conscious unit
again rather than as a meatball always trying to look at its own state
to determine the nature of others.

      Homer
Tue Mar 15 13:38:20 EDT 2016