Text and Explanations for Y2K Analyses, Part 2, Details

Some of the analyses for the Global Consciousness Project are quite complex, and a number of explorations have been done. We present here further explanatory texts to supplement the brief descriptions in the Y2K results pages, and to document some of the exploratory analyses.

Epoch Analysis and Odds Ratios

Dean Radin's first analytical examination of the Y2K data is summarized in two figures, one that shows the median absolute raw deviations for blocks of data centered on midnight in each time zone, and one based on these values converted to Z-scores and ultimately to an odds ratio, which is plotted against time. These analyses have been replaced, but remain of interest as one of the early steps toward an effective approach. The detailed description of the steps in Dean's analysis is both informative and interesting, and provides some insight into the search for an incisive strategy.


Explanation of my superposed epoch analysis (Jan 2, 2000):

Before examining the data, I presumed (1) that the turn of the millennium would produce
a few moments of high mental coherence that would be reflected in each time zone, (2)
that mental coherence may be reflected in physical systems as a reduction in noise or
entropy, (3) that there may be 24 periods of reduced entropy in all eggs, centered
around the stroke of midnight, and (4) that reduced entropy can be detected as a
reduction in variance among the egg values.

To look for these reductions in entropy, I took the following steps (see the sketch after this description):

a) Download all per-second raw egg data from 12/31/1999 11:30 to 01/01/2000 11:30.

b) Calculate the average absolute deviation (AAD) for these raw values, per second,
across all available eggs.  The AAD values will be our measure of "noise."

c) Create 24 one-hour blocks of AAD values, each block centered on the stroke of 
midnight in each time zone. 

d) Create a superposed epoch analysis by overlaying the 24 blocks of data.

e) Calculate the median for each of the 3600 seconds of blocked AAD data, call these
values MAAD.

f) Calculate a 5-minute moving average for the MAAD values.  Call this average
5M-MAAD.  The results are graphed in Figure 1.  The prediction is a drop in "noise" 
around the stroke of midnight, and we see that 5M-MAAD does drop.
The lowest MAAD value occurs 15 seconds after midnight.

g) Calculate a standard error for each 5M-MAAD value.  One-standard-error bars
are shown in Figure 1.

h) Find the grand mean of 5M-MAAD values from Figure 1, then create standard 
normal deviates (z scores) based on this mean and the observed 5M-MAAD averages 
and standard errors determined in steps f and g.  From the resulting z-scores, 
create odds against chance for the graph in Figure 1.  These values are 
plotted in Figure 2 (one-tailed).  It is clear that something unusual occurred at the 
stroke of midnight in all of the time zones combined.

NOTE: Instead of using AAD as a measure of variance among the egg values, 
one could use standard deviation.  And instead of taking the median of 
the AAD values, one could use average.  As expected, the results vary somewhat depending 
on what statistic one chooses.  With some experimentation I found that AAD and median 
optimized the final graphs, and thus these particular stats were selected post-hoc 
to enhance the resulting spike at the stroke of midnight.  But the basic approach used
here was planned in advance of examining the data.
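
To make the procedure concrete, the following is a minimal Python sketch of steps a-h. The data layout, the set of time-zone offsets (here -11 through +12, chosen so that every one-hour block fits inside the 24-hour window), and the conversion from one-tailed p-values to odds against chance are editorial assumptions for illustration; this is not the code actually used.

```python
import numpy as np
from scipy.stats import norm

def epoch_analysis(egg_data, window=300):
    """Steps a-h. egg_data: array of shape (86400, n_eggs), one row
    per second starting 1999-12-31 11:30 UTC (assumed layout)."""
    # (b) Average absolute deviation (AAD) across eggs, per second.
    row_mean = np.nanmean(egg_data, axis=1, keepdims=True)
    aad = np.nanmean(np.abs(egg_data - row_mean), axis=1)

    # (c) One-hour blocks, each centered on local midnight.
    # Assumed offsets -11..+12 so every block fits the 24 h window.
    blocks = []
    for zone in range(-11, 13):
        mid = int((12.5 - zone) * 3600)   # seconds into the window
        blocks.append(aad[mid - 1800 : mid + 1800])

    # (d, e) Superpose the 24 blocks; per-second median gives MAAD.
    maad = np.median(np.array(blocks), axis=0)

    # (f) 5-minute (300 s) moving average gives 5M-MAAD.
    kernel = np.ones(window) / window
    smoothed = np.convolve(maad, kernel, mode="same")

    # (g) Standard error of each windowed mean.
    var = np.convolve(maad ** 2, kernel, mode="same") - smoothed ** 2
    se = np.sqrt(np.maximum(var, 0) / window)

    # (h) z-scores against the grand mean, then one-tailed odds
    # against chance (odds = (1 - p) / p is one common convention).
    z = (smoothed - smoothed.mean()) / np.maximum(se, 1e-12)
    p = norm.cdf(z)                       # one-tailed: predicted drop
    odds = (1 - p) / p
    return smoothed, z, odds
```

As the note above observes, swapping AAD for standard deviation or the median for the mean would change only a line or two of such a calculation.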

Low vs High Population Time Zones

One of the analyses suggested by the previous New Year's data was a separation of the data according to the population of the time zones. The following is Radin's detailed description of an extension of the previous analysis (Jan 9, 2000), looking at the effect of population density and, by implication, the amount of attention and celebration that would occur in different time zones.

Here's my latest analysis, exactly as I did before, only now split by high-population
(HP) vs. low-population (LP) time zones.  The hypothesis is that all eggs would respond
to the stroke-of-midnight moment of coherence, but there would be different "amounts" of
coherence created as each time zone passed midnight, given that the world's population is
not distributed uniformly.

I've defined LP zones as -12, -11, -10, -9, -2, -1, +4, +6, +7, +11 based on 
examination of the world timezone map (www.worldtimezone.com) as compared to the world
population in different countries, which I estimated through examination of the 
US Census web site (that site has an extensive international population database).

Figure 1 shows the average median absolute deviation curves for the HP and LP zones,
and Figure 2 shows the one-tailed odds against chance for the z score of the
difference between the two curves.  I've used a one-tailed test because I assume that
the HP curve would drop below the LP curve at the stroke of midnight, reflecting a greater
negentropic change for the HP time zones.  The graph shows that the largest drop,
and highest odds against chance, occurs 9 seconds before midnight.

Ed May has brought to my attention that the two eggs in India are in a time zone
that runs on the half-hour with respect to GMT.  I have not adjusted this
analysis for those eggs.
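
The comparison described in this email reduces, at each second, to a z-score for the difference of two means. A minimal sketch follows, assuming hp and lp are the smoothed curves for the two groups with pointwise standard errors hp_se and lp_se; the function name, the independence approximation, and the odds convention are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# LP time zones (UTC offsets) as listed in the text above.
LP_ZONES = [-12, -11, -10, -9, -2, -1, 4, 6, 7, 11]

def hp_lp_odds(hp, lp, hp_se, lp_se):
    """Per-second one-tailed odds that the HP curve sits below LP."""
    # z for the difference of two approximately independent means.
    z = (hp - lp) / np.sqrt(hp_se ** 2 + lp_se ** 2)
    p = norm.cdf(z)         # one-tailed: HP predicted to drop below LP
    return (1 - p) / p      # odds against chance
```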

Corrected Analysis

Since the preliminary analysis on 2 January 2000, we have identified a conceptual error that effectively centered the analysis on the GMT (UTC) time zone. Although the result showed a striking spike at midnight, it did not properly represent Dean's original prediction. A corrected analysis addressing the intended question was completed on 23 January. This analysis has been thoroughly cross-checked, with the cooperative oversight of an independent observer, Ed May, and it includes comparisons with the results of exactly the same analysis applied to data from 1, 2, and 15 days after the Y2K rollover. Dean describes the new analysis, and discusses the impact of the exploratory mode, including the problem of multiple analyses, in the email accompanying the figures.


Subject: re-tested Y2K analysis
Date: Sun, 23 Jan 2000 01:43:39 -0800

Attached are two pictures of my latest Y2K analysis, performed from scratch with freshly downloaded data. You'll see results for Y2K along with identical analyses for Y2K +1 day, +2 days, and +15 days. The obvious graphical results are confirmed by permutation analysis, in which I randomized the per-second time sequence 1,000 times. The new analysis, using all eggs across all time zones, is very significant within a few seconds of midnight. Odds are above 80,000 to 1, two-tailed, as you see in the odds graph.

Before I tell you how I calculated these graphs (they are similar to before, using a sliding window), the discussion Ed and I have been having has sparked an interesting issue about how to interpret multiple analyses in exploratory mode. If I try, say, 10 different analyses looking for ways to optimize a spike at Y2K, and the final analysis shows odds of 80K to 1 at the moment of interest (i.e., a spike that peaks a few seconds from midnight), then a Bonferroni correction will still leave a healthy significance: 80,000 / 10 = odds of 8,000 to 1. But if I had run 10,000 analyses to find this spike, then after a Bonferroni correction the result would be null.

Ok then. In this case I ran 10 different variations of my previous analysis to find this spike. I basically followed some hunches, and I quickly found the results you see in the graph. The observed spike magnitude, combined with a spike time as close to midnight as observed, is over 4 sigma from chance, according to both the permutation analysis and comparison against the point means and standard errors from the epoch curve. You don't get anything like this using identical analyses applied to data from days +1, +2, and +15 from Y2K.

So, is the new analysis meaningful, or not? I think it is, because regardless of the complexity of the method used to achieve these graphs, it just shouldn't be so easy to get such a good result by chance. But maybe Ed is right, and I'm just a good analyzer? If that's true, then virtually all psi experiments may simply reflect how good the analyst is. Come to think of it, if psi-type information flow is in fact like I suspect it is, then this sort of anomalous analysis is analogous to clairvoyance: rather than randomly poking about an infinite analysis space, I can somehow jump into that abstract realm and select the right path to take. Oh, this is becoming too complex for my brain this late at night, so I'm going to sleep now!
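
The permutation analysis mentioned at the start of the email can be sketched as follows. The statistic extracted by peak_statistic (for example, the maximum z-score within a few seconds of midnight) and the function names are editorial assumptions, since the email does not spell out the exact test.

```python
import numpy as np

def permutation_test(aad, peak_statistic, n_perm=1000, seed=0):
    """Shuffle the per-second time sequence n_perm times and count how
    often a spike at least as extreme as the observed one appears."""
    rng = np.random.default_rng(seed)
    observed = peak_statistic(aad)
    exceed = sum(
        peak_statistic(rng.permutation(aad)) >= observed
        for _ in range(n_perm)
    )
    # p-value with the usual +1 correction; odds against chance ~ 1/p.
    return (exceed + 1) / (n_perm + 1)
```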
