Temperature reconstructions using proxies became an important public representation of climate science in the IPCC’s Third Assessment Report and have subsequently been the focus of a surprising amount of controversy.
These reconstructions have all been made with more-or-less “homemade” methods, without connecting them to the statistical literature as practiced outside paleoclimate and without a deep understanding of the statistical properties of the methods in the context of actual paleoclimate data sets. Few problems arise with any reconstruction method when it is applied to a “tame” data set, i.e., a network in which all or nearly all of the “proxies” actually are proxies for temperature. However, little to no attention has been paid to the statistical problems arising from “pathological” data sets: in particular, networks that are mostly random but contain a very small number of “magic” proxies, or networks built from inconsistent data. I will show how these issues connect to the visible problems that dog the field: non-robustness, spurious correlation, failed verification statistics, data snooping, and “hide the decline”. I will also discuss what steps are necessary to move the field forward.
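As a hedged illustration of the kind of pathology described above (not the speaker's own analysis), the following sketch simulates a network of proxies that are pure random walks with no temperature signal, then “screens” them by correlation with a target series over a calibration period. The screened average shows apparently impressive calibration skill even though every proxy is noise; all series names, cutoffs, and period lengths are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_years, n_proxies = 200, 100
calib = slice(150, 200)   # last 50 "years": calibration period
verif = slice(100, 150)   # earlier hold-out: verification period

# Synthetic "temperature" target: flat, then a warming trend in calibration.
target = np.concatenate([np.zeros(150), np.linspace(0.0, 1.0, 50)])
target += 0.1 * rng.standard_normal(n_years)

# Proxies that are pure random walks -- no temperature signal at all.
proxies = rng.standard_normal((n_proxies, n_years)).cumsum(axis=1)

# "Screening": keep (and sign-orient) proxies whose calibration-period
# correlation with the target exceeds an arbitrary cutoff.
r = np.array([np.corrcoef(p[calib], target[calib])[0, 1] for p in proxies])
mask = np.abs(r) > 0.5
selected = proxies[mask] * np.sign(r[mask])[:, None]

recon = selected.mean(axis=0)

# Calibration correlation looks impressive; out-of-sample verification
# correlation is typically near zero, since the proxies carry no signal.
r_cal = np.corrcoef(recon[calib], target[calib])[0, 1]
r_ver = np.corrcoef(recon[verif], target[verif])[0, 1]
print(f"{len(selected)} of {n_proxies} random proxies pass screening")
print(f"calibration r = {r_cal:.2f}, verification r = {r_ver:.2f}")
```

Because random walks trend strongly over short windows, many pure-noise series pass the screen, and averaging the survivors manufactures a series that tracks the calibration-period trend: a toy version of spurious correlation and failed verification statistics.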
Visit Coordinator: John Christy (UAH)