Assessing the Quality of Earthquake Catalogues: Estimating the Magnitude of Completeness and Its Uncertainty
by Jochen Woessner and Stefan Wiemer
Abstract We introduce a new method to determine the magnitude of completeness Mc and its uncertainty. Our method models the entire magnitude range (EMR method), consisting of the self-similar complete part of the frequency-magnitude distribution and the incomplete portion, thus providing a comprehensive seismicity model. We compare the EMR method with three existing techniques, finding that EMR shows a superior performance when applied to synthetic test cases or real data from regional and global earthquake catalogues. This method, however, is also the most computationally intensive. Accurate knowledge of Mc is essential for many seismicity-based studies, and particularly for mapping out seismicity parameters such as the b-value of the Gutenberg-Richter relationship. By explicitly computing the uncertainties in Mc using a bootstrap approach, we show that uncertainties in b-values are larger than traditionally assumed, especially when considering small sample sizes. As examples, we investigated temporal variations of Mc for the 1992 Landers aftershock sequence and found that it was underestimated on average by 0.2 with former techniques. Mapping Mc on a global scale, Mc reveals considerable spatial variations for the Harvard Centroid Moment Tensor (CMT) catalogue (5.3 <= Mc <= 6.0) and the International Seismological Centre (ISC) catalogue (4.3 <= Mc <= 5.0).
Earthquake catalogues are one of the most important products of seismology. They provide a comprehensive database useful for numerous studies related to seismotectonics, seismicity, earthquake physics, and hazard analysis. A critical issue to be addressed before any scientific analysis is to assess the quality, consistency, and homogeneity of the data. Any earthquake catalogue is the result of signals recorded on a complex, spatially and temporally heterogeneous network of seismometers, and processed by humans using a variety of software and assumptions. Consequently, the resulting seismicity record is far from being calibrated, in the sense of a laboratory physical experiment. Thus, even the best earthquake catalogues are heterogeneous and inconsistent in space and time because of networks’ limitations to detect signals, and are likely to show as many man-made changes in reporting as natural ones (Habermann, 1987; Habermann, 1991; Habermann and Creamer, 1994; Zuniga and Wiemer, 1999). Unraveling and understanding this complex fabric is a challenging yet essential task. In this study, we address one specific aspect of quality control: the assessment of the magnitude of completeness, Mc, which is defined as the lowest magnitude at which 100% of the events in a space–time volume are detected (Rydelek and Sacks, 1989; Taylor et al., 1990; Wiemer and Wyss, 2000). This definition is not strict in a mathematical sense, and is connected to the assumption of a power-law behavior of the larger magnitudes. Below Mc, a fraction of events is missed by the network (1) because they are too small to be recorded on enough stations; (2) because network operators decided that events below a certain threshold are not of interest; or (3) in case of an aftershock sequence, because they are too small to be detected within the coda of larger events.
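One simple way to picture how Mc is extracted from a catalogue is the maximum-curvature idea used in earlier studies (Wiemer and Wyss, 2000): take Mc as the magnitude bin with the highest event count in the non-cumulative frequency-magnitude distribution, i.e., the point where incompleteness starts to bend the distribution away from the power law. A minimal sketch, not the EMR method of this paper; the function name and default bin width are our own illustrative choices:

```python
import numpy as np

def mc_max_curvature(mags, bin_width=0.1):
    """Estimate Mc as the lower edge of the magnitude bin holding the
    most events (the 'maximum curvature' of the non-cumulative FMD)."""
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)]
```

On a complete power-law catalogue the most-populated bin is the lowest one; incompleteness below Mc pushes the peak up to Mc, which is what this estimator exploits.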
We compare methods to estimate Mc based on the assumption that, for a given volume, a simple power-law can approximate the frequency-magnitude distribution (FMD). The FMD describes the relationship between the frequency of occurrence and the magnitude of earthquakes (Ishimoto and Iida, 1939; Gutenberg and Richter, 1944): log10 N(M) = a - bM , (1)
where N(M) refers to the frequency of earthquakes with magnitudes larger than or equal to M. The b-value describes the relative size distribution of...
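For events above Mc, the b-value of equation (1) is commonly obtained by maximum likelihood (Aki, 1965), with Utsu's correction for magnitude binning. A minimal sketch under those standard assumptions; the function name and defaults are illustrative, not taken from this paper:

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with Utsu's binning
    correction: b = log10(e) / (mean(M) - (mc - dm/2)).
    Only events at or above the completeness magnitude mc are used;
    dm is the magnitude bin width (set dm=0 for unbinned magnitudes)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
```

Because the estimate depends directly on mc, any underestimate of Mc contaminates the sample with missed small events and biases b low, which is why the paper stresses computing Mc and its uncertainty explicitly.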