Forensic Applications Consulting Technologies, Inc.





Radon Risk and Cancer




Caoimhín P. Connell
Forensic Industrial Hygienist


Introduction

A large portion of the general population is under the impression that the scientific community has concluded that exposure to indoor radon conclusively causes cancer, and that there is scientific consensus on this "fact." Most people are not aware that no conclusive study has ever demonstrated that exposure to indoor radon, at the concentrations seen in the overwhelming majority of houses, increases the risk of cancer by any amount. In fact, in the larger and better studies, what we see is that the risk of cancer actually goes down with increasing radon concentrations, up to a critical elevated level at which the risk then begins to rise; and those kinds of radon levels, where the risk increases, are virtually never seen in houses.

In fact, even the EPA, buried deep within its risk estimates [3], very clearly reports that it has no evidence that the risk increases, and that even its own studies conclude that as radon concentrations in homes go up, lung cancer rates go down.

The elevated estimates of risk typically reported to the general population have come exclusively from (discredited) mathematical models, supported by a billion-dollar radon industry (including academia), for which there is very little to no actual scientific support.

The purpose of this discussion is to demonstrate that what people think a scientific paper says, and what the scientific paper may actually say, may in fact be two very different things.

Some visitors who have read our general web discussion on radon, wherein we discuss some of these issues, ask about the validity of using risk assessments (mostly by the US EPA) that were published ten or more years ago. Regardless of the date of the risk assessment article, radiation remains the same, human physiology is still the same, and no new profound advances have been made in Euclidean mathematics. As such, although our general web discussion on radon references EPA documents written perhaps ten years ago, the gravamen of the original discussion remains essentially the same and current with today's thinking.

Some people in the radon industry reference newer articles, arguing that newer scientific studies are available that conclusively prove, beyond any doubt, that exposure to residential radon, at concentrations normally seen in houses, causes cancer. As an example, one individual, who earns his living by selling radon testing services (and therefore has a vested interest in promoting the notion that indoor radon causes cancer), pointed to three newly published studies, claiming the new studies prove that residential radon exposure causes cancer. FACTs agreed to provide a critical review of the papers, and we have presented those reviews here.

The reviews presented here have been written for a non-scientific audience. As such, in some cases, the specific language used may lack our normal precision for the sake of clarity. As will be seen, none of the papers support the argument that exposure to radon, as typically seen in residences, poses a demonstrable or significant threat to public health.

The presentation of these papers as supposed "proof" is a symptom of our society, wherein information is presented by people who have never actually read the scientific data or studies and have no real idea what is actually being said by the technical authors. We see this happening more and more, especially in the area of environmental issues such as global warming, wherein the general public has been duped into thinking that there is a scientific basis for the proposition, and a scientific consensus, when there is no science to support the claims, only a lot of emotion created by heavy-handed politics. As such, a large fraction of western society presumes that if their government has made a policy regarding the risks of some particular event, then by that very fact there necessarily must be credibility in the claims.

David Schoonmaker, editor of American Scientist, the journal of Sigma Xi, the century-old scientific research society, addressed this issue very succinctly in a December 2009 editorial [2] when he stated:

"The intersection between science and policy has been the scene of more than a few fender benders. For that reason, many scientists avoid that part of town altogether, preferring instead to let the policy makers seek out their findings and interpret them as they see fit."

"Fitness," unfortunately, usually involves a large factor of self-interest, to the exclusion of objective facts. Again, one need only witness the "global warming" fiasco for evidence of this concept in action.

Paper Number 1

"LUNG CANCER RISK AMONG FORMER URANIUM MINERS OF THE WISMUT COMPANY IN GERMANY" by Brüske-Hohlfeld, I; Rosario, AS; Wölke, G; et al. Health Physics, March 2006, Volume 90, Number 3.

Overall, this study, published in 2006, appears to have been carefully thought out and properly vetted, and the authors used appropriate methodologies for the task at hand. The authors did an excellent job; the work carries considerable credibility and is a valuable reference.

The authors appear to have carefully reviewed their study plan prior to conducting the study, and placed considerable effort into attempting to identify gross potential confounders. The authors explain the identified confounders and how they dealt with those confounders. The authors also explain their concerns with several aspects of the “selectional bias” associated with their study. (Selectional bias is a standard epidemiological concept that is associated with virtually all such studies). Although the authors identified and discussed several biases associated with their study, we have only addressed the more pertinent ones here.

The conclusions by the authors appeared to be well supported by their observations, and well within the limits of their observations, errors, biases, and confounders as identified by the authors themselves and spelled out in the paper. Overall, the methods and practices employed by the authors were within acceptable scientific parameters and acceptable epidemiological practices.

The conclusions of the authors are consistent with known science, consistent with our own opinions at Forensic Applications, Inc. and consistent with what we have discussed in our lectures for many years.

The authors tested the following hypothesis: "Uranium miners (especially those who were pressed into forced labor in horrific underground mine conditions) do not have a higher lung cancer rate than other members of the same mining company who did not work underground in those conditions."

The authors were unable to find the evidence needed to support that hypothesis, and they were therefore forced to reject it in favor of the alternative, namely: "Uranium miners (especially those who were pressed into forced labor in horrific underground mine conditions) do have a higher lung cancer rate than other members of the mining company who did not work underground in those conditions."

Contrary to the argument that the study proves that residential radon causes cancer, the study in no way addressed the general risks of lung cancer associated with exposure to radon at concentrations normally seen in houses, nor did it address the risks of lung cancer for occupants of houses who may be exposed to radon. The study cannot in any way be used as a reference when discussing exposures to radon at concentrations normally found in homes, or indeed even if those concentrations were as high as those found in underground uranium mines, since the parameters of the study and the selectional bias employed by the authors (as honestly described by the authors themselves) were geared to addressing only the stated hypothesis, and no other question.

The study looked at two groups: the cases (the exposed group) and the controls (the "un-exposed" group). The cases consisted exclusively of males who had been pressed into slave labor at an underground uranium mine, many of whom continued on at the mine, under harsh Soviet control, as employees after the slave labor had ended.

The study was a type of epidemiological investigation known as a "retrospective case-control study." The study also fits into an epidemiological classification known as an "ecological study," which traditionally has a lower intrinsic confidence than other kinds of epidemiological investigations. The reasons for this are multi-faceted; not least of all, the authors themselves expressed low confidence in their exposure data (as described later).
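
For readers unfamiliar with how a case-control study quantifies risk, the following is a minimal sketch of the standard arithmetic: the odds ratio computed from a 2x2 table of exposure versus disease. The numbers here are purely hypothetical illustrations of my own, not data from the WISMUT paper.

    import math

    # Hypothetical 2x2 table for a retrospective case-control study
    # (illustrative numbers only, not from the paper under review):
    #                          exposed   unexposed
    #   cases (lung cancer)    a = 120   b = 80
    #   controls               c = 90    d = 110
    a, b, c, d = 120, 80, 90, 110

    # The odds ratio (cross-product ratio) approximates relative risk
    # when the disease is rare in the source population.
    odds_ratio = (a * d) / (b * c)

    # Approximate 95% confidence interval (Woolf's method):
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")

Everything downstream of a table like this depends on the exposure classification being right, which, as described below, is precisely where this study had to resort to guesswork.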

At the heart of all such studies is what is known as a “dose-response evaluation.” In risk modeling, we attempt to look at the biological endpoint (the response) to any given dose. This can be very difficult if one doesn’t know what the dose was in the first place. Due to the fact that much of the mining work in the study was performed under forced labor, and then later, under the notoriously harsh conditions of industrializing Soviet power, no exposure data was available to the authors for their early “employees” and very little actual exposure data was available to the authors after that. The authors state:

“The early period of mining at WISMUT during 1946–1955 was characterized by dry drilling underground and no artificial ventilation, which probably led to a very high exposure to dust and alpha radiation (Autorenkollektiv 1999), but no measurements of radon concentrations exist for this period. The [exposures] were assessed retrospectively based on the earliest available radon gas measurements from 1955 and taking into account uranium deposit and delivery, ventilation, and mine architecture over time.”

The authors then go on to document that it was not until 1966 that regular measurements of radon decay products were introduced. Therefore, the authors were forced to guess what those exposures may have been and then use those guesses in their model. However, all comparative data upon which those guesses were to be made are from air monitoring that was performed in modern mines under conditions of modern mining techniques as opposed to forced labor with no ventilation systems in place.

Guesswork like this is not unusual, and is not the fault of the authors, and the authors have made do with what they had to work with. However, the guesswork obviously reduces the confidence in the dose-response evaluation. Here, the authors reference the study on the Colorado Plateau mining cohort and state:

“An increased risk of lung cancer associated with radon and its progeny among underground miners is well established. This evidence, however, is nearly completely based on cohort studies, which often suffer from lack of information on potential confounding factors such as smoking or information on occupational exposures outside the employment at the uranium mine.”

What they are referring to are the problems the United States Environmental Protection Agency (EPA) saw in its own data on the mining cohort when it (the EPA) said:

Exposure in the U.S. cohort is poorly known; cumulative WLM (CWLM) are calculated from measured radon levels for only 10.3 percent of the miners... and guesswork is used for about 53.6 percent of the miners. [1]

So this study experienced difficulties similar to those of the studies upon which it was built. Additionally, a noted confounder in this study was the lack of exposure information for the cases (as opposed to the controls), which the authors describe thusly:

For most study participants (97.6% of controls and 52.3% of cases), a complete working history could be gathered…

Therefore, for roughly half of their cases, the authors acknowledge that the observed lung cancer could have been due to something completely unrelated to radiation exposure in the mines. Their uncertainty extended even to their controls, wherein the authors expressed concern about the validity of their own decision to intentionally exclude 14 cases of cancer in the control group (by so doing, they explain, they may have artificially increased the apparent cancer risk due to the mine radiation). However, they made the exclusion anyway, partially because their study had already surprised them in that the risks they found among the miners were lower than they expected.

Also unexpected was their observation that the cigarette smoking component in their model DECREASED the risk of lung cancer. (That is, all other factors being equal, the cases who smoked cigarettes had a lower risk of cancer.) The authors don't for a moment argue that this is actually the case; rather, they are honestly reporting the observation. Nor do I think the observation is something that can be generalized to the case group; rather, I suspect that the observation emerged as an "artifact." Artifacts are observational oddities or anomalies for which one cannot give a suitable explanation within the limits of the model. Frequently, the artifact is a product of the model itself (since all such studies necessarily contain selectional and other biases). It would be rather like developing a model to study the average speed of meteors entering Earth's atmosphere and finding one meteor whose speed is zero (i.e., it just hangs in the atmosphere). You know it can't be right, but you honestly report the observation, since that is what the model showed, and you are more interested in credibility than in trying to prove a point (unlike the global warming proponents, who ignore the objective facts, including that their models have failed to reproduce past climatic events and that their future predictions have not materialized!).

The authors of the radon study used a type of model called a "linear, no-threshold, dose-response curve." No serious epidemiological or toxicological investigator actually believes for a moment that such a model represents reality in these kinds of studies, but we use it very often since it greatly reduces the complexity (and the mathematics) needed to test hypotheses.

In this case, the authors do not profess that the linear model was appropriate; rather, they honestly state:

Under the assumption of a linear risk model, there was a significant increase in the relative risk of 0.10 per 100 WLM after adjusting for smoking and asbestos exposure.
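
To put that figure in concrete terms (the arithmetic below is mine, not the authors'), an excess relative risk of 0.10 per 100 WLM under a linear model means the relative risk at a cumulative exposure E is

    \mathrm{RR}(E) = 1 + \beta E, \qquad \beta = \frac{0.10}{100\ \mathrm{WLM}}, \qquad \text{so that } \mathrm{RR}(800\ \mathrm{WLM}) = 1 + 0.10 \times 8 = 1.8

That is, the model predicts an 80% increase over baseline risk only at the enormous exposure of 800 WLM.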

Then, throughout their discussion, the authors honestly and forthrightly point out places where their linear model fell apart and the data argued against the validity of the linear dose-response curve.

For example, the authors quite openly and forthrightly state:

Most cohort studies of miners used a linear risk model for data analysis (NRC 1999; Lubin and Boice 1997), which also provided a good description of the exposure-response relationship in this case-control study. However, one should bear in mind that the categorical risk estimates in our analysis did not show a linear increase below 800 WLM. Whether this relationship is real or artificially induced by errors in exposure quantification (Birkett 1992) or selection bias is hard to tell.

(For those of you who don't know, 800 WLM is a HUGE amount of radiation, unlikely to be seen in more than perhaps a handful of houses on the entire North American continent.)

Regarding the linearity, the authors also state that they did not see a statistically significant increase in lung cancer until the presumed or demonstrated exposures EXCEEDED 800 WLM.
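
To give a sense of scale (my own back-of-the-envelope arithmetic, using the commonly cited assumption that a year of typical occupancy at the EPA action level of 4 pCi/L accrues on the order of 0.5 WLM):

    \frac{800\ \mathrm{WLM}}{0.5\ \mathrm{WLM/yr}} \approx 1{,}600\ \mathrm{years}

In other words, under that assumption, an occupant of a house sitting right at the action level would need on the order of sixteen centuries of continuous residence to accumulate such an exposure.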

Additionally, the authors also explain that in their study:

Lung cancer risk declined with time since exposure, except for exposures received 45 or more years ago.

This means that the linear model fell apart, since another name for the linear, no-threshold, dose-response model is the "one-hit theory of carcinogenicity," which, if valid, means that there is essentially no time component and the risks should depend only on the cumulative exposure. Yet, as the authors explain, they saw a time component associated with the risk: for equal exposures, the risk dropped as the time since the exposure ended increased.
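
Formally (my notation, not the authors'), the one-hit model ties the excess relative risk to cumulative exposure alone:

    \mathrm{ERR}(D) = \beta D, \qquad D = \int e(t)\,dt

There is no term for time since exposure anywhere in that expression, so an observed risk that decays with time since exposure is, on its face, inconsistent with the model's form.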

However, the question of linearity really becomes moot when one remembers that the actual exposures in the earlier part of the study were completely unknown anyway, and the fact that the standard deviation (the error) of the known (measured) exposures was also HUGE (the actual statistics are given in the report).

In conclusion:


The authors found an increased risk of lung cancer among uranium miners who worked underground in harsh conditions when exposures, as defined in their study, exceeded 800 WLMs. The finding is entirely consistent with what we have observed for years and argued in the past.

The authors conclude that smoking resulted in DECREASED risk of lung cancer in their case cohort. Although the authors did not explicitly call the observation an artifact, I suspect that it is, and I suspect the authors themselves would concur.

The study found that linearity in the dose-response curve did not withstand the rigors of reality. I concur and this is consistent with what I have maintained during my lectures.

In short, the authors found that if you were a slave, forced to work in a Soviet underground uranium mine with no ventilation, you had a higher probability of contracting lung cancer. I don't have a problem with that conclusion.

Paper Number 2

"Radon in homes and risk of lung cancer: collaborative analysis of individual data from 13 European case-control studies" by Darby, S; Hill, D; Auvinen, A; et al.

The article was sent to me as a PDF file downloaded from www.bmj.com. A home inspector submitted this file to me as an example of a scientific paper that supposedly proved that residential exposure to radon caused cancer.

In the hierarchy of epidemiological studies, the type of "study" that traditionally carries the least weight, and the least credibility, is known as a "meta-analysis." Meta-analyses are just as the name implies: "meta-" denotes a change in position; behind, or after. Meta-analyses are reviews of collections of other people's work, wherein the author has no firsthand knowledge of the work and no quality control over what was originally published. Very often (as was the case here), the author does not explore confounders or bias. A meta-analysis can be hammered out in an afternoon since, often, no new research goes into the process.

The paper presented for my review, however, is even lower on the "weight of evidence" scale in that not only was it a meta-analysis but, included in the collection of reviewed studies was at least one other meta-analysis. In fact, the paper presented for our review by the critic wasn’t even a real meta-analysis at all, but rather it was an abstract of a meta-meta-analysis.

The paper, from a scientific weight-of-evidence perspective, was like paying a hundred bucks for a ticket to a piano recital and, upon arrival, learning that the pianist is the kid next door. After sitting through his third attempt at "Twinkle, twinkle, little star," one grows suspicious that the little guy has more aptitude in choppin' than Chopin. This paper is a good example of bad science, the likes of which is seldom seen outside of the nonsense issued by R. William Field, University of Iowa (who actually goes so far as to adopt fake names to support his unscientific claims and laud his own greatness).

The nature of the paper can be thought of less as a study and more as a high-school-style "book report." The structure of the paper, complete with twenty-six authors for a three-page review, suggested that the paper was probably a class project wherein the first author was probably the university professor, the middle 24 authors were his students, and the last author, listed merely as "R Doll," was (I am guessing) the preeminent epidemiologist, Sir William Richard Shaboe Doll.

I would speculate that the students alternate the order of their names and repetitively submit the same paper, with minor changes, to various journals, so that each gets a chance to be one of the top authors. I will also speculate that the last author, R Doll, never even read the paper.

As I mentioned, the paper is essentially a “book-report” that looked at 13 European studies on residential radon exposure and cancer risk; NONE of which, according to the authors, supported a link between radon exposure in homes and lung cancer. So it is interesting that the submitting critic selected this particular paper thinking it supported his position when the overwhelming information referenced in the paper actually opposes his position.

The students conclude that although the 13 studies they selected do NOT show an appreciable hazard from residential radon, they contend (but don't explain why they so contend) that the studies fail to show an appreciable risk only because, individually, they lack the statistical power to do so:

Studies to estimate directly the risk of lung cancer associated with residential radon exposure have been conducted in many European countries. Individually, none has been large enough to assess moderate risks reliably (2)(3).

In this opening statement, the students (the authors) acknowledge two things: 1) their foundational studies don’t support an appreciable risk of cancer from residential exposures to radon, and 2) if the risks are there, they are thought to be "moderate."

The students then go on to say:

Greater statistical power can be achieved by combining information from several studies, but this cannot be done satisfactorily from published information.

And yet, this is precisely what the students then go on to attempt. In truth, the comment lacks precision, since sometimes, under some circumstances, the data from some studies may be combined to gain greater statistical power. However, the students do not explain why that is the case here, and the alternative statement is equally true: "Sometimes, under some circumstances, the data from some studies may be combined to multiply confounders, and therefore obtain less statistical power." In the absence of knowing why the students believe their foundational studies are appropriate candidates for combination, the whole thing remains a big mystery.
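
For what it's worth, the standard mechanics behind "combining information from several studies" are not mysterious. Below is a minimal fixed-effect, inverse-variance pooling sketch; the effect estimates are hypothetical values of my own invention, not numbers from the paper.

    import math

    # Hypothetical log relative risks and standard errors from four
    # studies (illustrative values only, not data from the paper):
    log_rr = [0.05, -0.02, 0.10, 0.01]
    se     = [0.08,  0.06, 0.12, 0.07]

    # Fixed-effect inverse-variance pooling: each study is weighted
    # by the reciprocal of its variance, so more precise studies
    # count for more.
    weights = [1 / s**2 for s in se]
    pooled = sum(w * x for w, x in zip(weights, log_rr)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"pooled RR = {math.exp(pooled):.3f} "
          f"(SE of log RR = {pooled_se:.3f})")

Note what the arithmetic does and does not do: pooling shrinks the standard error (the "greater statistical power"), but it does nothing whatsoever to cancel biases or confounders that the underlying studies share; those simply propagate into the pooled estimate.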

A foundational statement for the students is:

Air pollution by radon is ubiquitous. Concentrations are low outdoors but can build up indoors, especially in homes, where most exposure of the general population occurs. The highest concentrations to which workers have been routinely exposed occur underground, particularly in uranium mines. Studies of exposed miners have consistently found associations between radon and lung cancer.

And the tacit association is that these exposures and risks to miners can be extrapolated to residential exposures and risks. But if we go and look at the students' references, we see that the references do not really support what the students think they are saying. In fact, the students then immediately begin to back-pedal and say:

Extrapolation from these studies is uncertain but suggests that residential radon, which involves lower exposure to many more people, could cause a substantial minority of all lung cancers.

At Forensic Applications, Inc. the role of some of our personnel involves conducting forensic interviews during criminal investigations. During those interviews and interrogations, our personnel are trained (and certified) to detect deceptive behaviors.

Look at the language used by the students in the report: the statement is fraught with an excessive use of qualifiers, which are used to divert attention away from the gravamen of the argument. Further, the students state "a substantial minority of all lung cancers." If this language were used in an interview, the forensic investigator would conclude that the interviewee exhibited classical deceptive speech patterns (which is not necessarily to say that they are "lying").

But what the students don't say is that the uncertainty is so large that those same studies also suggest there is NO appreciable risk from residential radon. (And what about the 13 studies they used for their meta-analysis, which, according to the students, show NO appreciable risk?) This paper should make a good reference for high school science teachers as a bad example of scientific work and reporting.

Another point the students overlooked was the question of cohabitating matched pairs, wherein one (a case) comes down with lung cancer, but the other, matched for residency (a control?), doesn't. How come? This is not addressed in the paper.

The students, however, do go on to explain that they didn't really have a lot of actual measurements for some of their foundational studies, so they decided to make up some data!

One would have to wonder how the original authors upon which the meta-analyses were based would respond to someone making up data for their original studies. If the inclusion of the made-up data were valid, why didn’t the original authors include such data or estimates in their original papers?

To make up their data, the students "estimated" exposures. However, they didn't really explain the validity of their "estimates" or how they derived them, and the explanation they did give sounds more like they created data or "guessed," rather than "estimated." And so they guessed what a lot of the exposures were. And yet, in their paper they consistently refer to the made-up exposures as "measurements," when in fact the created data were not measurements of anything at all.

The students don't explore or explain the possible selectional bias of each of the 13 studies and how multiplication of those biases may occur. The students don't go on to explain the potential association or interrelation of any of the studies to any other study, something that may profoundly affect a meta-analysis. Here's why; imagine the following sequence of events:

Dr. Greene sets out to study some anecdotal evidence that storks cause babies. He looks at birth rates, and compares them against stork populations, and discovers a clear and consistent correlation between the two; he reports that he has established an “association” between storks and babies.

Dr. Blue includes, by reference, Dr. Greene's findings, and reproduces the raw data and the conclusions.

Dr. Black gets bored one day and decides to study a long-gone fable about storks bringing babies. He does a literature search and finds a paper on the subject matter by Dr. Greene, and another paper by some completely different, unassociated guy named Dr. Blue. Dr. Black decides to perform a comparative "study" to see how closely Dr. Blue's data fit Dr. Greene's data. Dr. Black reports that Dr. Blue's data are "statistically consistent" with Dr. Greene's study.

Dr. Grey finds the whole thing amusing and reports Dr. Black's findings in the Journal of Creative Skepticism.

Jason O'Silverstein Jr., a fictitious student of journalism, decides to do a class project on medical science vs. folklore and chooses the old Eastern European chestnut about storks and babies. Jason trawls the internet and finds four studies by four different MDs and decides to perform a meta-analysis on their studies. Since he never was too good at epidemiology, he really didn't want to get into the problem of compounding confounders. In his final report, he entirely fails to realize that he doesn't have four studies; he has one report that was repeated three different times. However, in his paper, he tells his readers that:

Studies to estimate directly the contribution of storks to birth rate have been conducted in many European countries. Individually, none has been large enough to assess the association reliably. Studies by medical doctors have consistently found associations between storks and babies. (2),(3). Extrapolation from these studies is uncertain but suggests that storks could cause a substantial portion of all minority babies. Greater statistical power can be achieved by combining information from several studies, but this cannot be done satisfactorily from published information. So that is exactly what I’ve done.

Recognize the language? This paper is not science, it is an exercise in tautology. However, there are some notable gems:

Statistical methods
We assessed the association between radon and lung cancer in two ways. Firstly, a model was fitted in which the additional risk of lung cancer was proportional to measured radon.

This is interesting, since most of the time, scientists use models to predict an outcome based on known inputs (i.e., we attempt to predict the number of cancers based on the radon exposure). Here, the students seem to be fitting the model to the outcome (making the inputs fit the outcome). How can one do this? Well, it's easy: since the students made up the exposure data in the first place, one just needs to keep making up different exposures until one gets the model to fit! (Even the global warming proponents haven't tried that … yet.)
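
For contrast, here is what fitting a model "in which the additional risk is proportional to measured radon" normally looks like when the exposure column consists of genuine, fixed measurements. This is a minimal simulation sketch of my own; the variable names, the numbers, and the crude grid-search estimator are all illustrative assumptions, not the paper's method.

    import math
    import random

    random.seed(1)

    # Simulate measured radon concentrations (Bq/m^3) and disease
    # status under an assumed true excess-risk slope.
    beta_true = 0.001   # assumed excess relative risk per Bq/m^3
    baseline = 0.05     # assumed baseline disease probability

    radon = [random.uniform(20, 400) for _ in range(5000)]
    cases = [random.random() < baseline * (1 + beta_true * x) for x in radon]

    def log_likelihood(beta):
        """Binomial log-likelihood of the linear excess-risk model."""
        ll = 0.0
        for x, y in zip(radon, cases):
            p = min(max(baseline * (1 + beta * x), 1e-9), 1 - 1e-9)
            ll += math.log(p) if y else math.log(1 - p)
        return ll

    # Recover the slope by a crude grid search over candidate betas.
    candidates = [i * 1e-5 for i in range(300)]
    beta_hat = max(candidates, key=log_likelihood)
    print(f"recovered slope: {beta_hat:.5f}  (true slope: {beta_true})")

The point of the exercise: the slope estimate is only as good as the exposure column. If the "measurements" are themselves reconstructions, the fit simply inherits the guesswork.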

Also, the students say their model showed that smoking didn't have an effect on lung cancer. But if smoking is a major cause of lung cancer, and radon (according to the students) could only cause a substantial minority of all lung cancers, but ... smoking exhibits less of a risk than radon… well, the whole argument becomes too circuitous to follow.

In conclusion:
This paper is an abstract of a book report of a meta-analysis of at least one other meta-analysis; it never should have made its way through a respectable peer review process, and it is not likely to be used as reference material by any serious epidemiologist or industrial hygienist who may have to take the stand and defend it.

The paper did not represent a study, and did not contain any new research. The paper was an abstract of a report on a meta-analysis that was based on 13 studies that showed that there was no appreciable risk of cancer from residential radon exposure.

The authors of this three-page review (all 26 of them!) reported that although none of the studies in their meta-study showed an appreciable risk of lung cancer due to radon exposure, their work (based on those studies) indicates that radon is responsible for 2% of ALL cancers in Europe. (This would delight the asbestos industry! But they will never reference this paper, for fear of the ridicule its inclusion would invite.)

Paper Number 3

"Residential Radon and Risk of Lung Cancer in Eastern Germany" by Kreuzer, M; Heinrich, H; Wölke, G; et al. Epidemiology, 2003, Volume 14.

Overall, the study appears to be very well constructed, very well thought out, and meticulously conducted. The authors appear to have very carefully planned the study, giving a tremendous amount of effort to understanding and, hopefully, addressing potential confounders and biases.

Ultimately, the credibility of an epidemiological study assessing cause and effect lies in four elements: 1) properly estimating dose, 2) properly identifying biological end points, 3) properly addressing confounders, and 4) properly addressing bias.

In this case, the authors appear to have worked very hard to ensure that each of these elements has been met, to within a reasonable expectation. The study appears to have been conducted in accordance with good experimental methodologies and good scientific principles, and the final work product exhibits a high degree of epidemiological aptitude; as a result, the study displays very high credibility and is a valuable reference.

The study, however, does not support the argument that exposures to residential radon conclusively and significantly increase the risk of lung cancer. I suspect that, once again, just as in the previous two cases, the critic never actually read the study, but rather assumed, based on the title, that the study must support the argument of increased risk, since the title of the paper sounded rather official. I will demonstrate in a moment why I think the critic never read past the title of this study.

Background
The authors of the study set out to characterize risk vs. residential radon exposure. Their tacit (null) hypothesis was: there is no correlation between lung cancer and radon exposure among individuals who are exposed to elevated residential radon. The authors tested the hypothesis, failed to find supporting evidence for it, and (appropriately) rejected it in favor of the alternative.

To test their hypothesis, the authors selected an area in Eastern Germany whose radon concentrations are so high that the normal OUTDOOR concentrations to which people are exposed are HIGHER than most indoor readings on the North American continent.

The authors selected this area precisely because it did NOT represent normal residential radon concentrations. They did this because they realized that if an association actually did exist between residential radon and lung cancer, the association was so weak that it probably could not be observed in normal residences.

Indeed, the authors recognized the association was in fact so weak that special attention needed to be given in this study to smoking, or the risks due to radon exposures might not be observable (if they exist at all).

The type of study is known as a "case-control" study, wherein the lifestyles of predetermined lung cancer patients (the cases) are compared to "randomly" selected people in an area (the controls). The case-control method automatically introduces a type of error called "systematic error," which may "bias" the results one way or the other, since in fact the controls are not random, and the cases may not actually have cancer as a result of radon exposure. The authors adequately explain how they worked to correct for such biases; however, they admit that, at this point in time, we simply cannot adequately control for bias in all studies.

The first sentence of the study’s prologue sets the stage for the presentation and demonstrates that the critic never read this paper because the authors begin with:

There is suggestive evidence that residential radon increases lung cancer risk.

To those of us who work in the field of epidemiology, the term "suggestive evidence" is a standard concept which speaks to "association," but not to "cause." In my second review (found above), I gave an example of this concept, where a fictitious researcher (Dr. Greene) concluded (very correctly) that "there is suggestive evidence that storks increase babies." Dr. Greene did not conclude that storks cause babies, but rather correctly observed and reported a strong and consistent association between storks and babies; this is suggestive evidence.

Another reason I believe the critic did not actually read this study before proffering it as support for his argument is that the THIRD sentence in the paper, which he claimed was proof of scientific consensus that residential radon exposures cause cancer, makes the following statement:

A direct transfer of risk estimates derived from studies of miner to residential environments is not possible due to substantial differences in the levels of radon exposure, confounding factors … differences in age and sex of affected subgroups, and differences in other physical factors such as breathing rate, the size distribution of aerosol particles, the unattached fraction of radon progeny and others.

This statement is consistent with good science, known facts, and my opinion of almost 20 years. This statement is NOT consistent with journalists and other misinformed members of the general public who ignore the vast, overwhelming number of studies that hold this position, but who claim that miners' studies prove that residential exposures to radon cause cancer (for which not a single valid study on the planet Earth exists).

The authors of the present study then lay the groundwork for what IS known:

In the past decade a series of well-conducted epidemiological studies has investigated the risk of lung cancer in relation to indoor radon exposure directly via case-control studies. Some of these studies have found a statistically significant increased lung cancer risk, and other studies have not.

The idea that all scientists conclude that there is concrete evidence to support the notion that exposure to residential radon significantly increases the risk of lung cancer is simply not true. Indeed, all of the knowledgeable scientists whom I know and work with have opinions that are similar to mine; they HAVE to be, because that is what the objective scientific data show.

The authors of this article note that ALL of these studies are impeded by uncertainty in the assessment of exposure, low statistical power, and a limited range of radon concentrations. Of one study referenced by these authors, which had good statistical power and speaks to roughly the same geographic area, the authors tell us:

From 1990 to 1996, a case-control study of lung cancer and indoor radon (comprising 1,449 cases and 2,297 controls) was conducted in Western Germany. There was no association between lung cancer and radon exposure across the entire study area.

I'm not going to go into individual confounders or biases in this study, except to say the authors went to extreme lengths to understand potential confounders and bias and honestly address them. In my opinion, they did an exceptionally fine job. However, I do need to address two epidemiological points that the authors either appeared to overlook, or implicitly addressed by reference and I missed it.

The first issue deals with an epidemiological concept known as "clusters." I'm not going to go into clusters in any great detail, otherwise this review would extend over many pages on just that issue alone, except to say that the cases in this study appeared to resemble what is known as a "cluster." A recognized difficulty in the investigation of a cluster is that there usually are no predetermined boundaries (spatial or temporal). Rather (and possibly in this study), the illnesses have defined the boundaries of the cluster.

That is, "the tail has wagged the dog" instead of the other way around. It is possible that in this otherwise exceptionally well conducted study, the cluster may (or may not) have been erroneously defined by the authors. It is generally considered impossible, except in unusual situations, to determine whether the number of cases is in excess of the number that might be attributable to chance without a priori defined boundaries. Those a priori defined boundaries did not exist in this study.

A second, related issue involves an epidemiological concept known as "necessary cause vs. contributing factor." If a disease appears to have a particular single etiology, it may be because we have defined that disease in terms of that cause (however inappropriately), and we now end up with what is known as "necessary cause" instead of a more appropriate classification of "sufficient cause" or "contributing factor." Here, as with the cluster issue, I wonder whether the authors have crossed this line, since the selectional bias observed by the authors themselves suggests this weakness, but they have not explicitly addressed it. I simply don't know.

In a nutshell, the authors, consistent with other studies and consistent with what I have already discussed here, found that the risk of lung cancer behaved in a non-linear fashion: the risk went DOWN as the radon concentration went UP, and then, above a certain point, reversed, and the risk began to increase with increasing concentrations of radon.

Also, the authors, undaunted by negative correlations, appropriately handled the negative confounder observed with smoking.

Bottom line: the authors appropriately expressed concerns about their data and methods, and in the differential diagnosis found that in this area, where even outdoor concentrations are higher than most indoor US concentrations, and indoor concentrations are much, much greater than in the US:

Our findings suggest a moderate increase in lung cancer risk, which is most pronounced among small cell lung cancer.

The authors place their findings in contrast with other notable studies thusly:

We found a moderate increase in lung cancer risk as a result of residential radon, which is in agreement with the results of previous studies that included direct, long-term measurement of radon using alpha-track detectors. [Three of these] studies reported a statistically significant ERR for an increase of 100 Bq/m3; [six of these studies] observed an elevated ERR that did not achieve statistical significance; whereas no clear effect was found in the remaining [five] studies.

In other words, statistically significant elevated risks were observed in only 3 out of 14 similar good epidemiological studies (three studies showed a risk, and 11 studies failed to show a significant risk).

In conclusion:
The findings of these authors are consistent with what I have been saying for almost 20 years now, and are consistent with the opinions and discussions I have presented on our web discussion.

The findings are not consistent with the fear-mongering hype of journalists and property inspectors or real estate agents who try to frighten people with nonsensical newspaper reports that quote other newspapers as authoritative sources, and who repeatedly reference studies they have never read, and probably never will read, since those very studies oppose their positions.

The discussions found on our web site remain valid until we receive information that contradicts those statements.

References:

[1] Risk Assessment Methodology, Environmental Impact Statement, NESHAPS for Radionuclides, Background Information Document, Volume 1. EPA/520/1-89-005, September 1989.

[2] American Scientist, Vol. 97, No. 6, December 2009, p. 434.

[3] Assessment of Risks from Radon in Homes. United States Environmental Protection Agency, Air and Radiation (6608J), EPA 402-R-03-003, June 2003.


This page was created on March 11, 2007, was updated July 7, 2007, and will be updated with any new and germane information.




To return to our indoor radon discussion click here.

Visitors to this page generally have an interest in scientific issues. If you are interested in such matters, you may find some of our other discussions interesting.


To visit our state-of-knowledge mould page, click here.


A discussion concerning myths surrounding duct cleaning, can be found by clicking here.

For a discussion concerning indoor air quality, click here.

For issues surrounding the history and cause of carpal tunnel syndrome click here.

For a discussion concerning the myths associated with laboratory fume hood face velocities click here.

For a discussion concerning laboratory fume hood evaluations, click here.

Finally, for a listing of documents associated with the ground-breaking State of Colorado regulations concerning methamphetamine laboratories (meth-labs), click here.


Visit our main page!
Forensic Applications Consulting Technologies, Inc.


Feel free to send Forensic Applications, Inc. an email directly by clicking here

185 Bounty Hunter's Lane, Bailey, Colorado. Phone: 303-903-7494