

Confounding variables


Summary

A confounding variable is a variable, other than the independent variable that you're interested in, that may affect the dependent variable. This can lead to erroneous conclusions about the relationship between the independent and dependent variables. You deal with confounding variables by controlling them, by matching, by randomizing, or by statistical control.

Introduction

Due to a variety of genetic, developmental, and environmental factors, no two organisms, no two tissue samples, no two cells are exactly alike. This means that when you design an experiment with samples that differ in independent variable X, your samples will also differ in other variables that you may or may not be aware of. If these confounding variables affect the dependent variable Y that you're interested in, they may trick you into thinking there's a relationship between X and Y when there really isn't. Or, the confounding variables may cause so much variation in Y that it's hard to detect a real relationship between X and Y when there is one.

As an example of confounding variables, imagine that you want to know whether the genetic differences between American elms (which are susceptible to Dutch elm disease) and Princeton elms (a strain of American elms that is resistant to Dutch elm disease) cause a difference in the amount of insect damage to their leaves. You look around your area, find 20 American elms and 20 Princeton elms, pick 50 leaves from each, and measure the area of each leaf that was eaten by insects. Imagine that you find significantly more insect damage on the Princeton elms than on the American elms (I have no idea if this is true).

It could be that the genetic difference between the types of elm directly causes the difference in the amount of insect damage, which is what you were looking for. However, there are likely to be some important confounding variables. For example, many American elms are many decades old, while the Princeton strain of elms was made commercially available only recently and so any Princeton elms you find are probably only a few years old. American elms are often treated with fungicide to prevent Dutch elm disease, while this wouldn't be necessary for Princeton elms. American elms in some settings (parks, streetsides, the few remaining in forests) may receive relatively little care, while Princeton elms are expensive and are likely planted by elm fanatics who take good care of them (fertilizing, watering, pruning, etc.). It is easy to imagine that any difference in insect damage between American and Princeton elms could be caused, not by the genetic differences between the strains, but by a confounding variable: age, fungicide treatment, fertilizer, water, pruning, or something else. If you conclude that Princeton elms have more insect damage because of the genetic difference between the strains, when in reality it's because the Princeton elms in your sample were younger, you will look like an idiot to all of your fellow elm scientists as soon as they figure out your mistake.

On the other hand, let's say you're not that much of an idiot, and you make sure your sample of Princeton elms has the same average age as your sample of American elms. There's still a lot of variation in ages among the individual trees in each sample, and if that affects insect damage, there will be a lot of variation among individual trees in the amount of insect damage. This will make it harder to find a statistically significant difference in insect damage between the two strains of elms, and you might miss out on finding a small but exciting difference in insect damage between the strains.

Controlling confounding variables

Designing an experiment to eliminate differences due to confounding variables is critically important. One way is to control a possible confounding variable, meaning you keep it identical for all the individuals. For example, you could plant a bunch of American elms and a bunch of Princeton elms all at the same time, so they'd be the same age. You could plant them in the same field, and give them all the same amount of water and fertilizer.

It is easy to control many of the possible confounding variables in laboratory experiments on model organisms. All of your mice, or rats, or Drosophila will be the same age, the same sex, and the same inbred genetic strain. They will grow up in the same kind of containers, eating the same food and drinking the same water. But there are always some possible confounding variables that you can't control. Your organisms may all be from the same genetic strain, but new mutations will mean that there are still some genetic differences among them. You may give them all the same food and water, but some may eat or drink a little more than others. After controlling all of the variables that you can, it is important to deal with any other confounding variables by randomizing, matching or statistical control.

Controlling confounding variables is harder with organisms that live outside the laboratory. Those elm trees that you planted in the same field? Different parts of the field may have different soil types, different water percolation rates, different proximity to roads, houses and other woods, and different wind patterns. And if your experimental organisms are humans, there are a lot of confounding variables that are impossible to control.

Randomizing

Once you've designed your experiment to control as many confounding variables as possible, you need to randomize your samples to make sure that they don't differ in the confounding variables that you can't control. For example, let's say you're going to make 20 mice wear sunglasses and leave 20 mice without glasses, to see if sunglasses help prevent cataracts. You shouldn't reach into a bucket of 40 mice, grab the first 20 you catch and put sunglasses on them. The first 20 mice you catch might be easier to catch because they're the slowest, the tamest, or the ones with the longest tails; or you might subconsciously pick out the fattest mice or the cutest mice. I don't know whether having your sunglass-wearing mice be slower, tamer, with longer tails, fatter, or cuter would make them more or less susceptible to cataracts, but you don't know either. You don't want to find a difference in cataracts between the sunglass-wearing and non-sunglass-wearing mice, then have to worry that maybe it's the extra fat or longer tails, not the sunglasses, that caused the difference. So you should randomly assign the mice to the different treatment groups. You could give each mouse an ID number and have a computer randomly assign them to the two groups, or you could just flip a coin each time you pull a mouse out of your bucket of mice.
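
If you'd rather let a computer do the coin flipping, here is a minimal Python sketch of random assignment (the mouse ID numbers and group sizes are just the ones from this example):

    import random

    mouse_ids = list(range(1, 41))   # ID numbers for 40 hypothetical mice
    random.shuffle(mouse_ids)        # random order, like drawing from a hat
    sunglasses_group = mouse_ids[:20]
    control_group = mouse_ids[20:]
    print(sorted(sunglasses_group))
    print(sorted(control_group))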

In the mouse example, you used all 40 of your mice for the experiment. Often, you will sample a small number of observations from a much larger population, and it's important that it be a random sample. In a random sample, each individual has an equal probability of being sampled. To get a random sample of 50 elm trees from a forest with 700 elm trees, you could figure out where each of the 700 elm trees is, give each one an ID number, write the numbers on 700 slips of paper, put the slips of paper in a hat, and randomly draw out 50 (or have a computer randomly choose 50, if you're too lazy to fill out 700 slips of paper or don't own a hat).
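
The computerized version of the slips-of-paper method is a couple of lines; a minimal Python sketch, assuming the 700 trees have been given ID numbers:

    import random

    tree_ids = list(range(1, 701))        # ID numbers for the 700 mapped elm trees
    sample = random.sample(tree_ids, 50)  # 50 distinct trees, each equally likely
    print(sorted(sample))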

You need to be careful to make sure that your sample is truly random. I started to write "Or an easier way to randomly sample 50 elm trees would be to randomly pick 50 locations in the forest by having a computer randomly choose GPS coordinates, then sample the elm tree nearest each random location." However, this would have been a mistake; an elm tree that was far away from other elm trees would almost certainly be the closest to one of your random locations, but you'd be unlikely to sample an elm tree in the middle of a dense bunch of elm trees. It's pretty easy to imagine that proximity to other elm trees would affect insect damage (or just about anything else you'd want to measure on elm trees), so I almost designed a stupid experiment for you.

A random sample is one in which all members of a population have an equal probability of being sampled. If you're measuring fluorescence inside kidney cells, this means that all points inside a cell, and all the cells in a kidney, and all the kidneys in all the individuals of a species, would have an equal chance of being sampled.

A perfectly random sample of observations is difficult to collect, and you need to think about how this might affect your results. Let's say you've used a confocal microscope to take a two-dimensional "optical slice" of a kidney cell. It would be easy to use a random-number generator on a computer to pick out some random pixels in the image, and you could then use the fluorescence in those pixels as your sample. However, if your slice was near the cell membrane, your "random" sample would not include any points deep inside the cell. If your slice was right through the middle of the cell, however, points deep inside the cell would be over-represented in your sample. You might get a fancier microscope, so you could look at a random sample of the "voxels" (three-dimensional pixels) throughout the volume of the cell. But what would you do about voxels right at the surface of the cell? Including them in your sample would be a mistake, because they might include some of the cell membrane and extracellular space, but excluding them would mean that points near the cell membrane are under-represented in your sample.
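
For the pixel-picking step itself, a minimal Python sketch (a random array stands in for the image; note that, per the caveat above, this gives a random sample of the slice, not of the whole cell):

    import numpy as np

    rng = np.random.default_rng(seed=1)
    image = rng.random((512, 512))      # stand-in for a 512x512 fluorescence image
    n = 100
    rows = rng.integers(0, image.shape[0], size=n)  # random pixel coordinates
    cols = rng.integers(0, image.shape[1], size=n)
    pixel_sample = image[rows, cols]    # fluorescence values at the sampled pixels
    print(pixel_sample.mean())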

Matching

Sometimes there's a lot of variation in confounding variables that you can't control; even if you randomize, the large variation in confounding variables may cause so much variation in your dependent variable that it would be hard to detect a difference caused by the independent variable that you're interested in. This is particularly true for humans. Let's say you want to test catnip oil as a mosquito repellent. If you were testing it on rats, you would get a bunch of rats of the same age and sex and inbred genetic strain, apply catnip oil to half of them, then put them in a mosquito-filled room for a set period of time and count the number of mosquito bites. This would be a nice, well-controlled experiment, and with a moderate number of rats you could see whether the catnip oil caused even a small change in the number of mosquito bites. But if you wanted to test the catnip oil on humans going about their everyday life, you couldn't get a bunch of humans of the same "inbred genetic strain," it would be hard to get a bunch of people all of the same age and sex, and the people would differ greatly in where they lived, how much time they spent outside, the scented perfumes, soaps, deodorants, and laundry detergents they used, and whatever else it is that makes mosquitoes ignore some people and eat others up. The very large variation in number of mosquito bites among people would mean that if the catnip oil had a small effect, you'd need a huge number of people for the difference to be statistically significant.
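
To put a rough number on "huge", here is a minimal power-calculation sketch in Python using statsmodels; the two effect sizes are invented stand-ins for a well-controlled rat study versus a noisy human study:

    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    # Sample size per group for 80% power at alpha = 0.05 (two-sided t-test)
    for d in (0.8, 0.2):  # hypothetical large (rats) vs small (noisy humans) effect
        n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
        print(d, round(n))

The smaller the effect relative to the noise, the more the required sample size balloons, which is exactly why the large person-to-person variation matters.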

One way to reduce the noise due to confounding variables is by matching. You generally do this when the independent variable is a nominal variable with two values, such as "drug" vs. "placebo." You make observations in pairs, one for each value of the independent variable, that are as similar as possible in the confounding variables. The pairs could be different parts of the same people. For example, you could test your catnip oil by having people put catnip oil on one arm and placebo oil on the other arm. The variation in the size of the difference between the two arms on each person will be a lot smaller than the variation among different people, so you won't need nearly as big a sample size to detect a small difference in mosquito bites between catnip oil and placebo oil. Of course, you'd have to randomly choose which arm to put the catnip oil on.
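
A paired design calls for a paired analysis. Here is a minimal Python sketch using a paired t-test on invented bite counts (with real data you would first check that the within-person differences are roughly normal, or use a nonparametric paired test):

    from scipy import stats

    # Invented bite counts: catnip-oil arm vs placebo arm for ten people
    catnip  = [3, 5, 2, 8, 4, 6, 1, 7, 3, 5]
    placebo = [6, 9, 4, 12, 7, 8, 3, 10, 5, 9]
    result = stats.ttest_rel(catnip, placebo)  # paired test on within-person differences
    print(result.statistic, result.pvalue)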

Other ways of pairing include before-and-after experiments. You could count the number of mosquito bites in one week, then have people use catnip oil and see if the number of mosquito bites for each person went down. With this kind of experiment, it's important to make sure that the dependent variable wouldn't have changed by itself (maybe the weather changed and the mosquitoes stopped biting), so it would be better to use placebo oil one week and catnip oil another week, and randomly choose for each person whether the catnip oil or placebo oil was first.

For many human experiments, you'll need to match two different people, because you can't test both the treatment and the control on the same person. For example, let's say you've given up on catnip oil as a mosquito repellent and are going to test it on humans as a cataract preventer. You're going to get a bunch of people, have half of them take a catnip-oil pill and half take a placebo pill for five years, then compare the lens opacity in the two groups. Here the goal is to make each pair of people be as similar as possible in confounding variables that you think might be important. If you're studying cataracts, you'd want to match people based on known risk factors for cataracts: age, amount of time outdoors, use of sunglasses, blood pressure. Of course, once you have a matched pair of individuals, you'd want to randomly choose which one gets the catnip oil and which one gets the placebo. You wouldn't be able to find perfectly matching pairs of individuals, but the better the match, the easier it will be to detect a difference due to the catnip-oil pills.
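
One simple way to build such pairs is greedy nearest-neighbor matching; here is a minimal Python sketch matching on a single variable, age, with invented names and ages (real studies typically match on several risk factors at once, for example with a distance metric or propensity scores):

    # Greedy nearest-neighbor matching on one variable (invented names and ages)
    treated   = {"Alice": 61, "Carol": 55, "Eve": 70}
    untreated = {"Bob": 60, "Dan": 57, "Frank": 68, "Grace": 45}

    pairs = []
    available = dict(untreated)
    for name, age in sorted(treated.items()):
        # pick the closest-aged control still available
        match = min(available, key=lambda c: abs(available[c] - age))
        pairs.append((name, match))
        del available[match]
    print(pairs)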

One kind of matching that is often used in epidemiology is the case-control study. "Cases" are people with some disease or condition, and each is matched with one or more controls. Each control is generally the same sex and as similar in other factors (age, ethnicity, occupation, income) as practical. The cases and controls are then compared to see whether there are consistent differences between them. For example, if you wanted to know whether smoking marijuana caused or prevented cataracts, you could find a bunch of people with cataracts. You'd then find a control for each person who was similar in the known risk factors for cataracts (age, time outdoors, blood pressure, diabetes, steroid use). Then you would ask the cataract cases and the non-cataract controls how much weed they'd smoked.

If it's hard to find cases and easy to find controls, a case-control study may include two or more controls for each case. This gives somewhat more statistical power.

Statistical control

When it isn't practical to keep all the possible confounding variables constant, another solution is to statistically control them. Sometimes you can do this with a simple ratio. If you're interested in the effect of weight on cataracts, height would be a confounding variable, because taller people tend to weigh more. Using the body mass index (BMI), which is weight in kilograms divided by the square of height in meters, would remove much of the confounding effect of height in your study. If you need to remove the effects of multiple confounding variables, there are multivariate statistical techniques you can use. However, the analysis, interpretation, and presentation of complicated multivariate analyses are not easy.
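
As a one-line illustration of this kind of ratio adjustment, a Python sketch of the BMI calculation (the example weight and height are arbitrary):

    def bmi(weight_kg, height_m):
        """Body mass index: weight divided by the square of height."""
        return weight_kg / height_m ** 2

    print(bmi(70, 1.75))  # about 22.9 for an arbitrary example person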

Observer or subject bias as a confounding variable

In many studies, the possible bias of the researchers is one of the most important confounding variables. Finding a statistically significant result is almost always more interesting than not finding a difference, so you need to constantly be on guard to control the effects of this bias. The best way to do this is by blinding yourself, so that you don't know which individuals got the treatment and which got the control. Going back to our catnip oil and mosquito experiment, if you know that Alice got catnip oil and Bob didn't, your subconscious body language and tone of voice when you talk to Alice might imply "You didn't get very many mosquito bites, did you? That would mean that the world will finally know what a genius I am for inventing this," and you might carefully scrutinize each red bump and decide that some of them were spider bites or poison ivy, not mosquito bites. With Bob, who got the placebo, you might subconsciously imply "Poor Bob—I'll bet you got a ton of mosquito bites, didn't you? The more you got, the more of a genius I am" and you might be more likely to count every hint of a bump on Bob's skin as a mosquito bite. Ideally, the subjects shouldn't know whether they got the treatment or placebo, either, so that they can't give you the result you want; this is especially important for subjective variables like pain. Of course, keeping the subjects of this particular imaginary experiment blind to whether they're rubbing catnip oil on their skin is going to be hard, because Alice's cat keeps licking Alice's arm and then acting stoned.




In statistics, a confounder (also confounding variable, confounding factor, or lurking variable) is a variable that influences both the dependent variable and the independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations.[1][2][3]

Definition

Confounding is defined in terms of the data generating model. Let X be some independent variable and Y some dependent variable. To estimate the effect of X on Y, the statistician must suppress the effects of extraneous variables that influence both X and Y. We say that X and Y are confounded by some other variable Z whenever Z is a cause of both X and Y.

Let P(y | do(x)) be the probability of event Y = y under the hypothetical intervention X = x. X and Y are not confounded if and only if the following holds:

P(y | do(x)) = P(y | x)     (1)

for all values X = x and Y = y, where P(y | x) is the conditional probability upon seeing X = x. Intuitively, this equality states that X and Y are not confounded whenever the observationally witnessed association between them is the same as the association that would be measured in a controlled experiment, with x randomized.

In principle, the defining equality P(y | do(x)) = P(y | x) can be verified from the data generating model assuming we have all the equations and probabilities associated with the model. This is done by simulating an intervention do(X = x) (see Bayesian network) and checking whether the resulting probability of Y equals the conditional probability P(y | x). It turns out, however, that graph structure alone is sufficient for verifying the equality P(y | do(x)) = P(y | x).
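
As an illustration of this check, here is a minimal Python simulation of one hypothetical data-generating model in which Z causes both X and Y; all the probabilities in the model are invented. The observational P(y | x) and the interventional P(y | do(x)) come out different, so X and Y are confounded in this model:

    import random

    random.seed(0)
    N = 100_000

    def draw(p):
        return 1 if random.random() < p else 0

    # Data-generating model: Z causes X, and both Z and X cause Y
    def simulate(do_x=None):
        xs, ys = [], []
        for _ in range(N):
            z = draw(0.5)
            x = draw(0.8 if z else 0.2) if do_x is None else do_x
            y = draw(0.9 if (x and z) else (0.5 if (x or z) else 0.1))
            xs.append(x)
            ys.append(y)
        return xs, ys

    xs, ys = simulate()                    # observational data
    p_obs = sum(y for x, y in zip(xs, ys) if x) / sum(xs)  # P(y=1 | x=1)

    _, ys_do = simulate(do_x=1)            # interventional data: force X = 1
    p_do = sum(ys_do) / N                  # P(y=1 | do(x=1))

    print(round(p_obs, 3), round(p_do, 3))  # they differ, so X and Y are confounded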

Control

Consider a researcher attempting to assess the effectiveness of drug X, from population data in which drug usage was a patient's choice. The data show that gender (Z) differences influence a patient's choice of drug as well as their chances of recovery (Y). In this scenario, gender Z confounds the relation between X and Y, since Z is a cause of both X and Y. We have that

P(y | do(x)) ≠ P(y | x)     (2)

because the observational quantity contains information about the correlation between X and Z, and the interventional quantity does not (since X is not correlated with Z in a randomized experiment). Clearly the statistician desires the unbiased estimate P(y | do(x)), but in cases where only observational data are available, an unbiased estimate can only be obtained by "adjusting" for all confounding factors, namely, conditioning on their various values and averaging the result. In the case of a single confounder Z, this leads to the "adjustment formula":

P(y | do(x)) = Σ_z P(y | x, z) P(z)     (3)

which gives an unbiased estimate for the causal effect of X on Y. The same adjustment formula works when there are multiple confounders except, in this case, the choice of a set Z of variables that would guarantee unbiased estimates must be done with caution. The criterion for a proper choice of variables is called the Back-Door criterion[4][5] and requires that the chosen set Z "blocks" (or intercepts) every path from X to Y that ends with an arrow into X. Such sets are called "Back-Door admissible" and may include variables which are not common causes of X and Y, but merely proxies thereof.

Returning to the drug use example, since Z complies with the Back-Door requirement (i.e., it intercepts the one Back-Door path X ← Z → Y), the Back-Door adjustment formula is valid:

P(y | do(x)) = P(y | x, Z = male) P(Z = male) + P(y | x, Z = female) P(Z = female)     (4)

In this way the physician can predict the likely effect of administering the drug from observational studies in which the conditional probabilities appearing on the right-hand side of the equation can be estimated by regression.
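
As a numerical illustration of the adjustment formula, here is a minimal Python sketch applied to an invented observational table for the drug/gender example (all counts are made up):

    # All counts are invented for illustration: counts[z][x] = (recovered, total)
    counts = {
        "male":   {"drug": (81, 87),   "no_drug": (234, 270)},
        "female": {"drug": (192, 263), "no_drug": (55, 80)},
    }

    totals = {z: sum(n for _, n in d.values()) for z, d in counts.items()}
    grand = sum(totals.values())             # total number of patients

    def p_do(x):
        # Adjustment formula: P(y | do(x)) = sum_z P(y | x, z) * P(z)
        return sum((counts[z][x][0] / counts[z][x][1]) * (totals[z] / grand)
                   for z in counts)

    print(p_do("drug"), p_do("no_drug"))     # causal recovery rates, per Eq. (4)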

Contrary to common belief, adding covariates to the adjustment set Z can introduce bias. A typical counterexample occurs when Z is a common effect of X and Y,[6] a case in which Z is not a confounder (i.e., the null set is Back-Door admissible) and adjusting for Z would create bias known as "collider bias" or "Berkson's paradox."

In general, confounding can be controlled by adjustment if and only if there is a set of observed covariates that satisfies the Back-Door condition. Moreover, if Z is such a set, then the adjustment formula of Eq. (3) is valid.[4][5] Pearl's do-calculus provides additional conditions under which P(y | do(x)) can be estimated, not necessarily by adjustment.[7]

History

According to Morabia (2011),[8] the word derives from the Medieval Latin verb "confundere", which meant "mixing", and was probably chosen to represent the confusion (from Latin: con = with + fusus = mix or fuse together) between the cause one wishes to assess and other causes that may affect the outcome and thus confuse, or stand in the way of, the desired assessment. Fisher used the word "confounding" in his 1935 book The Design of Experiments[9] to denote any source of error in his ideal of a randomized experiment. According to Vandenbroucke (2004)[10] it was Kish[11] who used the word "confounding" in the modern sense of the word, to mean "incomparability" of two or more groups (e.g., exposed and unexposed) in an observational study.

Formal conditions defining what makes certain groups "comparable" and others "incomparable" were later developed in epidemiology by Greenland and Robins (1986)[12] using the counterfactual language of Neyman (1935)[13] and Rubin (1974).[14] These were later supplemented by graphical criteria such as the Back-Door condition (Pearl 1993; Greenland, Pearl and Robins, 1999).[3][4]

Graphical criteria were shown to be formally equivalent to the counterfactual definition,[15] but more transparent to researchers relying on process models.

Types

In the case of risk assessments evaluating the magnitude and nature of risk to human health, it is important to control for confounding to isolate the effect of a particular hazard such as a food additive, pesticide, or new drug. For prospective studies, it is difficult to recruit and screen for volunteers with the same background (age, diet, education, geography, etc.), and in historical studies, there can be similar variability. Because the variability among volunteers in human studies cannot be fully controlled, confounding is a particular challenge. For these reasons, controlled experiments offer a way to avoid most forms of confounding.

In some disciplines, confounding is categorized into different types. In epidemiology, one type is "confounding by indication",[16] which relates to confounding from observational studies. Because prognostic factors may influence treatment decisions (and bias estimates of treatment effects), controlling for known prognostic factors may reduce this problem, but it is always possible that a forgotten or unknown factor was not included or that factors interact complexly. Confounding by indication has been described as the most important limitation of observational studies. Randomized trials are not affected by confounding by indication due to random assignment.

Confounding variables may also be categorised according to their source: the choice of measurement instrument (operational confound), situational characteristics (procedural confound), or inter-individual differences (person confound).

  • An operational confounding can occur in both experimental and non-experimental research designs. This type of confounding occurs when a measure designed to assess a particular construct inadvertently measures something else as well.[17]
  • A procedural confounding can occur in a laboratory experiment or a quasi-experiment. This type of confound occurs when the researcher mistakenly allows another variable to change along with the manipulated independent variable.[17]
  • A person confounding occurs when two or more groups of units are analyzed together (e.g., workers from different occupations), despite varying according to one or more other (observed or unobserved) characteristics (e.g., gender).[18]

Examples

As another concrete example, say one is studying the relation between birth order (1st child, 2nd child, etc.) and the presence of Down's Syndrome in the child. In this scenario, maternal age would be a confounding variable:

  1. Higher maternal age is directly associated with Down's Syndrome in the child
  2. Higher maternal age is directly associated with Down's Syndrome, regardless of birth order (a mother having her 1st vs 3rd child at age 50 confers the same risk)
  3. Maternal age is directly associated with birth order (the 2nd child, except in the case of twins, is born when the mother is older than she was for the birth of the 1st child)
  4. Maternal age is not a consequence of birth order (having a 2nd child does not change the mother's age)

In risk assessments, factors such as age, gender, and educational level often affect health status and so should be controlled. Beyond these factors, researchers may not consider or have access to data on other causal factors. An example is the study of the effects of smoking tobacco on human health. Smoking, drinking alcohol, and diet are related lifestyle activities. A risk assessment that looks at the effects of smoking but does not control for alcohol consumption or diet may overestimate the risk of smoking.[19] Smoking and confounding are reviewed in occupational risk assessments such as the safety of coal mining.[20] When there is not a large sample population of non-smokers or non-drinkers in a particular occupation, the risk assessment may be biased towards finding a negative effect on health.

Decreasing the potential for confounding

A reduction in the potential for the occurrence and effect of confounding factors can be obtained by increasing the types and numbers of comparisons performed in an analysis. If measures or manipulations of core constructs are confounded (i.e. operational or procedural confounds exist), subgroup analysis may not reveal problems in the analysis. Additionally, increasing the number of comparisons can create other problems (see multiple comparisons).

Peer review is a process that can assist in reducing instances of confounding, either before study implementation or after analysis has occurred. Peer review relies on collective expertise within a discipline to identify potential weaknesses in study design and analysis, including ways in which results may depend on confounding. Similarly, replication can test for the robustness of findings from one study under alternative study conditions or alternative analyses (e.g., controlling for potential confounds not identified in the initial study).

Confounding effects may be less likely to occur and act similarly at multiple times and locations, so studying an effect at several times and sites can help rule them out. In selecting study sites, the environment can be characterized in detail to ensure that sites are ecologically similar and therefore less likely to have confounding variables. Lastly, the relationship between the environmental variables that possibly confound the analysis and the measured parameters can be studied. The information pertaining to environmental variables can then be used in site-specific models to identify residual variance that may be due to real effects.[21]

Depending on the type of study design in place, there are various ways to modify that design to actively exclude or control confounding variables:[22]

  • Case-control studies assign confounders to both groups, cases and controls, equally. For example, if somebody wanted to study the causes of myocardial infarction and thought that age was a probable confounding variable, each 67-year-old infarct patient would be matched with a healthy 67-year-old "control" person. In case-control studies, the matched variables are most often age and sex. Drawback: case-control studies are feasible only when it is easy to find controls, i.e. persons whose status vis-à-vis all known potential confounding factors is the same as that of the case patient. Suppose a case-control study attempts to find the cause of a given disease in a person who is 1) 45 years old, 2) African-American, 3) from Alaska, 4) an avid football player, 5) vegetarian, and 6) working in education. A theoretically perfect control would be a person who, in addition to not having the disease being investigated, matches all these characteristics and has no diseases that the patient does not also have—but finding such a control would be an enormous task.
  • Cohort studies: A degree of matching is also possible in cohort studies, and it is often done by admitting only certain age groups or a certain sex into the study population, creating a cohort of people who share similar characteristics, so that all cohorts are comparable with regard to the possible confounding variable. For example, if age and sex are thought to be confounders, only males aged 40 to 50 would be involved in a cohort study assessing the myocardial infarct risk in cohorts that are either physically active or inactive. Drawback: In cohort studies, the overexclusion of input data may lead researchers to define too narrowly the set of similarly situated persons for whom they claim the study to be useful, such that other persons to whom the causal relationship does in fact apply may lose the opportunity to benefit from the study's recommendations. Similarly, "over-stratification" of input data within a study may reduce the sample size in a given stratum to the point where generalizations drawn by observing the members of that stratum alone are not statistically significant.
  • Double blinding: conceals from the trial population and the observers the experiment group membership of the participants. By preventing the participants from knowing if they are receiving treatment or not, the placebo effect should be the same for the control and treatment groups. By preventing the observers from knowing of their membership, there should be no bias from researchers treating the groups differently or from interpreting the outcomes differently.
  • Randomized controlled trial: A method where the study population is divided randomly in order to mitigate the chances of self-selection by participants or bias by the study designers. Before the experiment begins, the testers will assign the members of the participant pool to their groups (control, intervention, parallel), using a randomization process such as a random number generator. For example, in a study on the effects of exercise, the conclusions would be less valid if participants were given a choice of whether to belong to the control group (which would not exercise) or the intervention group (which would take part in an exercise program). The study would then capture other variables besides exercise, such as pre-experiment health levels and motivation to adopt healthy activities. From the observer's side, the experimenter may choose candidates who are more likely to show the results the study wants to see, or may interpret subjective results (more energetic, positive attitude) in a way favorable to their desires.
  • Stratification: As in the example above, physical activity is thought to be a behaviour that protects from myocardial infarct, and age is assumed to be a possible confounder. The sampled data are then stratified by age group – this means that the association between activity and infarct is analyzed within each age group. If the different age groups (or age strata) yield very different risk ratios, age must be viewed as a confounding variable. There exist statistical tools, among them Mantel–Haenszel methods, that account for stratification of data sets; a minimal sketch of the Mantel–Haenszel pooled odds ratio follows this list.
  • Controlling for confounding by measuring the known confounders and including them as covariates constitutes multivariable analysis, such as regression analysis. Multivariable analyses reveal much less information about the strength or polarity of the confounding variable than stratification methods do. For example, if a multivariable analysis controls for antidepressant use, and does not stratify antidepressants into TCAs and SSRIs, it will ignore the fact that these two classes of antidepressant have opposite effects on myocardial infarction, and that one is much stronger than the other.
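
Here is the Mantel–Haenszel sketch promised above: a minimal Python computation of the pooled odds ratio across age strata, with all counts invented for illustration:

    # One invented (a, b, c, d) table per age stratum:
    #   a = active with infarct,   b = active without
    #   c = inactive with infarct, d = inactive without
    strata = [
        (10, 90, 20, 80),   # ages 40-44
        (15, 85, 30, 70),   # ages 45-49
        (20, 80, 40, 60),   # ages 50-54
    ]

    # Mantel-Haenszel pooled odds ratio: sum(a*d/n) / sum(b*c/n)
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    print(num / den)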

All these methods have their drawbacks:

  1. The best available defense against the possibility of spurious results due to confounding is often to dispense with efforts at stratification and instead conduct a randomized study of a sufficiently large sample taken as a whole, such that all potential confounding variables (known and unknown) will be distributed by chance across all study groups and hence will be uncorrelated with the binary variable for inclusion/exclusion in any group.
  2. Ethical considerations: In double-blind and randomized controlled trials, participants are not aware that they may be recipients of sham treatments and may be denied effective treatments.[23] There is resistance to randomized controlled trials in surgery because it is argued that patients only agree to invasive surgery (which carries real medical risks) under the understanding that they are receiving treatment. Although this is a real ethical concern, it is not a complete account of the situation. For surgeries that are currently performed regularly, but for which there is no concrete evidence of a genuine effect, it may be unethical to continue without conducting sham-control studies: otherwise, thousands if not millions of people will continue to be exposed to the very real risks of surgery even though these treatments may offer no discernible benefit. It is only via the use of sham surgery as a control that medical science can determine whether a surgical procedure is efficacious. Given that there are known risks associated with medical operations, it is arguably unethical to allow unverified surgeries to be conducted indefinitely. There are undeniably risks to the research participants who receive the sham treatment in placebo-controlled studies, but those who receive the supposed "treatment" are exposed to the same risks, possibly for no gain. The potential benefits to society from abandoning useless surgeries could be immense, and these benefits especially accrue to future sufferers of the same condition, who might otherwise receive ineffective interventions and be exposed to unnecessary medical risk. These potential benefits cannot simply be ignored if we are to act ethically. Placebo-controlled studies offer the greatest clarification of what works and what does not, and the ethical concerns can be minimized by informing all research participants at study intake that they may be assigned to the placebo group. To allow patients to volunteer when fully informed in this manner is to allow them to choose to gift an advancement in scientific knowledge, and it is not at all ethically obvious that sufferers of medical conditions should be denied the right to participate in such research.


References

  1. Pearl, J. (2009). "Simpson's Paradox, Confounding, and Collapsibility". In Causality: Models, Reasoning and Inference (2nd ed.). New York: Cambridge University Press.
  2. VanderWeele, T. J.; Shpitser, I. (2013). "On the definition of a confounder". Annals of Statistics. 41: 196–220. doi:10.1214/12-aos1058.
  3. Greenland, S.; Robins, J. M.; Pearl, J. (1999). "Confounding and Collapsibility in Causal Inference". Statistical Science. 14 (1): 29–46. doi:10.1214/ss/1009211805.
  4. Pearl, J. (1993). "Aspects of Graphical Models Connected With Causality". In Proceedings of the 49th Session of the International Statistical Institute, pp. 391–401.
  5. Pearl, J. (2009). "Causal Diagrams and the Identification of Causal Effects". In Causality: Models, Reasoning and Inference (2nd ed.). New York: Cambridge University Press.
  6. Lee, P. H. (2014). "Should We Adjust for a Confounder if Empirical and Theoretical Criteria Yield Contradictory Results? A Simulation Study". Scientific Reports. 4: 6085. doi:10.1038/srep06085.
  7. Shpitser, I.; Pearl, J. (2008). "Complete identification methods for the causal hierarchy". Journal of Machine Learning Research. 9: 1941–1979.
  8. Morabia, A. (2011). "History of the modern epidemiological concept of confounding". Journal of Epidemiology and Community Health. 65 (4): 297–300. doi:10.1136/jech.2010.112565.
  9. Fisher, R. A. (1935). The Design of Experiments. pp. 114–145.
  10. Vandenbroucke, J. P. (2004). "The history of confounding". Soz Praventiv Med. 47 (4): 216–224.
  11. Kish, L. (1959). "Some statistical problems in research design". Am Sociol. 26: 328–338.
  12. Greenland, S.; Robins, J. M. (1986). "Identifiability, exchangeability, and epidemiological confounding". International Journal of Epidemiology. 15 (3): 413–419. doi:10.1093/ije/15.3.413.
  13. Neyman, J., with cooperation of K. Iwaskiewics and St. Kolodziejczyk (1935). "Statistical problems in agricultural experimentation (with discussion)". Supplement to the Journal of the Royal Statistical Society. 2: 107–180.
  14. Rubin, D. B. (1974). "Estimating causal effects of treatments in randomized and nonrandomized studies". Journal of Educational Psychology. 66: 688–701. doi:10.1037/h0037350.
  15. Pearl, J. (2009). Causality: Models, Reasoning and Inference (2nd ed.). New York: Cambridge University Press.
  16. Johnston, S. C. (2001). "Identifying Confounding by Indication through Blinded Prospective Review". American Journal of Epidemiology. 154: 276–284. doi:10.1093/aje/154.3.276.
  17. Pelham, Brett (2006). Conducting Research in Psychology. Belmont: Wadsworth. ISBN 0-534-53294-2.
  18. Steg, L.; Buunk, A. P.; Rothengatter, T. (2008). "Chapter 4". Applied Social Psychology: Understanding and Managing Social Problems. Cambridge, UK: Cambridge University Press.
  19. Tjønneland, Anne; Grønbæk, Morten; Stripp, Connie; Overvad, Kim (January 1999). "Wine intake and diet in a random sample of 48763 Danish men and women". American Journal of Clinical Nutrition. 69 (1): 49–54.
  20. Axelson, O. (1989). "Confounding from smoking in occupational epidemiology". British Journal of Industrial Medicine. 46: 505–507. doi:10.1136/oem.46.8.505.
  21. Calow, Peter P. (2009). Handbook of Environmental Risk Assessment and Management. Wiley.
  22. Mayrent, Sherry L. (1987). Epidemiology in Medicine. Lippincott Williams & Wilkins. ISBN 0-316-35636-0.
  23. Emanuel, Ezekiel J.; Miller, Franklin G. (Sep 20, 2001). "The Ethics of Placebo-Controlled Trials—A Middle Ground". New England Journal of Medicine. 345 (12): 915–919. doi:10.1056/nejm200109203451211. PMID 11565527.

Further reading

This textbook has a nice overview of confounding factors and how to account for them in design of experiments:

  • Montgomery, D. C. (2001). "Blocking and Confounding in the Factorial Design". Design and Analysis of Experiments (Fifth ed.). Wiley. pp. 287–302. 

