Epidemiology

Health is everybody's natural concern and an everyday theme in the media. Outbreaks of disease such as the most recent influenza epidemic, occurring in many countries at the same time, make front-page news. Beyond epidemics, novel findings on dangerous pollutants in the environment, substances in food that prevent cancer, genes predisposing to disease, or drugs promising to wipe diseases out are reported regularly. Their actual relevance for human health depends crucially on the accumulation of evidence from studies that directly observe and evaluate what happens in human populations and groups, guided by the principles of epidemiology.

These studies combine two features. They explore health and disease with the instruments of medical research, ranging from records of medical histories and measures of height, weight, and blood pressure to a wide variety of diagnostic tests and procedures. At the same time, they involve individuals living in society, exposed to a multitude of influences, and they cannot be conducted in the isolated and fully controlled conditions of laboratory experiments. Their design, conduct, and analysis require, instead, the methods of statistics and of social sciences such as demography, the quantitative study of human populations. Without a clear understanding of this composite nature of epidemiology, and of its reasoning in terms of probability and statistics, it is hard to appreciate the strengths and weaknesses of the scientific evidence relevant to medicine and public health that epidemiology keeps producing. It is not only among the general public that a woolly appreciation, or even a frank misreading, of epidemiology often surfaces, for instance in debates on risks or on the merit, real or imagined, of a disease treatment. In my experience the same may occur with journalists, members of ethics committees, health administrators, health policy makers, and even with experts in disciplines other than epidemiology who are responsible for evaluating and funding research projects.

(Figure 1: Epidemiology in the news. A collage of headlines on the Ebola virus disease outbreak in West Africa, including the WHO Global Alert and Response bulletin and Emory University Hospital coverage.)

On 28 February 2003, the French Hospital of Hanoi, Vietnam, a private hospital of fewer than 60 beds, consulted the Hanoi office of the World Health Organization (WHO). A business traveler from Hong Kong had been hospitalized on 26 February for respiratory symptoms resembling influenza that had started three days before. The WHO medical officer, Dr Carlo Urbani, an infectious diseases epidemiologist and a previous member of Médecins Sans Frontières, answered the call. Within days, in the course of which three more people fell ill with the same symptoms, he recognized the aggressiveness and the highly contagious nature of the disease. It looked like influenza but it wasn't. Early in March the first patient died, while similar cases started to show up in Hong Kong and elsewhere. Dr Urbani courageously persisted in working in what he knew to be a highly hazardous environment. After launching a worldwide alert via the WHO surveillance network, he fell ill while travelling to Bangkok and died on 29 March. A run of new cases, some fatal, was now occurring not only among the staff of the French Hospital, but in Hong Kong, Taiwan, Singapore, mainland China, and Canada. Public health services were confronted with two related tasks: to build an emergency worldwide net of containment, while investigating the ways in which the contagion spread in order to pinpoint its origin and to discover how the responsible agent, most probably a micro-organism, was propagated. It took four months to identify the culprit of the new disease as a virus of the corona-virus family that had jumped to infect humans from small wild animals handled and consumed as food in the Guangdong province of China. By July 2003, the worldwide propagation of the virus, occurring essentially via infected air travelers, was blocked. The outbreak of the new disease, labeled SARS (Severe Acute Respiratory Syndrome), stopped at some 8,000 cases and 800 deaths.
The toll would have been much heavier were it not for a remarkable international collaboration to control the spread of the virus through isolation of cases and control of wildlife markets. Epidemiology was at the heart of this effort, combining investigations in the populations hit by SARS with laboratory studies that provided the knowledge required for the disease-control interventions.

Epidemiology owes its name to 'epidemic', derived from the Greek epi (on) and demos (population). Epidemics like SARS that strike as unusual appearances of a disease in a population require immediate investigation, but essentially the same investigative approach applies to diseases in general, whether unusual in type or frequency or present all the time in a population in an 'endemic' form. In fact, the same methods are used to study normal physiological events such as reproduction and pregnancy, and physical and mental growth, in populations. Put concisely, epidemiology is the study of health and disease in populations.

The population aspect is the distinctive trait of epidemiology, while health and disease are investigated at other levels as well. In fact, when 'medicine' is referred to without specification, one thinks spontaneously of clinical medicine, which deals with health and disease in individuals. We may also imagine laboratory scientists carrying out biological experiments, the results of which may hopefully be translated into diagnostic or treatment innovations in clinical medicine. By contrast, the population dimension of health and disease, and with it epidemiology, is less prominent in the minds of most people. In the past, when introduced to someone as an epidemiologist, I was not infrequently greeted with the remark 'I see, you are a specialist treating skin diseases'. (Clearly the person thought of some fancy 'epidemiology', alias dermatology. Now I introduce myself as a public health physician, which works much better.)

A flashback into history

Clear antecedents of contemporary epidemiology can be traced back more than 2,000 years. The writings of the great Greek physician Hippocrates (c. 470 to c. 400 BC) provide not only the first known descriptions, accurate and complete, of diseases such as tetanus, typhus, and phthisis (now tuberculosis of the lung), but also show an extraordinarily perceptive approach to the causes of diseases. Like a modern epidemiologist, Hippocrates does not confine his view of medicine and disease to his individual patients but sees health and disease as dependent on a broad context of environmental and lifestyle factors.

According to Hippocrates:

Whoever wishes to investigate medicine properly should proceed thus: in the first place to consider the seasons of the year, and what effects each of them produces. Then the winds, the hot and the cold, especially such as are common to all countries, and then such as are peculiar to each locality. In the same manner, when one comes into a city to which he is a stranger, he should consider its situation, how it lies as to the winds and the rising of the sun; for its influence is not the same whether it lies to the north or to the south, to the rising or to the setting of the sun. One should consider most attentively the waters which the inhabitants use, whether they be marshy and soft, or hard and running from elevated and rocky situations, and then if saltish and unfit for cooking; and on the ground, whether it be naked and deficient in water, or wooded and well watered, and whether it lies in a hollow, confined situation, or it is elevated and cold; and the mode in which the inhabitants live, and what are their pursuits, whether they are fond of drinking and eating to excess, and given to indolence, or are fond of exercise and labour.

Hippocrates, On Airs, Waters and Places 

Many centuries would elapse, however, before epidemiology could move from perceptive observations and insights to a quantitative description and analysis of diseases in populations. The necessary premise was the revolution in science ushered in by Galileo Galilei (1564-1642), who for the first time systematically combined observation and measurement of natural phenomena with experiments designed to explore the underlying regulating laws, expressible in mathematical form (for example, the law of acceleration of falling bodies). The work of John Graunt (1620-74), a junior contemporary of Galilei, is a remarkable example of the general intellectual climate promoting accurate collection and quantitative analyses of data on natural phenomena. In his Natural and Political Observations Upon the Bills of Mortality of London, Graunt uses simple (by our standards) but rigorous mathematical methods to analyse mortality in the whole population, including comparisons between men and women and by type of diseases (acute or chronic). Later progress in epidemiology was made possible by two developments. First, the expansion in collection of data on the size and structure of populations by age and sex, and on vital events such as births and deaths; and second, advances in mathematical tools dealing with chance and probabilities, initially arising out of card and dice games, which were soon seen to be equally applicable to natural events like births and deaths.

By the early 19th century, most of the principles and ideas guiding today's epidemiology had already been established, as even a cursory look at the subsequent history shows.

In France, Pierre-Charles Alexandre Louis championed the fundamental principle that the effect of any potentially beneficial treatment, or of any toxic substance, can only be assessed by a comparison of closely similar subjects receiving and not receiving it. He used his 'numerical method' to produce statistical evidence that the then widespread practice of bloodletting was ineffective or even dangerous when contrasted with no treatment. In London, John Snow's research highlighted the idea that insightful epidemiological analyses of disease occurrence may produce enough knowledge to enable disease-prevention measures, even in ignorance of the specific agents at the microscopic level. Around the middle of the 19th century, Snow conducted brilliant investigations during cholera epidemics that led to the identification of drinking water polluted by sewage as the origin of the disease. This permitted the establishment of hygienic measures to prevent the pollution without knowing the specific noxious element the sewage was carrying. That factor, discovered some 20 years later, turned out to be a bacterium (Vibrio cholerae) excreted in the faeces by cholera patients and propagated via the sewage. In Germany, Rudolf Virchow forcefully promoted during the second part of the century the concept that medicine and public health are not only biological but also applied social sciences. Consistent with this inspiration, his studies ranged from pathology (he is acknowledged as the founder of cellular pathology) to epidemiological investigations backed by sociological enquiries. In the United States, the work of Joseph Goldberger demonstrated that epidemiology is equally well suited to identifying infectious and non-infectious agents as possible origins of a disease.
In the first three decades of the 20th century, he investigated pellagra, a serious neurological disease endemic in several areas of the Americas and Europe, reaching the conclusion that it was due not to an infectious agent, as most then believed, but to poor diet, deficient in a vitamin (later chemically identified and named vitamin PP). In the century spanning Louis to Goldberger, and in fact throughout its history up to the present day, epidemiology has received major support from advances in the contiguous field of statistics, a key ingredient of any epidemiological investigation.

Epidemiology today

Today's epidemiology developed particularly during the second half of the last century. By the end of World War II, it became apparent that in most economically advanced countries the burden of non-communicable diseases of unknown origin, such as cancer and cardiovascular disease, was becoming heavier than the load of communicable disease due to micro-organisms and largely controllable through hygiene measures, vaccinations, and treatment with antibiotics. These new circumstances provided a strong impetus for epidemiology to search for the unknown disease origins through new as well as established methods of research which soon came to be used beyond their initial scope in all areas of medicine and public health. This is reflected in the concept of epidemiology as the study of health and disease in populations: [Epidemiology is] the study of the occurrence and distribution of health-related states or events in specified populations, including the study of the determinants influencing such states, and the application of this knowledge to control the health problems.

M. Porta, A Dictionary of Epidemiology

All aspects of health when studied at the level of population are the proper domain of epidemiology, which covers not only the description of how diseases and, more generally, health-related conditions occur in the population, but also the search for the factors, as a rule multiple, at their origin. This investigative activity is sustained by scientific curiosity but is firmly directed towards an applied objective: the prevention and treatment of disease and promotion of health. A fascinating and challenging feature of epidemiology is that it explores health and disease in connection with factors which, to take heart attacks as an example, span from the level of the molecule, say blood cholesterol, to the level of society, say loss of employment. This broad perspective makes epidemiology at the same time a biomedical and a social science. Epidemiological studies include both routine applications of epidemiological methods, for example in surveillance of communicable diseases or in monitoring of hospital admissions and discharges, and research investigations designed to generate new knowledge of general relevance.

There may be overlaps and transitions between these two types of studies. When routine surveillance detects an outbreak of a previously unknown disease, like SARS, subsequent investigations produce factual knowledge that is at the same time useful for the local and practical purpose of controlling the disease and for the general scientific purpose of describing the new disease and the factors at its origin. Epidemiology fulfils the same diagnostic functions for the health of a community as a doctor's consultation does for the individual.

Within epidemiology a clear distinction must be made between observational and intervention, or experimental, studies. Experimental studies are dominant in the biomedical sciences. For example, scientists working in laboratories intervene all the time on whole animals, isolated organs, and cell cultures by administering drugs or toxic chemicals to study their effects. By contrast, within epidemiology, observational studies are by far the most common. Epidemiologists observe what happens in a group of people, record health-related events, ask questions, take measurements of the body or on blood specimens, but do not intervene actively in the lives or the environments of the subjects under study. Intervention studies, for example trials of new vaccines in the population, are an essential but smaller component of epidemiology, representing no more than one-fifth to one-tenth of all epidemiological studies in healthy populations. In populations of patients, however, trials of treatments, from drugs to surgery, are most common. All kinds of studies, whether routine or for research, observational or experimental, stand with their own particularities on the common basis of the epidemiological principles.

Five major areas within epidemiology

1. Descriptive epidemiology: describes health and disease and their trends over time in specific populations.

2. Aetiological epidemiology: searches for hazardous or beneficial factors influencing health conditions (e.g. toxic pollutants, inappropriate diet, deadly micro-organisms; beneficial diets, behavioural habits to improve fitness).

3. Evaluative epidemiology: evaluates the effects of preventive interventions; quantitatively estimates risks of specific diseases for persons exposed to hazardous factors.

4. Health services epidemiology: describes and analyses the work of health services.

5. Clinical epidemiology: describes the natural course of a disease in a patient population and evaluates the effects of diagnostic procedures and of treatments.

Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.

Constitution of the World Health Organization, 7 April 1948

One may wonder whether the founding fathers, who in the aftermath of World War II inscribed this definition of health in the World Health Organization (WHO) constitution, unconsciously had in mind happiness rather than health, although even happiness, a changing and intermittent human experience, cannot be accurately described as a heavenly and lasting state of perfect well-being. Abstract as it is, the WHO definition does have the merit of stressing the relevance of the psychological and social dimensions of health and disease, beyond the purely physiological ones (social aspects are incorporated in a recent WHO classification of disabilities, and the social determinants of health have been a central theme for the organization in recent years). Even today, for the majority of humankind the basic objective of achieving absence of disease or infirmity, as far as may be possible by current preventive and therapeutic means, remains unattainable, nor is it clear when it may be attained. Hence measuring health starting from its negative, the presence of disease, is not only technically easier but also makes practical sense.

Defining disease

For the purpose of epidemiological study, a disease can be defined either by creating a definition and regarding as cases the subjects that fit it or by accepting as cases those subjects that have been so diagnosed by a doctor. In an epidemiological survey of diabetes it may be decided that a study team directly examines a fraction of all adults in a town and regards as diabetic those people who satisfy a pre-fixed set of diagnostic criteria. Alternatively, and much more simply, one can accept as cases the people declared to be diabetic by doctors in the general practices and hospitals of the town. Actually the two approaches may to some extent overlap. When carrying out a direct survey, the study team will in fact come across some subjects already known to be diabetic and for whom further tests may be deemed unnecessary.

Accepting existing diagnoses may, however, result in data perturbed by differences in disease definition and diagnostic practice between doctors in the area, a drawback not shared by a direct full-blown epidemiological survey carried out with a fixed definition and uniform procedures. An intermediate solution between a more reliable but cumbersome survey and a simpler but less reliable face-value acceptance of existing diagnoses may consist of confirming or rejecting the diagnosis only after thoroughly reviewing the medical records in the physician and hospital files. Still, this procedure would not capture, as an epidemiological survey would, the not infrequent cases of diabetes present in the population that do not show up in the files because those affected have no symptoms. Any of these approaches to the definition of diabetes and case identification may be employed to produce figures on the frequency of diabetes in a country, a region, or a particular group of people.

To complicate matters, many disease definitions have changed and continue to change, sometimes even in a major way, with advances in biological and medical knowledge. In the case of diabetes, the threshold levels for sugar in the blood that define diabetes were modified ten years ago, taking into account the results of several studies showing that levels previously regarded as 'normal' and safe were in fact associated with an increased frequency of complications. For heart attacks, the different types of myocardial infarction are currently being redefined, including among the criteria the detection in the blood of some hitherto unmeasurable proteins released by the damaged heart cells. Epidemiology itself may contribute to defining or redefining diseases. Rather than committing himself to a disease definition, the epidemiologist may simply measure individual symptoms such as insomnia, headache, fatigue, tremors, or nausea to see whether they occur jointly, forming a 'syndrome' (a cluster of symptoms) in individuals with special personal traits or in particular settings. This paves the way to the investigation of the physiological mechanisms, the external circumstances, and the progress in time of the syndrome; once these elements become clear, the definition of a new disease or a special form of a disease already known may be consolidated.

(Figure 2: Epidemiology in the news. A collage of headlines on the 2014 Ebola virus outbreak in West Africa (Guinea, Liberia, Sierra Leone, and Nigeria), including reports on the likely 'patient zero' in Guinea and maps of the outbreak's spread, together with the WHO title 'Diabetes' rendered in several languages.)

Defining and diagnosing diabetes

Diabetes 'mellitus' (honey-sweet) takes its name from the loss of sugar that makes urine sweet. It occurs in two forms. Type 1, or juvenile, diabetes starts most commonly before the age of 30, while type 2, or adult, diabetes mostly begins after the age of 30. Both types, which differ in their underlying lesions and response to treatment, have in common an impairment of the metabolism of sugars leading to abnormal levels of glucose in the blood.

Among the consequences are the passage of glucose in the urine and a high level of glucose in the body tissues, inducing a series of complications in the cardiovascular and nervous systems. It is the progression of these complications that makes diabetes a potentially very serious disease.

The diagnosis of diabetes may be suspected from the presence of symptoms such as excessive urination and thirst, recurrent infections, and unexplained weight loss (particularly in type 1; in type 2, weight loss from a previously overweight state). It becomes established if the level of glucose in plasma (the liquid fraction of blood) in a person fasting for 8 to 12 hours equals or exceeds 126 milligrams per deciliter. Many diabetes cases, particularly of type 2, present no symptoms for several years and are recognized only through a routine blood test done for other reasons: in this situation, the diagnosis is regarded as established only if a repeated blood test in a fasting condition confirms the result of the first.

Within an epidemiological survey carried out for research purposes, the best single diagnostic tool is to have the fasting subjects drink a concentrated solution of 75 grams of glucose and measure the glucose plasma level after 2 hours. A level of 200 milligrams per deciliter or above is diagnostic for diabetes, while values between 140 and 199 milligrams indicate an impaired regulation of glucose, a condition that increases the likelihood of developing diabetes.
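The diagnostic thresholds described in this box can be captured in a small helper, sketched below in Python. The function names are illustrative, and a real diagnosis also requires the repeat-test rule for asymptomatic cases described above; this is only a summary of the numeric criteria in the text.

```python
def fasting_meets_criterion(fasting_mg_dl):
    """True if a fasting (8-12 hour) plasma glucose meets the diabetes
    criterion of 126 milligrams per deciliter or more."""
    return fasting_mg_dl >= 126

def classify_ogtt(two_hour_mg_dl):
    """Classify the 2-hour plasma glucose after a 75 g oral glucose load:
    >= 200 mg/dL is diagnostic for diabetes, 140-199 mg/dL indicates
    impaired glucose regulation, and values below 140 are unremarkable."""
    if two_hour_mg_dl >= 200:
        return "diabetes"
    if two_hour_mg_dl >= 140:
        return "impaired glucose regulation"
    return "normal"
```

Writing the criteria out this way makes explicit that the two tests use different thresholds, a point that is easy to lose in prose.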

Reference disease definitions are found in medical textbooks, and collections of definitions have been developed, the best known and most widely used being the International Classification of Diseases and Related Health Problems (abbreviated as ICD) of the World Health Organization. As the name indicates, the ICD is not a mere collection of disease definitions but a 'nosological' (from the Greek nosos, disease) system of ordering and grouping diseases. The grouping scheme has evolved out of that proposed in the early phases of international discussions on disease classification some 150 years ago. It reflects the same compromise between two main criteria of classification, one based on the site of the disease in the body and one based on the nature and origin of the disease. It covers five broad areas: communicable diseases of infectious origin; constitutional or general diseases (blood diseases, metabolic diseases like diabetes, cancers, mental disorders); diseases of specific organs or systems (cardiovascular, digestive, etc.); diseases related to pregnancy, birth, and development; and diseases arising from injuries and poisons.

The first edition of the ICD was adopted in 1900 during an international conference in Paris at which 26 countries were represented. Revisions took place at 10-yearly intervals, and in 1948 the newly established World Health Organization took charge of the 6th revision and became responsible for all subsequent developments. Currently the 10th revision (ICD-10), which has been updated annually since 1996 rather than being completely revised, is in use.

ICD-10 is organized into 22 categories of diseases, each disease being denoted by a three-character code composed of one letter and two digits. The different types of diabetes mellitus have codes E10 to E14 and are included in category IV, 'Endocrine, nutritional, and metabolic diseases'. Myocardial infarction is coded I21, within category IX, 'Diseases of the circulatory system'; a fourth digit, to be used optionally, makes it possible to specify the part of the heart wall affected by the infarction. The three-character code is used in all countries that keep some system of health statistics to code the disease regarded as the cause of death. In many countries, it is also used in its standard form or with extensions and modifications for coding diagnoses in hospital discharge or other health services' records. Tables allowing conversion of codes between different versions of the ICD have been developed. Death is an unequivocal event, and mortality statistics are an established yardstick for the description of the health conditions of a population.

Causes of death, as recorded in death certificates that use ICD-10, are subject to the problems of diagnosis already mentioned and require translating a doctor's diagnosis into ICD codes. The internationally adopted death certificate provides a simple standard format to facilitate the task of the doctor in identifying the 'underlying' cause of death among the several ailments that may affect a patient. A full set of rules, today often in computerized form, is then available to the coders who have to convert the death certificate information into ICD codes. Notwithstanding these procedures, the accuracy of coded causes of death is still imperfect and variable even in developed countries. In developing countries, where more than three quarters of the world population live and die, reaching a minimally acceptable level of accuracy in the absence of adequate medical services may demand a 'verbal autopsy', i.e. a systematic retrospective enquiry of family members about the symptoms of illness prior to death. These limitations need to be kept in mind when dealing with mortality statistics, and more generally with statistics based on disease diagnoses. Yet as the British medical statistician Major Greenwood remarked: 'The scientific purist, who will wait for medical statistics until they are nosologically exact, is no wiser than Horace's rustic waiting for the river to flow away.'

Measuring disease

Three elements are always needed to measure the occurrence of a disease in a population or in a group within the population: the number of cases of the disease, the number of people in the population, and an indication of time. The finding that in the adult (age 15 and over) male population of Flower City 5,875 cases of type 2 diabetes mellitus have been observed has a very different significance in a male population of 10,000 than in one of 100,000 or 1,000,000 men. To make this relation explicit, a first measure of occurrence can be computed, the 'prevalence proportion', or simply 'prevalence':

Prevalence proportion = Number of diseased persons/Number of persons in the population

If the number of persons (men) in the population is actually 45,193, the prevalence proportion is: 5,875 / 45,193 = 0.13. In percentage form, 13% of all adult males in Flower City are diabetic. Now, or some time in the past? If the count was made in 1908 or 1921, it may be only of historical interest; if in a recent year, it is of current interest and carries practical implications. The point in time at which the census of the cases and of the population was taken needs to be specified, say 1 January 2014. This completes our measure and makes it unambiguous as an instant picture of diabetes in the Flower City male population. The information is useful, for instance, for the planning of health services. In general, prevalence figures for all health conditions and for the different sections of the population, males and females, young or old, are required to plan an adequate provision of health services for diagnosis and treatment. It takes just a little reflection to realize that the prevalence of diabetes in fact reflects the balance between two opposite processes: the appearance of new cases and the disappearance of existing cases when they die (or when they are completely cured, which happens for some diseases, such as pneumonia, but not for established diabetes). Both processes develop in time, hence time should now be taken not as a simple indication of the point at which the measurement was made (as for prevalence) but as time intervals within which new cases and deaths occur.
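The prevalence computation above amounts to a single division; a minimal Python sketch (the function name and the rounding are illustrative choices, not from the text):

```python
def prevalence(cases, population):
    """Prevalence proportion: number of diseased persons divided by the
    number of persons in the population at one point in time."""
    return cases / population

# Adult male population of Flower City, 1 January 2014 (figures from the text)
p = prevalence(5_875, 45_193)
print(round(p, 2))  # 0.13, i.e. 13%
```

The point-in-time qualifier belongs in the data, not the formula: the same two numbers counted in 1921 would yield the same proportion with a very different meaning.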

Before a soccer match, the referee tosses a coin to assign 'at random' a side to each team in the playing field. The procedure is fair to the two teams on the assumption that a perfect coin will not fall preferentially on heads or tails. The results of an experiment in which 10,000 tosses were recorded show that this is a tenable assumption. As portrayed in the figure, the proportion of heads varies widely when the number of tosses is small, but the variation decreases and the proportion becomes gradually more stable as the number of tosses increases. The coin does not compensate in some mysterious way for any series of consecutive heads with an equal number of tails: simply, any imbalance between heads and tails is diluted as the number of tosses increases, and the value of the proportion tends to stabilize more and more closely around the value 0.50 (or, if you prefer, 50%). This value, hypothetical because in principle the series of tosses should carry on indefinitely, can be taken as the probability of a head. In general, the probability of an event is the proportion of occasions on which the event occurs in an indefinitely long series of occasions. Being a proportion, it ranges between zero and one, or, in percentages, from 0 to 100%.
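The stabilization of the proportion of heads is easy to reproduce with a short simulation. A minimal sketch in Python; the seed and the number of tosses are arbitrary illustrative choices:

```python
import random

def running_proportion_heads(n_tosses, seed=42):
    """Simulate tosses of a fair coin and return the proportion of heads
    observed after each toss."""
    rng = random.Random(seed)  # fixed seed for a reproducible illustration
    heads = 0
    proportions = []
    for i in range(1, n_tosses + 1):
        heads += rng.random() < 0.5  # True counts as one head
        proportions.append(heads / i)
    return proportions

props = running_proportion_heads(10_000)
# Early values swing widely; by 10,000 tosses the proportion has settled near 0.50
print(props[9], props[99], props[-1])
```

Note that the code applies no correction for runs of heads: the drift toward 0.50 is pure dilution, exactly as the text describes.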

The notion of risk relates probability to time. 'Risk is the probability of an event in a specified interval of time', for instance of breaking a leg within the next five years. Risk should not be confused, as often happens in common parlance, with a risk factor, also called hazard, entailing the risk of some harmful effect: for fractures of the leg bones, skiing is a risk factor. As here defined, risk is simply the probability of any effect, harmful or beneficial, for example recovery from a disease.

Disease risk and disease incidence rates

The risk of a disease is the probability that a person becomes diseased during the time of observation:

Risk = Number of persons who become diseased during a time period / Number of persons at the beginning of the time period

If in the male population of Flower City (45,193 persons), 226 new cases of diabetes were diagnosed in the period between 1 January and 31 December 2013, the risk is 226/45,193 = 0.005, or 5.0 per 1,000 persons. This simple measure provides an estimate of the risk for a male living in Flower City of becoming diabetic if the population of the town is 'closed', with no individuals entering or leaving for whatever reason. Clearly a real natural population is never closed: people die, move out, and come in. Even in an artificially formed population, such as a group of people identified for long-term follow-up and study of health, with no further entries permitted into the group, there will be deaths and some people will inevitably become untraceable. In short, risk is too crude a measure in most circumstances, except when the time interval of observation is so short - not a year but a week or a day - that entries into and exits from the population are minimal and can be ignored.
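The risk formula translates directly into code; a sketch (function name ours) with the Flower City numbers:

```python
def risk(new_cases: int, population_at_start: int) -> float:
    """Risk: probability of becoming diseased during the period, assuming a closed population."""
    return new_cases / population_at_start

r = risk(226, 45_193)
print(f"{r * 1_000:.1f} per 1,000")  # -> 5.0 per 1,000
```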

These shortcomings are not shared by a related measure of disease occurrence, the incidence rate, which is of general use but requires a more subtle formulation and the availability of more detailed data than just the number of people present at the beginning of the time of observation and the number of subjects who have developed diabetes by the end of that time. The incidence rate can be regarded as the probability of developing the disease in a time interval so tiny, just an instant, that no two events (death, arrival of an immigrant, new case of a disease, etc.) can take place within it. It is an instantaneous rate of occurrence, called instantaneous death rate when the event is death (the expressive term force of mortality is also used) and instantaneous morbidity rate when the event is the occurrence of a new case of a disease. The incidence rate can be derived as:

Incidence rate = Number of persons who become diseased while observed / Sum of individual observation times of all persons

The 'observation period' of a person is the length of time from the start of observation to the moment he/she develops the disease, dies, or is no longer under observation because he/she is lost from sight or the study has come to an end. The individual times of observation are summed and form the denominator of the rate. If, while the population of Flower City was observed during the 365 days between 1 January and 31 December 2013, a subject migrated out on 31 March, after 90 days, he/she should be counted not as one person but as 90/365 = 0.25 person-years; if somebody died on 29 August, he/she should be counted for 241/365 = 0.66 person-years. The measurement unit person-year captures the key concept that each person should count not as one but as an amount equal to the time he or she has actually been exposed to the risk of developing the disease. For the male population of Flower City, the incidence rate of diabetes, properly calculated in this way, turned out to be 5.2 per 1,000 person-years, or - in a less accurate but often-used expression - 5.2 per 1,000 per year. In plain words, some 5 men out of 1,000 become diabetic every year. Time is usually, and arbitrarily, specified in years, but weeks or even days may be more convenient when dealing with acute outbreaks of diseases such as influenza or SARS. In these instances, the measurement unit becomes the person-week or person-day.
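Summing person-time into the denominator can be sketched as follows; the five observation times are hypothetical fractions of a year for illustration, not data from the text:

```python
def incidence_rate(new_cases: int, person_times: list[float]) -> float:
    """Incidence rate: new cases divided by the sum of individual observation times."""
    return new_cases / sum(person_times)

# Five people followed during one year: 1.0 for a full year of observation;
# 0.25 for an emigrant observed 90 days (90/365), 0.66 for a death on day 241 (241/365).
times = [1.0, 1.0, 1.0, 0.25, 0.66]
rate = incidence_rate(1, times)  # one new case among them
print(f"{rate * 1_000:.0f} per 1,000 person-years")
```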

Intuitively, the incidence rate must have a positive relation to the prevalence proportion. More new cases of diabetes feed a higher prevalence of diabetes in the population if the average duration of the disease, which depends on how soon death intervenes after the disease onset, does not change in time.

For the Flower City male population, in stable conditions, it is sufficient to multiply the incidence rate of 5.2 per 1,000 person-years by an average duration of diabetes of roughly 25 years to obtain the prevalence of 13% described before.
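The steady-state relation used here, prevalence ≈ incidence rate × average duration, can be checked directly:

```python
incidence_per_person_year = 5.2 / 1_000  # diabetes incidence, Flower City males
mean_duration_years = 25                 # rough average duration of diabetes
prevalence = incidence_per_person_year * mean_duration_years
print(f"{prevalence:.2f}")  # -> 0.13, the 13% prevalence found earlier
```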

A rate, in epidemiology as in all sciences, is a measure incorporating time as the reference. A rate of interest is how much you gain per year on a capital; a rate of progression in space, or velocity (speed), is how much distance you cover in one minute or one hour. The term 'rate' should be confined to this use and not extended generically to other types of ratios. If you hear of a 'prevalence rate', it is a wrong expression simply meaning the prevalence proportion. Crude rates of incidence or mortality refer to a whole population, while specific rates, e.g. age-specific or sex-specific rates, refer to population subgroups defined by age or by sex.

The arithmetic of incidence rate

The incidence rate is a fundamental measure in epidemiology and demography. When computed for a healthy population, it expresses the probability of new cases of a disease per unit of time. When referring to a population of diseased persons, it expresses the probability of dying, or of recovering, per unit of time.

Imagine that a mini-population of 10 people has been observed in a medical practice for a winter trimester, i.e. the 13 weeks following 1 January 2014. Two new cases of influenza were diagnosed: Blondine at week 6 and Frank at week 10, giving a risk of influenza of 2/10 = 0.2, or 20%, in the trimester. Andrew, a foreign traveller, came under observation in week 2 and left in week 4; George moved out for his job in week 9; and Ian unfortunately died in an accident in week 11. For simplicity, all events (arrival, departure, death, diagnosis of influenza) are regarded as occurring at the mid-point of a week. Andrew was observed for only 2 weeks, hence his time at risk of developing influenza is 2 weeks; Blondine developed influenza during the 6th week, hence her time at risk is 5.5 weeks, because her subsequent time of observation until week 13 no longer carries a risk of influenza from the same strain of virus.

The incidence rate can now be computed as 2 / (2 + 5.5 + 8.5 + 9.5 + 10.5 + 13 + 13 + 13 + 13 + 13) = 2/101 = 0.0198 per person-week, or 1.98 per 100 person-weeks. Nearly 2 persons out of every 100 falling ill in the short interval of one week is a high rate.

Naively, one might conclude that in 52 weeks (a year) 104 persons (2 × 52) out of every 100 would fall ill - a plainly absurd result! In fact, a correct calculation, which involves more than simple arithmetic, shows that if this rate had continued for the whole year (which by good fortune does not usually happen with seasonal influenza), only 3 or perhaps 4 of the 10 persons in our mini-population would have escaped it.
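The mini-population arithmetic, and the correct year-long extrapolation, can be reproduced as follows; the exponential step assumes the rate stays constant, which is the standard way to convert a constant rate into a risk:

```python
import math

# Person-weeks at risk over the 13-week trimester (events at mid-week, as in the text):
# Andrew 2, Blondine 5.5, George 8.5, Frank 9.5, Ian 10.5; the remaining five people 13 each.
person_weeks = [2, 5.5, 8.5, 9.5, 10.5] + [13] * 5
rate = 2 / sum(person_weeks)  # two cases of influenza
print(round(rate, 4))  # -> 0.0198 per person-week

# If this rate persisted for 52 weeks, rates compound rather than add:
# the expected fraction escaping influenza is exp(-rate * 52), not 1 - 104/100.
escaping = 10 * math.exp(-rate * 52)
print(round(escaping, 1))  # -> 3.6 of the 10 people would be expected to escape
```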

Disease causes and exposures

We use the word 'cause' frequently in everyday parlance. The concept seems intuitively simple, yet it proves logically problematic and has been the subject of continuous debate ever since Greek philosophers began to define it, Aristotle foremost among them in the 4th century BC. Causes of disease do not escape this difficulty. When a few hours after a club dinner several members fall sick with gastroenteritis, what is the cause? The dinner, without which the intestinal trouble would not have occurred? The 'tiramisu' dessert, as only those who ate it fell sick? The bacterium Staphylococcus aureus which, as a subsequent laboratory investigation showed, had found its way into the 'tiramisu' through a lapse in hygiene in the kitchen? The toxin produced by the bacterium that attacks the cells of the intestinal lining? The biologically active part of the toxin molecule that binds to some molecules of the cell membrane?

It could be tempting to take the last of these as the ultimate, hence 'real', cause, but our understanding of the world would fast dissolve if only relationships between molecules could qualify as causal. For instance, it would be impossible to describe and analyse the circulation of the blood in terms of individual molecules. It is only when molecules join to form higher-order, complex structures such as blood cells, arteries, veins, and the heart muscle that new properties emerge permitting explanation of the working of the circulatory system. In fact, in our club dinner example each factor, from the dinner itself to the active part of the toxin molecule, can be regarded legitimately, at a different level of observation and detail, as a cause. Without any one of them there would have been no gastroenteritis. In general, we can consider as a cause a factor without which an effect, adverse such as a disease or favourable like protection against it, would not have happened.

Most of the epidemiologist's investigative work consists in trying to identify the 'factors without which' a disease would or would not arise. In terms of actual disease measurements, this means identifying factors of any nature - social, biological, chemical, physical - whose presence can be shown to be constantly associated with an increase or a decrease in a disease incidence rate or risk. There are scores of candidates for this role, from stress at work to inherited genes, from fatty foods to physical exercise, from drugs to air pollutants. They can all be designated with the generic label of factor or (in epidemiological jargon) exposure, neutral enough not to prejudge whether the candidate will in the end come out as a cause of disease or not.


Comparing rates and risks while minimizing biases

The usually long research journey to show that a factor is a cause of disease starts by comparing incidence rates or risks between different groups of people. Noting that the rate of occurrence of type 2 diabetes is higher in a group of overweight people observed for several years than in people of normal weight suggests that excessive weight may be among the determinants of diabetes. Before this suggestion can be transformed into conclusive evidence, two conditions need to be fulfilled: (1) demonstrating that there is an association between the exposure, overweight, and diabetes incidence; and (2) establishing that the association is causal rather than a product of chance, bias, or confounding.
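Comparing incidence rates between groups usually starts from a rate ratio; a minimal sketch with hypothetical numbers (not taken from the text):

```python
def rate_ratio(rate_exposed: float, rate_unexposed: float) -> float:
    """How many times higher the incidence rate is among the exposed."""
    return rate_exposed / rate_unexposed

# Hypothetical diabetes rates per 1,000 person-years, overweight vs normal-weight groups
print(rate_ratio(10.4, 5.2))  # -> 2.0: twice the rate among the overweight
```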


Negative studies

Not finding an association when one was expected is the other side of the coin to finding an association between an exposure and a disease and going through the punctilious process of establishing whether it is causal. Particularly when the expectation was based on repeated results of previous epidemiological studies or on sound results from laboratory studies, a systematic scrutiny is needed of the reasons why no association turned up. Several reasons fall under the heading of 'insufficient': an insufficient number of subjects in the study, leading to what is called low 'power' to detect an increase in risk associated with the exposure; insufficient intensity or duration of exposure to induce an observable increase in risk, as may happen with pollutants recently introduced into the environment at low concentrations; an insufficient period of observation, as effects like cancer usually appear many years after the onset of exposure; and, finally, insufficient variation in exposure between the groups of people being compared, making it difficult to detect differences in risk between them. In addition, confounders and sources of bias, for example losses of records not occurring at random, can not only create spurious associations but also operate in the opposite direction, masking existing associations. The process of understanding why an expected association did not show up is no less lengthy, laborious, and complex than the process of establishing and judging the nature of an association.
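The first of these 'insufficients', low statistical power, can be made concrete with the usual normal approximation for comparing two proportions; the risks and group sizes below are illustrative assumptions, not figures from the text:

```python
import math

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_proportions(p0: float, p1: float, n: int, z_alpha: float = 1.96) -> float:
    """Approximate power to detect a true risk p1 vs p0 with n subjects per group,
    two-sided test at the 5% level (normal approximation)."""
    p_bar = (p0 + p1) / 2
    se_null = math.sqrt(2 * p_bar * (1 - p_bar) / n)
    se_alt = math.sqrt(p0 * (1 - p0) / n + p1 * (1 - p1) / n)
    return phi((abs(p1 - p0) - z_alpha * se_null) / se_alt)

# A doubling of a rare risk (0.5% -> 1.0%) is easily missed with 1,000 subjects per group...
print(round(power_two_proportions(0.005, 0.010, 1_000), 2))   # about 0.25
# ...but is almost certain to be detected with 10,000 per group.
print(round(power_two_proportions(0.005, 0.010, 10_000), 2))  # about 0.98
```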

In a nutshell, the basic principle is to distinguish clearly between 'no evidence of effect', which may often occur because no proper study has been done (whatever the reasons), and 'evidence of no effect' as emerging from adequate studies. The latter is reassuring, while the former is simply uninformative. Thus studies in small communities, often carried out to assuage legitimate concerns about risks from environmental factors, may be capable of excluding large excess risks but incapable of providing information about possible smaller excesses. A related principle is that the finding of a weak effect, for example a minor increase in asthma risk in people exposed to a chemical, cannot be taken as an indication that the chemical is in itself a weak, hence nearly innocuous, agent, because the observable effect depends also on the study characteristics and on the dose of the chemical. Again, 'evidence of a weak effect' cannot be taken automatically as 'evidence of a weak toxic agent'.

Necessary and sufficient causes

'The key point is that even if smoking were to be causally related to any disease it is neither a necessary nor a sufficient cause.' This statement, pronounced in court by an expert 30 years after the publication of the 'Smoking and Health' report that established tobacco smoking as a cause of several diseases, is at the same time correct and misleading. Correct because, for instance, not all lung cancers occur in smokers (although the great majority do), nor do all smokers develop lung cancer. Misleading because it suggests that the only 'real' causes of diseases are those that are indispensable to produce all cases, or sufficient to trigger the disease every time they are present, or both. Causes that are necessary and sufficient, or just sufficient, are in fact uncommon in nature, being essentially represented by those inherited genes that constantly produce a genetic disease, such as the bleeding anomaly of haemophilia. Necessary causes are common in the field of infectious diseases, where only the presence of a specific micro-organism defines the disease (e.g. whatever the symptoms, there is no case of tuberculosis without the tuberculosis mycobacterium). Outside these domains, the great majority of causes of disease are, like tobacco smoking, neither necessary nor sufficient; yet they increase, often substantially, the probability (risk) of the disease, and removing or neutralizing them is highly beneficial. As an example, overweight and obesity, discussed before, are causes of diabetes but are neither necessary nor sufficient. In epidemiology, the general term determinant is used as a synonym for cause without prejudging the detailed nature of the cause: whether necessary or sufficient, and whether broad, like a type of diet or an occupation, or narrow, like a specific vitamin within the diet or a specific chemical pollutant within an occupational setting.

Individual and population determinants of disease

Recognizing that a factor like tobacco can cause lung cancer hinges on two conditions. First, and rather obviously, on tobacco being actually capable of producing the disease. Second, and less obviously, on how much smoking habits vary within the population studied by the epidemiologist. If everybody smoked exactly 20 cigarettes per day from the ages of 15 to 45, there would be no difference in risk due to tobacco. Tobacco, although the dominant determinant of lung cancer in a population in which everybody smokes, would go completely unrecognized as a cause of the disease. Other factors, such as individual susceptibility, would be the only recognizable determinants by which people in a population with a high and uniform risk due to smoking stand out as being at an even higher risk. The conclusion would be reached that lung cancer is due to individual susceptibility, itself mostly dependent on the genes inherited from the parents; hence lung cancer would come to be regarded as an essentially genetic disease.

This fictional example was proposed in 1985 by Geoffrey Rose in an insightful article ('Sick individuals and sick populations') to clarify the distinction between individual and population determinants of disease.

Population determinants, like the uniform smoking habit of the example, are responsible for the overall disease risk in a population, while individual determinants, like the individual susceptibilities, are responsible for the different risks between individuals or groups of individuals within the population. Studies comparing disease risk in groups within a population are suitable to identify the latter, but recognizing determinants that act essentially at the level of the whole population requires analysing and comparing risks between populations of different regions or countries or in the same populations at distinct times. No new principles need to be introduced to these analyses, but they may prove even more complex than those already outlined for studying associations and judging causality within a population.

Because of their general impact, population determinants are of major importance for health. A striking current case that follows the lines of the imaginary tobacco example is overweight and especially its highest degree, obesity. Excess weight basically originates from an imbalance between too many food calories ingested and too little expenditure of those calories through physical activity. Here excessive calorie consumption is the relevant exposure; if everybody, or the great majority of the population, is exposed nearly uniformly to some (not necessarily large) excess of calories from childhood, as tends to occur today in many high-income countries, the risk of obesity will be similarly high for everybody in the population. However, as in the smoking example, some people will be at an even higher risk than the average due to individual susceptibility, which becomes the main recognizable determinant of obesity. This is what is happening today as 'obesity genes' related to individual susceptibility are discovered one after the other. They resonate in the media, and even in the scientific press, as the finally discovered (but for how long?) 'real' causes of obesity. Focusing on genes is scientifically challenging, but it leads us astray if it ignores the main determinant of the overall population risk of obesity, i.e. the widespread excessive intake of calories.

Other population determinants have an evident relevance. Polluted waters are a prime scourge in many low-resource countries, causing almost two million deaths worldwide every year. Air pollution affects the health of town dwellers in high- as in middle- and low-income countries. Less evidently, vaccination for a number of diseases such as measles or polio is essentially a population rather than an individual determinant of health and disease.

Vaccination is certainly beneficial to the individual, but for most people the risk of the disease may already be low even without vaccination. Vaccination, however, creates a 'herd immunity', or group immunity, whereby the chain of transmission of an infectious disease like measles is interrupted, bringing the risk for the totality of the population down to almost, or exactly, nil. Other factors, such as being employed or unemployed, or having a low or high level of education, have been known for quite some time to influence directly the health of individuals. More recent studies, however, show that the level of employment or of schooling in a society also acts indirectly as a population determinant of health. Different types of interventions, targeted on single individuals or collectively on the material or social environment, are required to control individual and population determinants of disease.

Tobacco and health

Research on tobacco and health has been a key stimulus for the development of the methods of modern epidemiology, including the principles for identifying causes of disease. Survival studies of smokers versus non-smokers capture two moments, nearly half a century apart, in the unfolding of this research.

There is a striking resemblance between the survival curves, which summarize how long smokers and non-smokers live, but they reflect fundamentally different states of knowledge. In 1938, not much was known epidemiologically, and the US male curve prompted research to find out why the survival of smokers appeared to be so much poorer. The curves from the cohort of British doctors instead clearly show the result of all the diseases that in the meantime had been shown by epidemiological studies to be consequences of smoking.

The year 1950 marks a turning point in this research. Three important scientific papers were published in 1950: by Richard Doll and Austin Bradford Hill in the United Kingdom, by Ernest Wynder and Evarts Graham in the United States, and by Morton Levin, Hyman Goldstein, and Paul Gerhardt also in the United States. They are the first rigorous analytical studies of a disease, lung cancer, in relation to tobacco smoking. All three found a much higher frequency of smokers among lung cancer cases than among control subjects.

These findings prompted a rapidly increasing number of epidemiological studies on the relation between smoking and cancers of the lung, other respiratory organs, and other diseases such as chronic bronchitis and myocardial infarction. Soon after their 1950 paper on lung cancer, Doll and Hill, in 1951, enrolled a cohort of some 40,000 British doctors to be followed for several decades recording mortality from different diseases. The choice of doctors had the advantage of involving a population that was rather homogeneous in socio-economic status and not exposed to other airborne toxic agents, unlike workers in many industries. The study stands as a cornerstone in epidemiology and similar follow-up studies on smoking were developed in other populations.

In addition, smoking became a factor to be measured in almost every epidemiological investigation because it could often be a confounder of the effects of other factors. As a result, a mass of information has been accumulating and continues to accumulate to the present day. A 2002 panel of the International Agency for Research on Cancer listed more than 70 main long-term follow-up studies that have investigated smoking-related diseases.

As the survival curve in (a) shows, the point in time at which half (i.e. 50% on the vertical axis) of all subjects alive at the age of 40 are still alive is reached 7.5 years earlier for current smokers than for lifetime non-smokers. In other words, smokers lose on average 7 years of life with respect to never smokers, and those who die between the ages of 35 and 69 lose an average of 22 years. In high-income countries, about one-fifth of all deaths are caused by tobacco smoking, which is responsible worldwide for more than five million deaths every year - more than half of them in middle- and low-income countries. By far the largest proportion of the 1.3 billion smokers in the world today is in these countries, and because the full-blown effects of smoking become manifest after two to three decades of continuing smoking, a huge increase in tobacco-related deaths is to be expected in the future in middle- and low-income countries unless smoking cessation is successfully implemented.

Some projections forecast that in 2020 there will be 9 million deaths due to tobacco, three-quarters of them in middle- and low-income countries. A range of diseases contributes to this toll: cancers of at least 13 organs (including the lung, nasal passages, larynx, pharynx, mouth, oesophagus, bladder, and pancreas), cardiovascular diseases, including heart attacks and stroke, and chronic obstructive pulmonary disease ('chronic bronchitis'). These results are not surprising, because tobacco smoke contains almost 5,000 identified chemicals, many of them toxic and more than 50 of them definitely carcinogenic.

Many of these substances are also found in the second-hand smoke that goes into the environment around smokers to which other people, including non-smokers, are passively exposed. Passive smoking is associated with an increase in risk of several diseases in non-smokers ranging from respiratory ailments in children to myocardial infarction. For lung cancer, more than 50 studies support a causal role of passive smoking involving an increase in risk of about 25%.

Tobacco smoking has emerged as the greatest killer in peacetime. All forms of tobacco use, including pipe smoking, chewing, and sniffing, are noxious, albeit with less pervasive and less marked effects than cigarette smoking.

This gloomy picture is brightened by the fact that preventive efforts involving a combination of measures (educational campaigns, increased taxation on tobacco products, and prohibition of smoking in public places) have translated into reductions in tobacco use in several high-income countries. Young people have been less prone to take up the smoking habit, and appreciable numbers of people have stopped smoking. Duration of exposure to tobacco smoke is crucial for risk; hence the later one starts to smoke (short of not starting at all) and the sooner one manages to stop, the better the health prospect. Stopping at any age is beneficial: in people who started to smoke in early youth, stopping at age 30 reduces the risk of developing lung cancer before age 75 to nearly that of non-smokers, and even stopping at age 50 reduces the risk to one-third of that of continuing smokers.


Observational and experimental studies

The principles of epidemiology hitherto outlined apply to all studies, although the examples and the discussion have focused on observational studies, in which the investigator intervenes only by observing people and recording information at a point in time or during a time period. The great advantage of this type of study is that it can in principle be carried out in all contexts to investigate any health phenomenon. The disadvantage is that all comparisons between rates and risks in different groups of people, for example rates of chronic bronchitis among those exposed to air pollutants in a city and those not exposed, can always be influenced by unknown factors other than the pollutants. The elaborate procedures outlined in the previous chapters are necessary in these observational situations to reach conclusions about the possible causal role of an exposure such as air pollution on a disease such as chronic bronchitis.

Life would be simpler if the epidemiologist could choose, as in a laboratory experiment, which subjects would be assigned polluted and unpolluted air to breathe for several years, making sure in advance that all subjects in the experiment were closely similar in all respects except for the 'treatment', i.e. exposure to the different types of air. The simplest and safest device to achieve this similarity would consist in assigning the subjects perfectly at random, by tossing a coin, to breathing polluted or unpolluted air.

The random assignment would act as insurance against all known and, crucially, unknown factors that could make the two groups of people different. Clearly this randomized experiment, or randomized controlled trial (RCT), is not feasible here, for both ethical and technical reasons. Hence the lesser scope of randomized experimental studies (also generically, and less accurately, called 'intervention studies') with respect to observational studies, notably when an agent, like polluted air, is investigated because of possible adverse effects on health. The elective place of the randomized experiment is in investigating agents which may have beneficial health effects. Promising new drugs are continuously tested on populations of patients affected by a great range of diseases, from all kinds of cancer to heart diseases like myocardial infarction and angina pectoris to rheumatic diseases. In addition to these trials of treatments, large randomized experiments are carried out in healthy populations to test preventive interventions. Screening programmes for early diagnosis and treatment of serious conditions such as breast or colon cancer are tested in population randomized experiments, and new vaccines - for example, against AIDS - are commonly tested using randomized trials in large populations.

The first vaccine against poliomyelitis

Until the middle of the last century, poliomyelitis, or infantile paralysis, was an infectious disease of the nervous system occurring especially in summer in epidemic waves, irregular in intensity. It affected particularly children and young people, who might experience only a transient fever or instead suffer lifelong flaccid paralysis of the limbs, or die if the nervous centres controlling respiration were attacked by the virus causing the disease.

Three types of the virus had been identified. Some small-scale but unsuccessful attempts with vaccines had been made when, in the early 1950s, a very promising vaccine to be administered by intramuscular injection was developed by Dr Jonas Salk at the University of Pittsburgh. It consisted of a 'killed' virus that had lost its ability to produce the disease while retaining the power to stimulate a protective immunity in the body of the vaccinated subjects.

Before recommending the vaccine for mass administration, sound evidence of its actual efficacy was needed. Simply starting to administer the vaccine and seeing whether the incidence rate declined was not an option, as the disease frequency varied too much from one year to the next: it would have been impossible to distinguish a decrease due to the vaccine from spontaneous variation. An additional problem was the difficulty of correctly diagnosing as poliomyelitis, rather than for example 'flu', the numerous minor cases which were the main source of the spread by direct contact between people. The decision was then taken to implement a true prevention experiment on primary school children.

Children aged 6 to 9 whose parents had agreed to take part in the study were assigned at random to receive the vaccine or a no-vaccine treatment. They were recruited in 84 counties of 11 states across the whole territory of the United States. The number of participants had to be very large, so that even if the vaccine gave only 50% protection, a difference in the incidence rate between the vaccinated and the unvaccinated group would be detectable with a high degree of confidence. One dose of the vaccine against all three types of the virus was administered at the start of the study, a second dose one week later, and a third after five weeks. The no-vaccine treatment consisted of three injections of an inactive preparation strictly similar in appearance to the vaccine, i.e. a placebo. When feasible, placebo treatments are the best form of comparison for an active medication, as it is well established that simply receiving an inactive medication may produce an effect. One may wonder, however, whether ethics committees would today approve injecting children three times with an inactive preparation. The trial was 'double blind': neither the children (and their families) nor their physicians knew which treatment was being administered. In this way, the physicians could not be influenced by knowledge of the treatment when, confronted with a suspect case, they had to decide on a diagnosis of poliomyelitis.

Close to 400,000 children whose parents had given consent were entered into the trial (200,745 vaccinated and 201,229 unvaccinated), out of a total of about 750,000 in the areas where the trial took place. In the six months after vaccination, 82 cases of poliomyelitis were observed among the vaccinated children, a risk of 41 per 100,000, while almost double that number, 162 cases, were observed among the unvaccinated, a risk of 81 per 100,000. The difference between the two risks, 81 - 41 = 40 per 100,000, is larger than would be expected by chance if there were no real difference. An even stronger decrease was noted for the more serious (paralytic) form of the disease, with risks of 16 per 100,000 in the vaccinated group and 57 per 100,000 in the non-vaccinated. The conclusion here is much more straightforward than in the case of an observational study. Because the children were assigned at random to vaccination and no-vaccination, the two groups should be closely similar in all respects except for the difference in treatment, which can be confidently regarded as the cause of the decreased rate among the vaccinated children. The trial demonstrated the effectiveness of the vaccine and initiated programmes of systematic vaccination all over the world.
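The arithmetic of the comparison can be sketched with the trial's own counts (a minimal Python illustration; the case counts and group sizes are those quoted above, and the measures computed are the standard risk difference and vaccine efficacy):

```python
# Salk trial counts quoted above (all poliomyelitis cases, six-month follow-up)
cases_vacc, n_vacc = 82, 200_745
cases_plac, n_plac = 162, 201_229

# Risk (cumulative incidence) per 100,000 children in each group
risk_vacc = cases_vacc / n_vacc * 100_000   # about 41 per 100,000
risk_plac = cases_plac / n_plac * 100_000   # about 81 per 100,000

# Risk difference: absolute excess risk among the unvaccinated
risk_diff = risk_plac - risk_vacc           # about 40 per 100,000

# Vaccine efficacy: proportional reduction in risk among the vaccinated
efficacy = 1 - risk_vacc / risk_plac        # about 0.49, i.e. roughly 50% protection

print(f"risk vaccinated:   {risk_vacc:.0f} per 100,000")
print(f"risk unvaccinated: {risk_plac:.0f} per 100,000")
print(f"risk difference:   {risk_diff:.0f} per 100,000")
print(f"vaccine efficacy:  {efficacy:.0%}")
```

Note that the efficacy of roughly 50% refers to all forms of the disease together; against the paralytic form the same calculation on the figures above (16 versus 57 per 100,000) gives a considerably higher protection.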

The Salk vaccine is even today considered to be the most effective and safe protection against poliomyelitis.

Five key features of randomized controlled trials

The study design is always based on randomization, usually implemented today by means of computer-generated lists of random numbers. People can simply be assigned at random to the different treatments, or some additional condition can be introduced. For instance, in a trial of nicotine patches for smoking cessation, subjects were randomly assigned to different types of patches or to a placebo within each centre participating in the study. In this way, correct comparisons of treatments became possible not only overall, on the pooled data from all centres, but also within each of the centres located in different countries.
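Randomization within each centre, as in the nicotine-patch trial, can be sketched as follows (a hypothetical Python illustration: the centre names, subject codes, and treatment labels are invented, and real trials use pre-generated, concealed allocation lists rather than code run at the bedside):

```python
import random

def randomize_within_centre(subjects_by_centre, treatments, seed=0):
    """Assign treatments at random separately within each centre
    ('stratified' randomization), using balanced blocks so that the
    treatment arms stay equal in size inside every centre."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    allocation = {}
    for centre, subjects in subjects_by_centre.items():
        # Build a balanced list of treatment labels for this centre, then shuffle it
        block = (treatments * (len(subjects) // len(treatments) + 1))[:len(subjects)]
        rng.shuffle(block)
        allocation.update(dict(zip(subjects, block)))
    return allocation

# Hypothetical example: two centres, nicotine patch versus placebo patch
subjects = {"Centre A": ["A1", "A2", "A3", "A4"],
            "Centre B": ["B1", "B2", "B3", "B4"]}
alloc = randomize_within_centre(subjects, ["patch", "placebo"])
print(alloc)
```

Because the shuffling happens separately inside each centre, every centre contributes equal numbers to both arms, which is what makes the within-centre comparisons mentioned above possible.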

The choice of the study population is critical for generalizing the conclusions drawn from a trial. In the nicotine patches experiment, volunteer subjects were recruited who had made two or more previous unsuccessful attempts to quit and who were going to receive advice from a physician. The conclusion of the trial, that the patches effectively increase the probability of stopping smoking, would not necessarily apply to less motivated people. Moreover, treatments that have been shown to be effective and safe in adults may not work in the same way in elderly people or children. As a general principle, a treatment (e.g. a drug) for a specific disease should be prescribed only to subjects closely similar to those in the trials that have demonstrated its efficacy.

The incessant pressure from the pharmaceutical industry to extend the use of a drug to other diseases should be resisted until there is clear evidence that it works for them as well. To be informative, a trial should include a sufficient number of subjects. What a 'sufficient' number is can be calculated at the planning stage, based on the size of the difference between treatments one wishes to detect with a high degree of confidence. If one is interested in picking up only a very large effect, for example the complete elimination of a disease by a vaccine, a relatively small number will be adequate, because if such a major effect exists, it is likely to show up in any event. If, at the opposite end, one wishes to show that a new vaccine that is cheaper and easier to administer is not (or only minimally) different in efficacy from the best vaccine hitherto available, a very large number of subjects will be required to exclude the possibility that the new vaccine is inferior, even by a small but still medically significant amount, to the old one.
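The planning-stage calculation described here can be sketched with the standard normal-approximation formula for comparing two proportions (a minimal Python illustration; the two risks are the per-100,000 figures from the Salk trial above, while the 5% two-sided significance level and 80% power are conventional assumed values, not figures from the text):

```python
import math

def sample_size_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Subjects needed per arm to detect the difference between risks p1 and p2,
    by the usual normal approximation (two-sided alpha = 5%, power = 80%)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting the halving of a rare risk: 81 down to 41 per 100,000
n = sample_size_per_arm(81 / 100_000, 41 / 100_000)
print(n)   # tens of thousands of children per arm
```

The result, on the order of 60,000 children per arm even for a 50% reduction in risk, makes clear why a rare disease like poliomyelitis forced the trial to enrol hundreds of thousands of participants.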

The treatment can be as simple as a drug or a vaccine or much more complex, such as an intervention to modify the habitual diet. A recent randomized trial in this field showed that a reduction in the intake of calories could reduce weight in volunteer subjects keeping their amount of physical activity nearly constant. Remarkably, it also showed that the composition of the diet did not matter, whether high or low (within reasonable limits) in fat, protein, or sugars, provided the diet was low in caloric content. Keeping to the diet within the trial implied, however, repeated contact and strict surveillance of the subjects, something that may not be easily reproduced in the population.

Basically, a randomized trial is justified when there is genuine uncertainty about the effect of a treatment with respect to a placebo or to another, already established treatment. This uncertainty constitutes a condition of 'equipoise' between the treatments.

Usually several responses to the treatment, or endpoints, will be assessed to measure the intended and the possible adverse effects of the treatment. The incidence rate of myocardial infarction may be measured in a trial testing a drug aimed at its prevention, but any other anomalous manifestation will also be carefully monitored, as it may indicate an adverse effect of the drug. The best device to avoid all conscious and unconscious influences on the observation and recording of the endpoints is to keep both subjects and physicians blind to the treatments administered. This may not always be possible, as, for example, when the treatments are diets of different compositions.

The analysis of the data collected during the study is done at its planned end. Often, however, some intermediate analyses can be done to monitor what is happening: if early indications of an obvious advantage of one of the treatments emerge, it may become unethical to continue with the other, inferior treatments; or if signs of serious adverse effects show up, it may become necessary to stop the trial. Because of these delicate implications, the intermediate analyses are usually placed in the hands of a trial-monitoring committee independent of the investigators responsible for the study. A particular type of analysis is not infrequently necessary to take into account the fact that a proportion of trial participants will abandon their assigned treatment during the course of the trial. Most likely these dropouts do not occur by chance but because, for instance, some subjects find it too cumbersome to adhere to a diet, or simply dislike it. In these circumstances, a comparison of the effects of different diets on, for example, the incidence rates of diabetes between people who kept to their diet throughout the trial would not reflect the reality. A more realistic analysis, known as analysis by 'intention to treat', will compare the diabetes rates between the groups of people as initially assigned to each diet, regardless of whether some people in each group dropped out. In fact, the net effect of a diet as it might be proposed, if beneficial, to the whole population will result from the combination of the effects among those who adhere to it and whatever other effects ensue among those who started it but then switched to other regimes.
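The contrast between an intention-to-treat comparison and one restricted to adherers can be sketched on invented numbers (a hypothetical Python illustration; all counts below are made up purely to show the two calculations and do not come from any trial):

```python
# One hypothetical diet-trial arm: subjects as randomized, split by whether
# they adhered to the assigned diet, with counts of diabetes cases.
# (All numbers invented for illustration.)
arm = {
    "adherers": {"n": 800, "cases": 40},   # kept to the diet throughout
    "dropouts": {"n": 200, "cases": 20},   # switched to other regimes
}

# Per-protocol rate: adherers only. This risks bias, because dropping
# out is usually not a chance event.
per_protocol_rate = arm["adherers"]["cases"] / arm["adherers"]["n"]

# Intention-to-treat rate: everyone as initially assigned, dropouts included.
n_total = sum(g["n"] for g in arm.values())
cases_total = sum(g["cases"] for g in arm.values())
itt_rate = cases_total / n_total

print(f"per-protocol rate:       {per_protocol_rate:.1%}")
print(f"intention-to-treat rate: {itt_rate:.1%}")
```

In this made-up arm the intention-to-treat rate (6.0%) is higher than the per-protocol rate (5.0%), reflecting the worse outcomes of the dropouts; it is the intention-to-treat figure that estimates the diet's net effect as it would be offered to a whole population.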

Randomized, non-randomized, and spontaneous experiments

Randomized trials are a precious tool in medical and epidemiological research. They can be looked at from two slightly different angles. From one viewpoint, they are the instrument to test how effective a treatment is. Before the era of the randomized controlled trial, heralded by the British trial of streptomycin on pulmonary tuberculosis in 1948, the evidence of the positive and negative effects of a treatment was essentially based on the accumulation of clinical experience supported by knowledge from physiology and pathology.

In epidemiology the evidence of how, for instance, a vaccine worked was based on observational studies. Compared to randomized trials, these methods are more cumbersome, as they require a large accumulation of concordant results from clinical or epidemiological observations before any sound conclusions can be drawn, and less sensitive, because minor but medically important effects - say, a 5-10% reduction in the incidence rate of a disease - cannot be recognized with any confidence. The randomized trial has therefore become the generally accepted standard for testing treatments, preventive or remedial.

From another angle, the randomized trial is the acid test of causality. Removal of a presumed cause of a disease, conducted in the form of a randomized trial, is the best proof that the exposure is indeed a cause. This test may sometimes be feasible. For example, a vaccination programme against the hepatitis B virus could not be introduced all at once in the whole population of newborns in the Gambia. This unfavourable circumstance was turned into an actual advantage by picking the children to be vaccinated first at random, so allowing a correct comparison with the unvaccinated children born in the same year (by the fourth year, the vaccination reached all newborns in the country). The expected reduction in liver cancer among the vaccinated children once they become adults should provide conclusive evidence that the hepatitis B virus causes not only hepatitis - an established fact - but liver cancer, the most frequent cancer in many countries of Africa and South East Asia. For many exposures clearly indicated to be harmful by observational studies and laboratory experiments, a planned randomized removal of the exposure is neither feasible nor ethical. A surveillance programme should nonetheless be set up to observe the course of disease following the 'natural experiment' of removing (in whatever way) the exposure. For instance, a substantial number of the doctors in the prospective study of Richard Doll and Austin Bradford Hill stopped smoking. Already within the first five years after stopping, the incidence rate of lung cancer fell by almost one-third, providing additional and strong evidence supporting the causal role of tobacco smoking.

When overall survival is examined, the experience of the British doctors shows that the sooner smoking is stopped, the more closely an (ex-)smoker's life expectancy approaches that of a lifelong non-smoker. In plain terms, the best option is never to start smoking, the next best is to stop soon, and even stopping late produces at least some rapid benefit.

Well-designed, conducted, and analysed observational and randomized studies are two complementary instruments of epidemiology that contribute to advancing knowledge even when they produce contrasting results, as the case of vegetables and cancer shows. Thirty years ago, several observational studies had already indicated that the consumption of vegetables, a source of vitamin A, and blood levels of vitamin A higher than average were associated with a reduced risk of cancer.

There was also some evidence from laboratory experiments showing that vitamin A and its derived compounds in the body could inhibit the transformation of normal cells into cancer cells and their proliferation. To test directly the causal hypothesis that vitamin A inhibits cancer, a randomized controlled trial was carried out on more than 8,000 adult smokers (particularly at risk of lung cancer) in Finland, comparing a placebo treatment with the administration of beta-carotene, the precursor of vitamin A present in yellow vegetables and fruits. The results turned out to be the opposite of what was hoped: the trial had to be stopped because a surge of lung cancer showed up in the men receiving the beta-carotene. This could mean that beta-carotene, given at the doses of the trial, appreciably higher than in a normal diet, had an adverse rather than a beneficial effect. It might also mean that in the previous observational studies vitamin A was not responsible for the reduced risk of cancer but was simply an indicator of other substances present in vegetables and capable of inhibiting cancer development. Even today, the protective role of vegetables appears plausible, though not conclusively demonstrated, while on the other hand the beta-carotene example gives a clear warning that incautious use of vitamin supplements may result in harmful rather than beneficial health effects.
