A SHORT HISTORY OF RADIOLOGICAL PROTECTION

Geoff Meggitt

The origins of x-rays and radioactivity are linked: Roentgen discovered x-rays in 1895 and Becquerel stumbled upon radioactivity the following year while investigating Roentgen’s discovery. In different ways they would transform diagnostic and therapeutic medicine. However, both brought a risk that would remain an enigma for the best part of a century (at least): that of radiation.

Roentgen revealed his discovery in the very last days of 1895 and within a few weeks radiographs had been taken around the world. Within a year some 50 books and pamphlets and nearly 1000 papers on x-rays had been published, tubes had been improved by the addition of anodes, the fluoroscope had been invented and put to wide use, and General Electric had produced a catalogue of ready-made x-ray equipment. Soon enough, hospitals around the world were setting up departments for both diagnostic and therapeutic use of the new technology.

In the first year there were already hints of future problems. Several workers experienced effects that looked like sunburn, with loss of skin and hair from the affected regions. At first people blamed the high-voltage power supplies and the ozone they created, the processing chemicals and almost anything but the magical rays themselves. However, by September 1896 the great Lord Lister could speak of the “aggravated sunburn” and speculate that “a transmission of the rays through the human body may be not altogether a matter of indifference to internal organs…”

Within just a few years it became clear that prolonged exposure to the rays could have devastating consequences: sunburn turned into painful sores and warts, necrosis and a variety of cancers. The pain, someone said, was “as if bones were being gnawed away by rats”. Fingers were amputated, then arms. Pioneers began to die.

The warnings were heeded by some, who shielded the tubes with lead and abandoned the common practice of putting their hands between the tube and a fluoroscope to check that things were working properly. By about 1910 it was fairly widely recognised that there was a problem that had to be addressed, but another decade passed before there was real consensus and general action. The early protection measures were very practical: shield the tube, wear protective equipment, limit exposure by making adjustments from a protected place, limit working hours and encourage staff to spend time outdoors when they could.

A principle emerged: the deadly effects were a direct result of the traumatic tissue damage, so if there was no tissue damage there would be no deadly effects. It was therefore thought that, if the dose was kept well below one that would produce erythema, there would be no long-term effects. This was the “tolerance dose”. In the mid-1920s a popular suggestion for a tolerance dose was about 10% of an erythema dose a year. When expressed in roentgens (r) in the early 1930s, this became 0.2 r per day, and fairly quickly it was reduced to 0.1 r per day. This value was the one widely used right up to the end of the 1940s.

However, by then questions had begun to be asked about the principle itself, prompted by some discoveries in genetics.

Gregor Mendel had established the theoretical existence of the genes in the 1860s with his famous experiments with peas (although the work was forgotten until the beginning of the 20th century). He had no idea where the genes were. However, by the time science had rediscovered his work it was fairly clear that the nucleus of the cell played a key role in inheritance. By 1903 it was plausible that Mendel’s genes were located on the chromosomes and by 1916 that was a widely-shared view. This was largely due to the work of TH Morgan and his protégés at Columbia University in New York with the fruit fly Drosophila melanogaster. This minute creature, with its modest maintenance requirements and rapid, productive sexual cycle, meant that experiments like Mendel’s could be repeated in a fraction of the time in a room of fly-filled phials loaded with bananas to sustain the flies. Morgan and his men tracked natural mutations (the first was a white variant of the normal red eye) and, in a long and clever series of experiments, they were able to map the mutated genes responsible onto the fly’s chromosomes: the genes became real.

Hermann Muller, one of Morgan’s students, had a particular interest in how the natural mutations arose and developed an elegant technique to measure the rate at which they occurred. His first results showed a temperature dependence so, knowing that chromosomes were visibly damaged by radiation, he set about seeing how the mutation rate was affected by it. He didn’t have to wait long to find an answer: one of his first experiments, in 1926, showed that an x-ray dose of 1000 r to a fly’s sperm increased the mutation rate 1000-fold.

The artificial mutations behaved just like the natural ones and, disturbingly, their creation rate increased linearly with radiation dose – without any threshold. Muller was a life-long eugenicist (although by most standards a fairly gentle one) and quickly realised the possible implications of this for the human gene pool. He spent the rest of his turbulent life campaigning for recognition of the threat.

The 1940s brought a new and awesome radiological threat with the development of the atom bomb and the terrible carnage it wrought in Japan. People soon realised that, while this had been far away, bombs were being exploded in America itself – as well as the South Pacific and Australia – in atomic tests aimed at creating even more powerful and terrifying weapons. These tests produced fallout (by now everyone knew about fallout) which swept around the world, inevitably leading to radiation exposure and so, there being no threshold, to damage to the human gene pool.

The threat to mankind’s genes seemed ever greater. Scientists had models that enabled them to assess doses to the gonads, and for a while gonad dose became the dominant parameter in radiation protection – largely because it was agreed that there was no threshold. Public concern about the long-term genetic effects of atomic tests was a reason why they were banned – although, more cynically, there was a sense in which the developers had gathered all the data they needed.

The threshold principle for somatic effects (those that occurred in the exposed individual) held sway through most of the 1940s, but it began to be questioned – largely because it was so at variance with the established no-threshold nature of the genetic effects. So it was one of the issues in people’s minds when the Atomic Bomb Casualty Commission was set up in 1947 to study the survivors of Hiroshima and Nagasaki.

Two results of the early studies rather surprised workers: no genuine genetic effects could be found (they never were), but there was a dramatic increase in leukaemia cases, and their incidence increased the closer the survivors had been to ground zero.

The leukaemia occurrence continued to grow, peaked in the early 1950s and then slowly declined, reaching background levels in Nagasaki in about 1980. The story with solid cancers was rather different and more alarming. They were slower to appear, but their incidence then stayed at a high level for much longer.

The implications of the data could not be fully realised until there were reliable estimates of the doses received by the survivors. The first of these came in 1965; they were revised in 1986 and then again, but only slightly, in 2004.

The relationship between cancer incidence and dose implied by the Japanese casualty data was regularly reviewed by the UN Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). While risk estimates were modified somewhat over the years, there has never been any generally-accepted evidence to support a dose threshold for cancer induction – aligning the somatic and genetic effects. In its milestone report of 1977 (ICRP 26), the International Commission on Radiological Protection first took full account of the available Japanese data and opted for a no-threshold proportional relationship for both. By then it was pretty clear that in most circumstances the dominant effects were not genetic; it was cancers that were.

ICRP 26 set out a protection scheme based on avoiding radiation exposures altogether unless there was a clear benefit from the activity, and then optimising the exposure by increasing the protection until the cost of increasing it further was disproportionate to the dose savings. This was the As Low As Reasonably Achievable (ALARA) principle.

There were many technical follow-ups to the document that clarified, for example, how the risks from exposure of particular organs could be taken into account (something that was important for internal dosimetry). Applications in many fields – including medicine – have been considered in detail in ICRP reports. These reflect changes in technologies such as dosimetry techniques and new data on risks. However, the core precautionary principles have hardly changed over the subsequent four decades.

18 July 2021

 

This article first appeared in the Royal College of Radiologists Newsletter.

--------------------------------------------------------------------------------------

Geoff is Honorary Secretary and a Trustee of the British Society for the History of Radiology. Most of his career was spent with the United Kingdom Atomic Energy Authority and its descendants working on radiological protection. He was editor of the Journal of Radiological Protection for five years in the 1990s. Since retirement he has written two books on radiological themes: Taming the Rays (a history of radiological protection) and Genes, Flies, Bombs and a Better Life (a biography of Hermann Muller).