• How do you make a mode-locked laser?

    Given

    Mode-locked lasers produce intense, ultra-short pulses of light at a very high repetition rate.

    Concepts

    Set 1

    Take a bunch of atoms, excite them and place them in a box covered with mirrors in all directions. Send in one photon, a particle of light, to intercept one of these atoms. Unable to get more excited, the atom will get de-excited by emitting the interceptor photon and another photon identical to it. Because the box is covered with mirrors, these two photons bounce off a wall and intercept two more atoms. The same thing happens, over and over. A hole in the box allows the ‘extra’ photons to escape to the outside. This light is what you would see as laser light. Of course it’s a lot more complicated than that but if you had to pare it down to the barest essentials (and simplify it to a ridiculous degree), that’s what you’d get. The excited atoms that are getting de-excited together make up the laser’s gain medium. The mirror-lined box that contains the atoms, and has a specific design and dimensions, is called the optical cavity.

    Set 2

    Remember wave-particle duality? And remember Young’s double-slit experiment? The photons bouncing back and forth inside the optical cavity are also waves bouncing back and forth. When two waves meet, they interfere – either constructively or destructively. When they interfere destructively, they cancel each other out. When they interfere constructively, they produce a larger wave.

    A view of a simulation of a double-slit experiment with electrons (particles). The destructively interfered waves are ‘visible’ as no-waves whereas the constructively interfered waves are visible as taller waves. Credit: Alexandre Gondran/Wikimedia Commons, CC BY-SA 4.0

    As thousands of waves interfere with each other, only the constructively interfered waves survive inside the optical cavity. These waves are called modes. The frequencies of the modes are together called the laser’s gain bandwidth. Physicists can design lasers with predictable modes and gain bandwidth using simple formulae. They just need to tweak the optical cavity’s design and the composition of the gain medium. For example, a laser with a helium-neon gain medium has a gain bandwidth of 1.5 GHz. A laser with a titanium-doped sapphire gain medium has a gain bandwidth of 128,000 GHz.
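
    The arithmetic here is worth a quick look. In an idealised linear cavity of length L, adjacent modes are spaced c/2L apart in frequency, so the number of modes a laser can sustain is roughly the gain bandwidth divided by that spacing. A minimal sketch (the 30 cm cavity length is a hypothetical choice):

    ```python
    # Longitudinal-mode arithmetic for an idealised linear (Fabry-Perot) cavity.
    # The 30 cm cavity length below is a hypothetical example value.

    c = 3e8  # speed of light in vacuum, m/s

    def mode_spacing_hz(cavity_length_m):
        """Frequency gap between adjacent longitudinal modes: c / 2L."""
        return c / (2 * cavity_length_m)

    def n_modes(gain_bandwidth_hz, cavity_length_m):
        """Rough count of modes that fit under the gain bandwidth."""
        return int(gain_bandwidth_hz / mode_spacing_hz(cavity_length_m))

    L = 0.3  # hypothetical 30 cm cavity
    print(mode_spacing_hz(L))       # 5e8: modes sit 500 MHz apart
    print(n_modes(1.5e9, L))        # helium-neon (1.5 GHz): ~3 modes
    print(n_modes(1.28e14, L))      # Ti:sapphire (128,000 GHz): ~256,000 modes
    ```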

    Set 3

    Say there are two modes in a laser’s optical cavity. Say they’re out of phase. Remember the sine wave? It looks like this: ∿. A wave’s phase denotes how much of its cycle the wave has completed at a given instant. The modes are the waves that survive in the laser’s optical cavity. If there are only two modes and they’re out of phase, the laser’s light output is going to be sputtering – very on-and-off. If there are thousands of modes, the output is going to be a lot better: even if they are all out of phase, their sheer number is going to keep the output intensity largely uniform.

    Two sinusoidal waves offset from each other by a phase shift θ. When θ = 0º, the waves will be in phase. Credit: Peppergrower/Wikimedia Commons, CC BY-SA 3.0

    But there’s another scenario in which there are many modes and the modes are all in phase. In this optical cavity, the modes would all constructively interfere with each other and produce a highly amplified wave at periodic intervals. This big wave would appear as a short-duration but intense pulse of light – and the laser producing it would be called a mode-locked laser.
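
    A toy calculation makes the difference vivid: add up N equally spaced modes, once with random phases and once with identical phases, and compare the peak intensities. (All quantities below are in arbitrary units, chosen only for illustration.)

    ```python
    # Toy demonstration: summing N equally spaced cavity modes.
    # Random phases -> noisy, roughly uniform intensity, no big pulse.
    # Locked (equal) phases -> the modes pile up into sharp periodic pulses.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 50                       # number of modes (arbitrary)
    t = np.linspace(0, 4, 4000)  # time axis spanning a few pulse periods

    def peak_intensity(phases):
        field = sum(np.cos(2 * np.pi * k * t + phases[k]) for k in range(N))
        return (field ** 2).max()

    random_phases = rng.uniform(0, 2 * np.pi, N)
    locked_phases = np.zeros(N)

    print(peak_intensity(random_phases) / N**2)  # well below 1: no big pulse
    print(peak_intensity(locked_phases) / N**2)  # 1.0: all N modes add up
    ```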

    As in the previous instance, there are simple formulae to calculate how often a pulse is produced, depending on the optical cavity’s design and the gain medium’s properties. These formulae also show that the wider the modes’ range of frequencies – i.e. the gain bandwidth – the shorter the duration of the light pulse will be. For example, the helium-neon laser has a lower gain bandwidth, so its shortest pulse duration is around 300 picoseconds. The titanium-doped sapphire laser has a much higher gain bandwidth, so its shortest pulse duration is 3.4 femtoseconds. In the former duration, light would have travelled around 9 cm; in the latter, only about 1 µm.
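
    The formula connecting the two quantities is the time-bandwidth product: the shortest possible pulse duration is roughly a constant divided by the gain bandwidth, where the constant depends on the pulse’s shape (about 0.441 for a Gaussian pulse). A sketch that reproduces the numbers above, under that Gaussian assumption:

    ```python
    # Shortest pulse duration from the gain bandwidth via the time-bandwidth
    # product: dt ~ K / dv. K = 0.441 assumes a Gaussian pulse shape.
    K = 0.441
    c = 3e8  # speed of light, m/s

    for name, bandwidth_hz in [("He-Ne", 1.5e9), ("Ti:sapphire", 1.28e14)]:
        dt = K / bandwidth_hz
        print(name, dt, "s; light travels", c * dt, "m")
    # He-Ne:       ~2.9e-10 s (~300 ps); light travels ~8.8 cm
    # Ti:sapphire: ~3.4e-15 s (~3.4 fs); light travels ~1 micrometre
    ```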

    Brief interlude

    • An optical cavity of the sort described above is called a Fabry-Pérot cavity. The LIGO detector used to record and study gravitational waves uses a pair of Fabry-Pérot cavities to increase the distance each beam of laser light travels inside the structure, raising the facility’s sensitivity to the level needed to register gravitational waves.
    • Aside from the concepts described above, ensuring a mode-locked laser works as intended requires physicists to adjust many other parts of the device. For example, they need to control the cavity’s dispersion (whether waves of different frequencies propagate differently), the laser’s linewidth (the range of frequencies in the output), the shape of the pulse, and the physical attributes of the optical cavity and the gain medium (e.g. their temperature).

    Method

    How do you ‘lock’ the modes together? The two most common ways are active and passive locking. Active locking is achieved by placing a material or device that exhibits the electro-optic effect inside the optical cavity: such a material’s optical properties change when an electric field is applied. A popular example is the crystal lithium niobate: in the presence of an electric field, its refractive index increases, meaning light takes longer to pass through it. Remember that the farther a light wave propagates, the more its phase evolves. So a wave’s phase can be ‘adjusted’ (very simplistically speaking) by passing it through the crystal and tuning the applied electric field. What actually happens is more complicated, but by repeatedly modulating the light waves inside the cavity in this manner, the phases of all the waves can be synchronised.
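
    To see why a small change in refractive index is enough, recall that a wave crossing a medium of length d and refractive index n accumulates a phase of 2πnd/λ. A sketch with purely illustrative numbers (the wavelength, crystal length and index change below are not measured values):

    ```python
    # How a small electro-optically induced change in refractive index
    # shifts a wave's phase. All numbers are illustrative.
    import math

    wavelength = 1.064e-6    # m: a common laser wavelength (chosen arbitrarily)
    crystal_length = 0.02    # m: a hypothetical 2 cm crystal

    def phase_rad(n):
        """Phase accumulated crossing the crystal: 2*pi*n*d/lambda."""
        return 2 * math.pi * n * crystal_length / wavelength

    n0, dn = 2.20, 1e-4      # illustrative index and field-induced change
    print(phase_rad(n0 + dn) - phase_rad(n0))  # ~11.8 rad: a large shift
    ```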

    A lithium niobate wafer. Credit: Smithy71, CC0

    Passive locking dispenses with an external modulator (like the applied electric field); instead, it encourages the light waves to get their phases in sync by repeatedly interacting with a passive object inside the cavity. A common example is a semiconductor saturable absorber, which absorbs light of low intensity and transmits light of high intensity. A related technique is Kerr-lens mode-locking, in which low- and high-intensity waves are focused at different locations inside the cavity and only the high-intensity waves are allowed through. Kerr-lens mode-locking is capable of producing extremely intense pulses of laser light.
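
    A saturable absorber is commonly modelled with an absorption that falls as the intensity rises. The functional form below is the textbook approximation; the numbers are illustrative:

    ```python
    # Toy model of a saturable absorber: weak light is strongly absorbed,
    # intense pulses pass almost untouched. a(I) = a0 / (1 + I/I_sat).
    a0 = 0.5      # unsaturated (low-intensity) absorption
    I_sat = 1.0   # saturation intensity, arbitrary units

    def transmission(intensity):
        return 1 - a0 / (1 + intensity / I_sat)

    for I in [0.01, 1.0, 100.0]:
        print(I, transmission(I))
    # 0.01 -> ~0.50 (weak light: half absorbed)
    # 1.0  -> 0.75
    # 100  -> ~0.995 (intense pulse passes nearly untouched)
    ```

    Inside the cavity, this bias repeats on every round trip: random low-intensity light is eaten away while the occasional intense spike survives and grows – and an intense spike is precisely what the modes produce when they happen to be in phase.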

    Conclusion

    Thus, we have a mode-locked laser. Mode-locked lasers have several applications; two that are relatively easy to explain are nuclear fusion and eye surgery. While ‘nuclear fusion’ describes a singular outcome, there are many ways to get there. One is to heat electrons and ions to a high temperature and confine them using magnetic fields, encouraging the nuclei to fuse. This is called magnetic confinement. Another way is to hold a small amount of hydrogen in a very small container (technically, a hohlraum) and then compress it further using ultra-short high-intensity laser pulses. This is the inertial confinement method, and it can make use of mode-locked lasers. In refractive eye surgery, doctors use a series of laser pulses, each only a few femtoseconds long, to cut a portion of the cornea during LASIK surgery.

    Addendum

    If your priority is the laser’s intensity over the pulse duration or the repetition rate, you could use an alternative technique called giant pulse formation (a.k.a. Q-switching). The fundamental principle is simple – sort of like holding your farts in and letting out a big one later. When the laser is first being set up, energy is pumped into the gain medium, exciting its atoms. Once the medium has stored enough energy, the laser can start operating. In the giant pulse formation technique, however, an attenuator is placed inside the cavity: this device prevents photons from being reflected around. As a result, the laser can’t operate even when the gain medium holds more than enough energy for it to do so.

    After a point, the pumping is stopped. Some atoms in the medium might spontaneously emit some energy and become de-excited, but by and large, the optical cavity will contain a (relatively) large amount of energy that remains stable over time – certainly more energy than if the laser had been allowed to start earlier. Once this steady state is reached, the attenuator is quickly switched to allow photons to move around inside the cavity. Because the laser then begins with a gain medium holding far more energy, its first light output has very high intensity. The ‘Q’ of ‘Q-switching’ refers to the cavity’s quality factor. On the flip side, in giant pulse formation, the gain medium’s stored energy also drops rapidly, so subsequent pulses are not as intense. This compromises the laser’s repetition rate.

  • The strange NYT article on taming minks

    I’m probably waking up late to this but the New York Times has published yet another article in which it creates a false balance by first focusing on the problematic side of a story for an inordinately long time, without any of the requisite qualifications and arguments, before jumping, in the last few paragraphs, to one or two rebuttals that reveal, for the first time, that there could in fact be serious problems with all that came before.

    The first instance was about a study on the links between one’s genes and facial features. The second is a profile of a man named Joseph Carter who tames minks. The article is headlined ‘How ‘the Most Vicious, Horrible Animal Alive’ Became a YouTube Star’. You’d think minks are “vicious” and “horrible” because they’re aggressive or something like that, but no – you discover the real reason all the way down in paragraph #12:

    “Pretty much everyone I asked, they told me the same thing — ‘They’re the most vicious, horrible animal alive,’” Mr. Carter said. “‘They’re completely untamable, untrainable, and it doesn’t really matter what you do.’”

    So, in 2003, he decided that he would start taming mink. He quickly succeeded.

    Putting such descriptors as “vicious” and “horrible” in single-quotes in the headline doesn’t help if those terms are being used – by unnamed persons, to boot – to mean minks are hard to tame. That just makes them normal. But the headline’s choice of words (and the refusal of the first 82% of the piece, by paragraph count, to engage with the issue) gives the impression that the newspaper is going to ignore that. A similar kind of dangerous ridiculousness emerges further down the piece, with no sense of irony:

    “You can’t control, you can’t change the genetics of an individual,” he said. “But you can, with the environment, slightly change their view of life.”

    Why do we need to change minks’ view of anything? Right after, the article segues to a group of researchers at a veterinary college in London, whose story appears to supply the only redeeming feature of Carter’s games with minks: the idea that in order to conduct their experiments with minks, the team would have to design more challenging tasks with higher rewards than they were handing out. Other than this, there’s very little to explain why Carter does what he does.

    There’s a flicker of an insight when a canal operator says Carter helps them trap the “muskrats, rats, raccoons and beavers” that erode the canal’s banks. There’s another flicker when the article says Carter buys “many of his animals” from fur farms, where the animals are killed before they’re a year old when in fact they could live to three, as they do with Carter. Towards the very end, we learn, Carter also prays for his minks every night.

    So he’s saving them in the sort of way the US saves other countries?

    It’s hard to say when he’s also setting out to tame these animals to – as the article seems to suggest – see if he can succeed. In fact, the article is so poorly composed and structured that it’s hard to say if the story it narrates is a faithful reflection of Carter’s sensibilities or if it’s just lazy writing. We never find out if Carter has ever considered ‘rescuing’ these animals and releasing them into the wild or if he has considered joining experts and activists fighting to have the animal farms shut. We only have the vacuous claim that is Carter’s belief that he’s giving them a “new life”.

    The last 18% of the article also contains a few quotes that I’d call weak for not being sharp enough to poke more holes in Carter’s views, at least as the New York Times seems to relay them. There is one paragraph citing a 2001 study about what makes mink stressed and another about the welfare of Carter’s minks being better than that of those caged in the farms. But the authors appear to have expended no sincere effort to link them together vis-à-vis Carter’s activities.

    In fact, there is a quote by a scientist introduced to rationalise Carter’s views: “It’s like any thoroughbred horse, or performance animal — or birds of prey who go out hunting. If asked, they probably would prefer to hunt.” Wouldn’t you think that if they were asked, and if they could provide an answer that we could understand, they would much rather be free of all constraints rather than being part of Carter’s circus?

    There is also a dubious presumption here that creates a false choice – between being caged in a farm and being tamed by a human: that the minks ought to be grateful because some humans are choosing to stress them less, instead of not stressing them at all. Whether a mink might be killed by predators or have a harder time finding food in the wild, if it is released, is completely irrelevant.

    Then comes the most infuriating sentence of the lot, following the scientist’s quote: “Mr. Carter has his own theories.” Not ideas or beliefs but theories. Because scientists’ theories, tested as they need to be against our existing body of knowledge and with experiments designed to eliminate even inadvertent bias, are at least semantically equivalent to Carter’s “own theories”, founded on his individual need for self-justification and harmony with the saviour complex.

    And then the last paragraph:

    “Animals don’t have ethics,” he said. “They have sensation, they can feel pain, they have the ability to learn, but they don’t have ethics. That’s a human thing.”

    I don’t know how to make sense of it, other than with the suspicion that the authors and/or editors grafted these lines to the bottom because they sounded profound.

  • Why everyone should pay attention to Stable Diffusion

    Many of the people in my circles hadn’t heard of Stable Diffusion until I told them, and I was already two days late. Heralds of new technologies have a tendency to play up every new thing, however incremental, as the dawn of a new revolution – but in this case, their cries of wolf may be real for once.

    Stable Diffusion is an AI tool produced by Stability.ai with help from researchers at the Ludwig Maximilian University of Munich and the Large-scale AI Open Network (LAION). It accepts text or image prompts and converts them into artwork based on – but without necessarily understanding – what it ‘sees’ in the input. It created the image below with my prompt “desk in the middle of the ocean vaporwave”. You can create your own here.

    But it strayed into gross territory with a different prompt: “beautiful person floating through a colourful nebula”.

    Stable Diffusion is like OpenAI’s DALL-E 1/2 and Google’s Imagen and Parti but with two crucial differences: it’s also capable of image-to-image (img2img) generation, and it’s open source.

    The img2img feature is particularly mind-blowing because it allows users to describe the scene using text and then guide the Stable Diffusion AI with a little bit of their own art. Even a drawing on MS Paint with a few colours will do. And while OpenAI and Google hold their cards very close to their chests, with the latter even refusing to release Imagen or Parti in private betas, Stability.ai has – in keeping with its vision to democratise AI – opened Stable Diffusion for tinkering and augmentation by developers en masse. Even the ways in which Stable Diffusion has been released are important: trained developers can work directly with the code while untrained users can access the model in their browsers, without any code, and start producing images. In fact, you can download and run the underlying model on your own system, provided it has slightly higher-end specs. Users have already created ways to plug it into photo-editing software like Photoshop.

    Stable Diffusion uses a diffusion model: a filter (essentially an algorithm) that takes noisy data and progressively de-noises it. In incredibly simple terms, researchers take an image and in a step-wise process add more and more noise to it. Next they feed this noisy image to the filter, which then removes the noise from the image in a similar step-wise process. You can think of the image as a signal, like the images you see on your TV, which receives broadcast signals from a transmitter located somewhere else. These broadcast signals are basically bundles of electromagnetic waves with information encoded into the waves’ properties, like their frequency, amplitude and phase. Sometimes the visuals aren’t clear because some other undesirable signal has become mixed up with the broadcast signal, leading to grainy images on your TV screen. This undesirable information is called noise.

    When the noise waveform resembles that of a bell curve, a.k.a. a Gaussian function, it’s called Gaussian noise. Now, if we know the manner in which noise has been added to the image in each step, we can figure out what the filter needs to do to de-noise the image. Every Gaussian function can be characterised by two parameters, the mean and the variance. Put another way, you can generate different bell-curve-shaped signals by changing the mean and the variance in each case. So the filter effectively only needs to figure out what the mean and the variance in the noise of the input image are, and once it does, it can start de-noising. That is, Stable Diffusion is (partly) the filter here. The input you provide is the noisy image. Its output is the de-noised image. So when you supply a text prompt and/or an accompanying ‘seed’ image, Stable Diffusion just shows off how well it has learnt to de-noise your inputs.
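
    For the curious, here is a minimal sketch of that step-wise noising in the style of denoising-diffusion models. The noise schedule and array sizes are illustrative choices, not Stable Diffusion’s actual values:

    ```python
    # Forward (noising) half of a diffusion model, sketched on a 1-D 'image'.
    # Each step mixes in a little Gaussian noise; after enough steps the
    # data is indistinguishable from pure noise.
    import numpy as np

    rng = np.random.default_rng(0)
    x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in for a clean image
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)           # per-step noise amounts
    alpha_bar = np.cumprod(1.0 - betas)          # cumulative signal retention

    def noisy_at_step(x0, t):
        """Sample the t-th noisy version directly from the clean signal."""
        eps = rng.standard_normal(x0.shape)
        return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

    print(np.std(noisy_at_step(x0, 10)))   # early step: mostly signal
    print(np.std(noisy_at_step(x0, 999)))  # final step: ~1, i.e. pure noise
    ```

    The trained network’s job is the reverse of noisy_at_step: given the noisy signal and the step number, estimate the noise that was mixed in so that it can be subtracted out, step by step.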

    Obviously, when millions of people use Stable Diffusion, the filter is going to be confronted with too many mean-variance combinations for it to be able to directly predict them. This is where an artificial neural network (ANN) helps. ANNs are data-processing systems set up to mimic the way neurons work in our brain, by combining different pieces of information and manipulating them according to their knowledge of older information. The team that built Stable Diffusion trained its model on 5.8 billion image-text pairs found around the internet. An ANN is then programmed to learn from this dataset how texts and images correlate, as well as how images correlate with other images.

    To keep this exercise from getting out of hand, each image and text input is broken down into certain components, and the machine is instructed to learn correlations only between these components. Further, the researchers used an ANN model called an autoencoder. Here, the ANN encodes the input in its own representation, using only the information that it has been taught to consider important. This intermediate is called the bottleneck layer. The network then decodes only the information present in this layer to produce the de-noised output. This way, the network also learns what about the input is most important. Finally, researchers also guide the ANN by attaching weights to different pieces of information: that is, the system is informed that some pieces are to be emphasised more than others, so that it acquires a ‘sense’ of less and more desirable.
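
    To illustrate just the bottleneck idea – and only that; Stable Diffusion’s actual autoencoder is a far larger convolutional model – here is a toy autoencoder sketch in PyTorch, with illustrative dimensions:

    ```python
    # Minimal autoencoder: the encoder squeezes the input into a small
    # bottleneck, and the decoder reconstructs the input from only what
    # survives the squeeze. Dimensions are illustrative.
    import torch
    import torch.nn as nn

    class TinyAutoencoder(nn.Module):
        def __init__(self, input_dim=784, bottleneck_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 128), nn.ReLU(),
                nn.Linear(128, bottleneck_dim))
            self.decoder = nn.Sequential(
                nn.Linear(bottleneck_dim, 128), nn.ReLU(),
                nn.Linear(128, input_dim))

        def forward(self, x):
            z = self.encoder(x)     # the bottleneck-layer representation
            return self.decoder(z)  # reconstruction from the bottleneck alone

    model = TinyAutoencoder()
    x = torch.randn(8, 784)         # a batch of stand-in inputs
    print(model(x).shape)           # torch.Size([8, 784])
    ```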

    By snacking on all those text-image pairs, the ANN effectively acquires its own basis to decide, when it’s presented with a new bit of text and/or image, what the mean and the variance might be. Combine this with the filter and you get Stable Diffusion. (I should point out again that this is a very simple explanation and that parts of it may well be simplistic.)

    Stable Diffusion also comes with an NSFW filter built-in, a component called Safety Classifier, which will stop the model from producing an output that it deems harmful in some way. Will it suffice? Probably not, given the ingenuity of trolls, goblins and other bad-faith actors on the internet. More importantly, it can be turned off, meaning Stable Diffusion can be run without the Safety Classifier to produce deepfakes that are various degrees of disturbing.

    Recommended here: Deepfakes for all: Uncensored AI art model prompts ethics questions.

    But the problems with Stable Diffusion don’t lie only in the future, immediate or otherwise. As I mentioned earlier, to create the model, Stability.ai & co. fed their machine 5.8 billion text-image pairs scraped from the internet – without the consent of the people who created those texts and images. Because Stability.ai released Stable Diffusion in toto into the public domain, it has been experimented with by tens of thousands of people, at least, and developers have plugged it into a rapidly growing number of applications. This is to say that even if Stability.ai is forced to pull the software because it didn’t have the license to those text-image pairs, the cat is out of the bag. There’s no going back. A blog post by LAION only says that the pairs were publicly available and that models built on the dataset should thus be restricted to research. Do you think the creeps on 4chan care? Worse yet, the jobs of the very people who created those text-image pairs are now threatened by Stable Diffusion, which can – with some practice to get your prompts right – produce exactly what you need, no illustrator or photographer required.

    Recommended here: Stable Diffusion is a really big deal.

    The third interesting thing about Stable Diffusion, after its img2img feature + “deepfakes for all” promise and the questionable legality of its input data, is the license under which Stability.ai has released it. AI analyst Alberto Romero wrote that “a state-of-the-art AI model” like Stable Diffusion “available for everyone through a safety-centric open-source license is unheard of”. This is the CreativeML Open RAIL-M license. Its preamble says, “We believe in the intersection between open and responsible AI development; thus, this License aims to strike a balance between both in order to enable responsible open-science in the field of AI.” Attachment A of the license spells out the restrictions – that is, what you can’t do if you agree to use Stable Diffusion according to the terms of the license (quoted verbatim):

    “You agree not to use the Model or Derivatives of the Model:

    • In any way that violates any applicable national, federal, state, local or international law or regulation;
    • For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
    • To generate or disseminate verifiably false information and/or content with the purpose of harming others;
    • To generate or disseminate personal identifiable information that can be used to harm an individual;
    • To defame, disparage or otherwise harass others;
    • For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
    • For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
    • To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
    • For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
    • To provide medical advice and medical results interpretation;
    • To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).”

    These restrictions place a heavy enforcement burden on law enforcement around the world, and I don’t think Stability.ai took the corresponding stakeholders into confidence before releasing Stable Diffusion. It should also go without saying that because the license chooses to colour within the lines of each country’s laws, a country that doesn’t recognise X as a crime will also fail to recognise the harm in the harassment of victims of X – now with the help of Stable Diffusion. And the vast majority of these victims are women and children, already disempowered by economic, social and political inequities. Is Stability.ai going to deal with these people and their problems? I think not. But as I said, the cat’s already out of the bag.

  • When a teenager wants to solve poaching with machine-learning…

    We always need more feel-good stories – but we need more of the feel-good stories that withstand closer scrutiny instead of falling apart, and that are framed the right way.

    For example, Smithsonian magazine published an article with the headline ‘This Teenager Invented a Low-Cost Tool to Spot Elephant Poachers in Real Time’ on August 4. It’s a straightforward feel-good story at first glance: Anika Puri is a 17-year-old in New York who created a machine-learning model (based on an existing dataset) “that analyses the movement patterns of humans and elephants”. The visual input for the model comes from a $250 thermal camera attached to an iPhone attached to a drone, which flies over problem areas and collects data, and which the model then sifts through to pick out the presence of humans. One caveat: the machine-learning model can detect people, not poachers.

    Nonetheless, this is clearly laudable work by a 17-year-old – but the article is an affront to people working in India because it plainly overlooks everything that makes elephant poaching tenacious enough to have caught Puri’s attention in the first place. A 17-year-old did this and we should celebrate her, you say, and that’s fair. But we can do that without making what she did sound like a bigger deal than it is, which would also provide a better sense of how much work she has left to do, while expressing our belief – this is important – that we look forward to her and others like her applying their minds to really doing something about the problem. This way, we may also be able to salvage two victims of the Smithsonian article.

    The first is the question of why elephant poaching persists. The article gives the impression that it does for want of a way to tell when humans walk among elephants in the wild. The first red flag in the article, to me at least, is related to this issue and turns up in the opening itself:

    When Anika Puri visited India with her family four years ago, she was surprised to come across a market in Bombay filled with rows of ivory jewelry and statues. Globally, ivory trade has been illegal for more than 30 years, and elephant hunting has been prohibited in India since the 1970s. “I was quite taken aback,” the 17-year-old from Chappaqua, New York, recalls. “Because I always thought, ‘well, poaching is illegal, how come it really is still such a big issue?’”

    I admit I take a cynical view of people who remain ignorant in this day and age of the bigger problems assailing the major realms of human enterprise – but a 17-year-old being surprised by the availability of ivory ornaments in India is pushing it, and more so her being surprised that there’s a difference between the existence of a law and its proper enforcement. Smithsonian also presents Puri’s view as an outsider’s – which she is, in more than the geographical sense – followed by her resolving to do something about it from the outside. That was the bigger issue and a clear sign of the narrative to come.

    Poaching and animal-product smuggling persist in India, among other countries, sensu lato because of a lack of money, a lack of personnel, misplaced priorities and malgovernance and incompetence. The first and the third reasons are related: the Indian government’s conception of how the country’s forests ought to be protected regularly excludes the welfare of the people living in and dependent on those forests, and thus socially and financially alienates them. As a result, some of those affected see a strong incentive in animal poaching and smuggling. (There are famous exceptions to this trend, like the black-necked crane of Arunachal Pradesh, the law kyntang forests of Meghalaya or the whale sharks off Gujarat, but they’re almost always rooted in spiritual beliefs – something the IUCN wants to press to the cause of conservation.)

    Similarly, forest rangers are underpaid, overworked, use dysfunctional or outdated equipment and, importantly, are often caught between angry locals and an insensitive local government. In India, theirs is a dispiriting vocation. In this context, the use of drones plus infrared cameras that each cost Rs 20,000 is laughable.

    The ‘lack of personnel’ is a two-part issue: it helps the cause of animal conservation if the personnel include members of local communities, but they seldom do; second, India is a very large country, so we need more rangers (and more drones!) to patrol all areas, without any blind spots. Anika Puri’s solution has nothing on any of these problems – and I don’t blame her. I blame the Smithsonian for its lazy framing of the story, and in fact for telling us nothing of whether she’s already aware of these issues.

    The second problem with the framing has to do with ‘encouraging a smart person to do more’ on the one hand and the type of solution being offered to a problem on the other. This one really gets my goat. When Smithsonian played up Puri’s accomplishment, such as it is, it effectively championed techno-optimism: the belief that technology is a moral good and that technological solutions can solve our principal crises (crises that techno-optimists like to play up so that they seem more pressing, and thus more in need of the sort of fixes that machine-centric governance can provide). In the course of this narrative, however, the sociological and political solutions that poaching desperately requires fall by the wayside, even as the trajectories of the tech and its developer are celebrated as a feel-good story.

    In this way, the Smithsonian article has effectively created a false achievement, a red herring that showcases its subject’s technical acumen instead of a meaningful development towards solving poaching. On the other hand, how often do you read profiles of people, young or old, whose insights have been concerned less with ‘hardware’ solutions (technological innovation, infrastructure, etc.) and more with improving and implementing the ‘software’ – that is, changing people’s behaviour, deliberating on society’s aspirations and effecting good governance? How often do you also encounter grants and contests of the sort that Puri won with her idea but which are dedicated to the ‘software’ issues?

  • The search for a powerful natural particle accelerator

    Earth is almost constantly beset by a stream of particles from space called cosmic rays. These particles consist of protons, bundles of two protons and two neutrons each (alpha particles), a small number of heavier atomic nuclei and a smaller fraction of anti-electrons and anti-protons. Cosmic rays often have high energy – typically up to half of 1 GeV. One GeV is almost the amount of energy that a single proton has at rest. The Large Hadron Collider (LHC) itself can accelerate protons up to 7,000 GeV.

    But this doesn’t mean cosmic rays are feeble: historically, some detectors have recorded high-energy and very-high-energy cosmic rays. The most energetic cosmic ray – dubbed the “oh my god” particle – was a proton recorded over Utah in 1991 with an energy of around 3 × 10¹¹ GeV, roughly 40-million-times higher than the energy to which the LHC can accelerate protons today. This proton was travelling at very nearly the speed of light in vacuum. This is a phenomenal amount of energy – about as much kinetic energy as a baseball moving at 95 km/hr but concentrated into the volume of a proton, which has around 10⁴²-times less space in which to hold that energy.
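
    The arithmetic behind the baseball comparison, as a quick sanity check (using the commonly quoted figure of 3.2 × 10²⁰ eV for the particle and a standard 145 g baseball):

    ```python
    # Sanity-checking the 'oh my god' particle comparison.
    eV = 1.602e-19                    # joules per electronvolt
    print(3.2e20 * eV)                # ~51 J: the particle's kinetic energy

    m_ball, v_ball = 0.145, 95 / 3.6  # 145 g baseball at 95 km/h (in m/s)
    print(0.5 * m_ball * v_ball**2)   # ~50 J: the same ballpark
    ```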

    Detectors have also spotted some cosmic-ray events with energies exceeding 1,000,000 GeV – or 1 PeV. They’re uncommon compared to all cosmic-ray events but relatively more common than the likes of the “oh my god” particle. Physicists are interested in them because they indicate the presence of a natural particle accelerator somewhere in the universe that’s injecting protons with ginormous amounts of energy and sending them blasting off into space. One term for such natural accelerators seemingly capable of accelerating protons to 0.1-1 PeV is ‘PeVatron’. And the question is: where can we find a PeVatron?

    There are three broad sources of cosmic rays: from the Sun, from somewhere in the Milky Way galaxy and from somewhere beyond the galaxy. Most of the cosmic rays we have detected have been from the latter two sources. In fact, there’s a curious feature called the ‘knee’ that physicists believe could distinguish between these sources. If you plot the number of cosmic rays on the y-axis and the energies of the cosmic rays on the x-axis, you’ll find yourself looking at the famous Swordy plot:

    The Swordy plot of cosmic-ray flux versus energy. The yellow zone accounts for solar cosmic rays, the blue zone for galactic cosmic rays and the pink zone for extragalactic cosmic rays. Credit: Sven Lafebre/Wikimedia Commons, CC BY-SA 3.0

    As you can see, the plot shows a peculiar bump, an almost imperceptible change in slope, at the transition from the blue zone to the pink – this is the ‘knee’. Physicists have interpreted the cosmic rays below the knee to be from within the Milky Way and those above it to be from outside the galaxy, although why the slope changes at this point isn’t clear.

    Before cosmic rays interact with other particles in their way, they’re called primary cosmic rays. When they do interact – such as with the atoms and molecules in Earth’s upper atmosphere – they produce a shower of secondary particles; these are the secondary cosmic rays. Physicists can get a tighter fix on the potential source of primary cosmic rays by analysing the direction at which they strike the atmosphere, the composition of the secondary cosmic rays, and the energies of both the primary and the secondary rays. This is how we have come to suspect supernovae are one source of within-the-galaxy cosmic rays, with some possible mechanisms of action.

    One, for example, is shockfront acceleration: a proton could get trapped between two shockwaves from the same supernova. As the outer wave slows and the inner wave charges in, the proton could bounce rapidly between the two shockfronts and emerge greatly energised out of a gap. However, we don’t know what fraction of cosmic rays, at different energies, supernovae can account for.
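
    The reason this mechanism can reach enormous energies is that the gain compounds: each bounce cycle multiplies the proton’s energy by a small factor, so the energy grows exponentially with the number of bounces. A toy sketch, with a purely illustrative per-bounce gain:

    ```python
    # Toy picture of shockfront (first-order Fermi) acceleration: each cycle
    # of bouncing between the converging fronts multiplies the proton's
    # energy by a small factor. The 10% gain per cycle is illustrative.
    energy_gev = 1.0
    gain = 1.10
    bounces = 0
    while energy_gev < 1e6:     # keep going until ~1 PeV (1e6 GeV)
        energy_gev *= gain
        bounces += 1
    print(bounces, energy_gev)  # ~145 bounces take 1 GeV past 1 PeV
    ```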

    Potential extragalactic sources include active galactic nuclei – the centres of galaxies, including the neighbourhood of supermassive black holes – and the extremely powerful gamma-ray bursts. Physicists have associated the latter with cataclysmic events like neutron-star mergers and the formation of black holes.

    However, exercises to triangulate the sources of high-energy cosmic rays are complicated by galactic magnetic fields (which curve the paths of charged particles). A proton accelerated by the shockfront mechanism could also bump into some other particle as it emerges, producing a flash of gamma rays that physicists can look for – but only if they have a way to isolate it from other sources of gamma rays in a supernova’s vicinity. This is difficult work.

    Researchers from the US recently analysed gamma-ray data collected by the Fermi Gamma-ray Space Telescope (FGST), in low-Earth orbit, of the supernova remnant G106.3+2.7. Astrophysicists have suspected for more than a decade that this object could be a PeVatron, and the US research team used FGST data to check if the suspicion could be true. The difficult bit? The data spanned 12 years.

    In 2008, physicists recorded very high energy (100-100,000 GeV) gamma rays from G106.3+2.7, located around 800 parsec (2,600 lightyears) away. The US research team figured that they could have been produced in two ways. Let’s call them Mechanism A and Mechanism B. Physicists already know Mechanism A is associated with cosmic rays while Mechanism B is not. The US team members used 12 years of data to characterise the gamma-ray, X-ray and radio emissions around the remnant, so they could determine which mechanism could account for all of them as observed, with the gamma rays as secondary cosmic rays.

    The team’s analysis found that the theory of Mechanism A almost exactly accounted for the energies of the gamma rays from the remnant while also accommodating the other radiation – whereas the theory of Mechanism B couldn’t explain the gamma rays and the remnant’s X-ray emissions together. In effect, the team had a way to justify the idea that G106.3+2.7 could be a PeVatron.

    Mechanism B is inverse Compton scattering by relativistic electrons. Inverse Compton scattering is when high-energy electrons collide with low-energy photons and the photons gain energy (in regular Compton scattering, the electrons gain energy). When this model couldn’t account for the gamma-ray emissions, the team invoked a modified version involving two sets of electrons, with each set accelerated to different energies by different mechanisms. But the team found that the FGST data continued to disfavour the involvement of leptons, and instead preferred the involvement of hadrons. Leptons – like electrons – are particles that don’t interact with other particles through the strong nuclear force. Hadrons, on the other hand, do, and they were implicated in Mechanism A: the decay of neutral pions.

    Pions are the lightest known hadrons and come in three types: π⁺, π⁰ and π⁻. Neutral pions are π⁰. They have a very short lifetime, around 85 attoseconds – that’s 0.000000000000000085 seconds. And when they decay, they decay into gamma rays, i.e. high-energy photons.

    Some 380,000 years after the Big Bang, a series of events in the universe left behind some radiation that survives to this day. This relic radiation is called the cosmic microwave background, a sea of photons in the microwave frequency pervading the cosmos. When a cosmic-ray proton collides with one of these photons, a delta-plus baryon is formed that then decays into a proton and a neutral pion. The neutral pion then decays to gamma rays, which are detectable as secondary cosmic rays.

    Source: Wikipedia/’Greisen–Zatsepin–Kuzmin limit’
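
    In symbols, the chain described above runs:

    ```latex
    % The GZK photopion process: a cosmic-ray proton strikes a CMB photon,
    % briefly forming a delta-plus baryon that decays to a proton and a
    % neutral pion; the pion then decays to two gamma-ray photons.
    p + \gamma_{\mathrm{CMB}} \longrightarrow \Delta^{+} \longrightarrow p + \pi^{0},
    \qquad \pi^{0} \longrightarrow \gamma + \gamma
    ```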

    Knowing the energy of the gamma rays allows physicists to work back to the energy of the cosmic ray. And according to the team’s calculations, the observed gamma-ray emission indicates G106.3+2.7 could be a PeVatron. As the team’s preprint paper concluded,

    “… only a handful, out of hundreds of radio-emitting supernova remnants, have been observed to emit very high energy radiation with a hard spectrum. The scarcity of PeVatron candidates and the rareness of remnants with very high energy emission make … G106.3+2.7 a unique source. Our study provides strong evidence for proton acceleration in this nearby remnant, and by extension, supports a potential role for G106.3+2.7-like supernova remnants in meeting the challenge of accounting for the observed cosmic-ray knee using galactic sources”.

    Featured image: An artist’s impression of supernova 1993J. Credit: NASA, ESA and G. Bacon (STScI).

  • Yes, scientific journals should publish political rebuttals

    (The headline is partly click-bait, as I admit below, because some context is required.) From ‘Should scientific journals publish political debunkings?’, Science Fictions by Stuart Ritchie, August 27, 2022:

    Earlier this week, the “news and analysis” section of the journal Science … published … a point-by-point rebuttal of a monologue a few days earlier from the Fox News show Tucker Carlson Tonight, where the eponymous host excoriated Dr. Anthony Fauci, of “seen everywhere during the pandemic” fame. … The Science piece noted that “[a]lmost everything Tucker Carlson said… was misleading or false”. That’s completely correct – so why did I have misgivings about the Science piece? It’s the kind of thing you see all the time on dedicated political fact-checking sites – but I’d never before seen it in a scientific journal. … I feel very conflicted on whether this is a sensible idea. And, instead of actually taking some time to think it through and work out a solid position, in true hand-wringing style I’m going to write down both sides of the argument in the form of a dialogue – with myself.

    There’s one particular exchange between Ritchie and himself in his piece that threw me off the entire point of the article:

    [Ritchie-in-favour-of-Science-doing-this]: Just a second. This wasn’t published in the peer-reviewed section of Science! This isn’t a refereed paper – it’s in the “News and Analysis” section. Wouldn’t you expect an “Analysis” article to, like, analyse things? Including statements made on Fox News?

    [Ritchie-opposed-to-Science-doing-this]: To be honest, sometimes I wonder why scientific journals have a “News and Analysis” section at all – or, I wonder if it’s healthy in the long run. In any case, clearly there’s a big “halo” effect from the peer-reviewed part: people take the News and Analysis more seriously because it’s attached to the very esteemed journal. People are sharing it on social media because it’s “the journal Science debunking Tucker Carlson” – way fewer people would care if it was just published on some random news site. I don’t think you can have it both ways by saying it’s actually nothing to do with Science the peer-reviewed journal.

    [Ritchie-in-favour]: I was just saying they were separate, rather than entirely unrelated, but fair enough.

    Excuse me but not at all fair enough! The essential problem lies in the tie-ins between what a journal does, why it does those things and what impressions they uphold in society.

    First, Science’s ‘news and analysis’ section isn’t distinguished by its association with the peer-reviewed portion of the journal but by its own reportage and analyses, intended for scientists and non-scientists alike. (Mea culpa: the headline of this post answers the question in the headline of Ritchie’s post, while being clear in the body that there’s a clear distinction between the journal and its ‘news and analysis’ section.) A very recent example was Charles Piller’s investigative report that uncovered evidence of image manipulation in a paper that has had an outsized influence on the direction of Alzheimer’s research since it was published in 2006. When Ritchie writes that the peer-reviewed journal and the ‘news and analysis’ section are separate, he’s right – but when he suggests that the former’s prestige is responsible for the latter’s popularity, he couldn’t be more wrong.

    Ritchie is a scientist and his position may reflect that of many other scientists. I recommend that he and others who agree with him consider the section from the PoV of a science journalist – they will immediately see, as we do, that it has broken many agenda-setting stories and published several accomplished journalists and scientists (Derek Lowe’s column being a good example). Another impression that could change with the change of perspective is the relevance of peer-review itself, and the deceptively deleterious nature of an associated concept Ritchie repeatedly invokes, which could well be the pseudo-problem at the heart of his dilemma: prestige. To quote from a blog post, published in February this year, in which University of Regensburg neurogeneticist Björn Brembs analysed the novelty of results published by so-called ‘prestigious’ journals:

    Taken together, despite the best efforts of the professional editors and best reviewers the planet has to offer, the input material that prestigious journals have to deal with appears to be the dominant factor for any ‘novelty’ signal in the stream of publications coming from these journals. Looking at all articles, the effect of all this expensive editorial and reviewer work amounts to probably not much more than a slightly biased random selection, dominated largely by the input and to probably only a very small degree by the filter properties. In this perspective, editors and reviewers appear helplessly overtaxed, being tasked with a job that is humanly impossible to perform correctly in the antiquated way it is organized now.

    In sum:

    Evidence suggests that the prestige signal in our current journals is noisy, expensive and flags unreliable science. There is a lack of evidence that the supposed filter function of prestigious journals is not just a biased random selection of already self-selected input material. As such, massive improvement along several variables can be expected from a more modern implementation of the prestige signal.

    Take the ‘prestige’ away and one part of Ritchie’s dilemma – the journal Science‘s claim to being an “impartial authority” that stands at risk of being diluted by its ‘news and analysis’ section’s engagement with “grubby political debates” – evaporates. Journals, especially glamour journals like Science, haven’t historically been authorities on ‘good’ science, such as it is, but have served to obfuscate the fact that only scientists can be. But more broadly, the ‘news and analysis’ business has its own expensive economics, and publishers of scientific journals that can afford to set up such platforms should consider doing so, in my view, with a degree and type of separation between these businesses according to their mileage. The simple reasons are:

    1. Reject the false balance: there’s no sensible way publishing a pro-democracy article (calling out cynical and potentially life-threatening untruths) could affect the journal’s ‘prestige’, however it may be defined. But if it does, would the journal be wary of a pro-Republican (and effectively anti-democratic) scientist refusing to publish on its pages? If so, why? The two-part answer is straightforward: because many other scientists as well as journal editors are still concerned with the titles that publish papers instead of the papers themselves, and because of the fundamental incentives of academic publishing – to publish the work of prestigious scientists and sensational work, as opposed to good work per se. In this sense, the knock-back is entirely acceptable in the hopes that it could dismantle the fixation on which journal publishes which paper.

    2. Scientific journals already have access to expertise in various fields of study, as well as an incentive to participate in the creation of a sensible culture of science appreciation and criticism.

    Featured image: Tucker Carlson at an event in West Palm Beach, Florida, December 19, 2020. Credit: Gage Skidmore/Wikimedia Commons, CC BY-SA 2.0.

  • In search of sandastros

    About a week ago, I wrote to ICANN asking for a list of all the .com domains that were still available. After I received the file a few days later, I used two pieces of code to extract all the single-word entries on the list and subsequently all the words that were listed in a dictionary. The idea and the instructions came from Derek Sivers. Finally, I randomly picked a letter – ‘s’, it turned out – and began googling all those words whose meanings I didn’t know. I like doing this because sometimes new words can tell you what to think or write about, instead of the usual way around, where what you write determines which words you wield. That’s how I discovered ‘sandastros’.
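
    I no longer have those scripts verbatim, but their gist was something like the following sketch (the file names are placeholders, and the dictionary path assumes a Unix wordlist):

    ```python
    # Step 1: keep only single-word entries (letters only, no digits/hyphens).
    # Step 2: keep only those that appear in a dictionary wordlist.
    import re

    with open("available_com_domains.txt") as f:
        names = [line.strip().lower().removesuffix(".com") for line in f]

    single_words = [n for n in names if re.fullmatch(r"[a-z]+", n)]

    with open("/usr/share/dict/words") as f:
        dictionary = {w.strip().lower() for w in f}

    for word in sorted(w for w in single_words if w in dictionary):
        if word.startswith("s"):  # the randomly picked letter
            print(word)
    ```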

    When I googled it, I found that the word comes from chapter 28, book 37 of Natural History, an ancient encyclopaedia put together in the first century AD by the Roman philosopher Pliny the Elder. Its contents (as translated by John Bostock) are available to read, book- and chapter-wise, here; the text is also available under a Creative Commons Attribution Share-Alike license. It’s a fascinating text throughout, including book 37, which is dedicated to what was known about precious stones in Pliny’s time. To quote at length from chapter 28:

    Of a kindred nature, too, is sandastros,1 known as “garamantites” by some: it is found in India, at a place of that name, and is a product also of the southern parts of Arabia. The great recommendation of it is, that it has all the appearance of fire placed behind a transparent substance, it burning with star-like scintillations within, that resemble drops of gold, and 2 are always to be seen in the body of the stone, and never upon the surface. There are certain religious associations, too, connected with this stone, in consequence of the affinity which it is supposed to bear with the stars; these scintillations being mostly, in number and arrangement, like the constellations of the Pleiades and Hyades; a circumstance which had led to the use of it by the Chaldæi in the ceremonials which they practise.

    Here, too, the male stones are distinguished from the female, by their comparative depth of colour and the vigorousness of the tints which they impart to objects near them: indeed the stones of India, it is said, quite dim the sight by their brilliancy. The flame of the female sandastros is of a more softened nature, and may be pronounced to be lustrous rather than brilliant. Some prefer the stone of Arabia to that of India, and say that this last bears a considerable resemblance to a smoke-coloured chrysolithos. Ismenias asserts that sandastros, in consequence of its extreme softness, will not admit of being polished, a circumstance which makes it sell all3 the dearer: other writers, again, call these stones “sandrisitæ.” One point upon which all the authorities are agreed is, that the greater the number of stars upon the stone, the more costly it is in price.

    The similarity of the name has sometimes caused this stone to be confounded with that known as “sandaresos,” and which Nicander calls “sandaserion,” and others “sandaseron.” Some, again, call this last-mentioned stone “sandastros,” and the former one “sandaresos.” The stone4 that is thus mentioned by Nicander, is a native of India as well as the other, and likewise takes its name from the locality where it is found. The colour of it is that of an apple, or of green oil, and no one sets any value on it.

    1 “Sandaresus” and “Sandasiros” are other readings. This stone has not been identified, but Ajasson is inclined to think that it may have been Aventurine quartz, and is the more inclined to this opinion, as that mineral is found in Persia, and sandastra or tchandastra is purely a Sanscrit word. The description, however, would hardly seem to apply to Aventurine.

    2 Dalechamps thinks that this is the same as the “anthracites” mentioned in B. xxxvi. c. 38, and identifies it either with our Anthracite, or else with pit-coal or bituminous coal. It is much more likely, however, that a precious stone is meant; and, in conformity with this opinion, Brotero and Ajasson have identified it with the Spinelle or scarlet Ruby, and the Balas or rose-red ruby, magnesiates of alumina.

    3 Littré suggests that the reading here probably might be “ob id non magno”—” sell not so dear.”

    4 It has not been identified.

    Such a fascinating stone. The “garamantites” is a reference to the Garamante people of the second century AD in the Sahara, according to another source, Nicholas Lemery’s Complete Materials Lexicon from 1721. Further searching for ‘sandastros’ led me to an essay published in June 1953 by a D.J. Greene, entitled ‘Smart, Berkeley, the Scientists and the Poets: A Note on Eighteenth-Century Anti-Newtonianism’. It doesn’t explain what sandastros is or where it originated, but it’s worth reading in full for its distinct premise:

    In spite of all that has been written in recent years about the effect on poets and poetry of the modern development of natural science, the conscientious student may be forgiven for wondering whether the historical evidence adduced in these discussions is yet adequate and unambiguous, and for feeling that much more research needs to be done into the history of the relations between science and poetry before any valid generalizations can be made. This article is intended to be a small contribution to that history.

    One line from the essay that I liked in particular: “John Livingston Lowes proved long ago, in his study of The Ancient Mariner, that poetry, however romantic, is not spun solely out of the bowels of poets.”

    Anyway, according to the fifth chapter of a compilation by a George Rapp of the University of Minnesota, entitled ‘Gemstones, Seal Stones, and Ceremonial Stones’ and published in 2009, sandastros is aventurine, a green-hued form of quartz (and matching the “green oil” colour of sandastros). Rapp doesn’t mention this but aventurine lends its name to aventurescence, a phenomenon referring to a peculiar reflection of light, resembling “metallic glitter”, within the material owing to some mineral structures. (Another mineral that exhibits aventurescence is sunstone, a form of plagioclase feldspar found in small parts of Europe, Australia and the US.)

    Aventurine. Credit: Simon Eugster/Wikimedia Commons, CC BY-SA 3.0

    However, recall that ref. 1 to the text by Pliny the Elder clarifies that his description doesn’t match that of aventurine, presumably referring to the “appearance of fire placed behind a transparent substance” and the “drops of gold” that are “always to be seen in the body of the stone, and never upon the surface”. There appears to be some dispute here, which I plan to follow up on later – but it remains that sandastros is, as I said, an utterly fascinating thing.

    Featured image credit: Holly Chisholm/Unsplash.

  • What makes ‘good science journalism’?

    From ‘Your Doppelgänger Is Out There and You Probably Share DNA With Them’, The New York Times, August 23, 2022:

    Dr. Esteller also suggested that there could be links between facial features and behavioral patterns, and that the study’s findings might one day aid forensic science by providing a glimpse of the faces of criminal suspects known only from DNA samples. However, Daphne Martschenko, a postdoctoral researcher at the Stanford Center for Biomedical Ethics who was not involved with the study, urged caution in applying its findings to forensics.

    There are two big problems here: 1) Esteller’s comment is at the doorstep of eugenics, and 2) the reporter creates a false balance by reporting both Esteller’s comment and Martschenko’s rebuttal to that comment, when in fact the right course of action would’ve been to drop this portion entirely, as well as take a closer look at why Esteller et al. conducted the study in the first place and whether the study paper and other work at the Esteller lab is suspect.

    This said, it’s a bit gratifying (in a bad way) when a high-stature foreign news publication like The New York Times makes a dangerous mistake in a science-related story. Millions of people are misinformed, which sucks, but when independent scientists and other readers publicly address these mistakes, their call-outs create an opportunity for people (though not as many as are misinformed) to understand exactly what is wrong and, more importantly from the PoV of readers in India, that The New York Times also makes mistakes, that it isn’t a standard-bearer of good science journalism and that being good is a constant and diverse process.

    1) “NYT also makes mistakes” is important to know if only to dispel the popular and frustrating perception that “all American news outlets are individually better than all Indian news outlets”. I had to wade through a considerable amount of this when I started at The Hindu a decade ago – at the hands of most readers as well as some colleagues. I still encounter it in a persistent way in the form of people who believe some article in The Atlantic is much better than an article on the same topic in, say, The Wire Science, for few, if any, reasons beyond the quality of the language. Of course, that one thing will always set The Atlantic apart from The Wire Science and its peers in India: English isn’t the first language for many of us – yet it seldom gets in the way of good storytelling. In fact, I’ve often noticed American publications in particular to be prone to oversimplification more than their counterparts in Europe or, for that matter, in India. In my considered (but also limited) view, the appreciation of science stories is itself a skill, and those in my country who aspire to it are often prone to the Dunning-Kruger effect.

    2) “NYT isn’t a standard-bearer of good science journalism” is useful to know because of the less-than-straightforward manner in which publications acquire a reputation for “good science journalism”. Specifically, publications aren’t equally good at covering all branches of scientific study; each is better in some fields than in others. Getting your facts right, speaking to all the relevant stakeholders and using sensitive language will get you 90% of the way, but you can tell publications apart by how well they cover the remaining 10% – which comes from beat knowledge, domain expertise and having the right editors.

    3) “Being good is a constant and diverse process” – ‘diverse’ because of the previous point and ‘constant’ because, well, that’s how it is. It’s not that our previous work doesn’t keep us in good standing but that we shouldn’t overestimate how much that standing counts for. This is especially so in this age of short attention spans, short-lived memories and the subtle but pervasive encouragement to be hurtful towards others on the internet. “Good science journalism” is a tag we need to earn by getting every single story right – and in this sense, you, the reader, are better off not doling out lifetime awards to outlets. Instead, understand that no outlet is going to be uniformly excellent at all times and evaluate each story on its own merits. This way, you’ll also create an opportunity for Indian news outlets to be free of the tyranny of unrealistic expectations and even to surprise you now and then with excellence of their own.

    Finally, none of this is to say that such mistakes are acceptable. They shouldn’t happen and they’re entirely preventable. Instead, it’s a reminder to keep your eyes peeled at all times – and not just when you’re reading an article produced by an Indian outlet.

  • On the record about a source of irritation

    I need to go on the record about a source of mild irritation that seems to resurface periodically: the recent Current Affairs article about the “dangerous populist science of Yuval Noah Harari”. It’s an excellent article; what irritates me is that it awakened so many more people (at least in my circles) to the superficiality of Harari’s books, especially Homo Deus and Sapiens, than several other articles published years ago appear to have managed. These books are seven and 11 years old, respectively – sufficient time for them to become popular as well as for their problems to become noticeable. I myself have known for at least seven years that Harari’s history books are full of red flags signalling a lack of engagement with the finer but crucial themes of the topics on which he pontificates. Anyone who has been trained in science or has engaged continuously with matters of science (like science journalists) should have been able to pick up on these red flags. Why didn’t they? Yet the Current Affairs article elicited the sort of response from many people that suggested they were glad to have been alerted to his nonsense.

    To me, this has all been baffling – and symptomatic of the difficult problem of determining whom we can learn about good science from, without that determination devolving into bad gatekeeping. There are many simple solutions to this difficult problem, of course, but their claim to simplicity is undermined by the fact that people at large don’t adopt them. So it is that thousands pick up Homo Deus, believe they’ve been enlightened science-wise and then, years later, marvel at a reality-check. Some of these solutions: familiarise yourself with the ‘index of evidence’; mind ad verecundiam: trust experts on the specific topic more than, say, a theoretical physicist writing about mRNA vaccines; attribute all claims openly to their firsthand sources; take even mild conflicts of interest very, very seriously (red-flag #2435: Silicon Valley techbros swooned over Harari’s books; the CoI here is that they’re technoptimists, adherents of a technocratic ideology that refuses to admit the precepts of basic sociology and thus focuses on dog-whistles); and always act in good faith.

    All such habits of good science, but especially the last one, need to be instilled in all people (not just scientists and science journalists) over time, so that everyone can communicate good science well. But even then you might not learn that you shouldn’t get your science from Harari or Steven Pinker or others of their ilk – so please remember it now and don’t make this mistake again. And I, in turn, will try to stop making the mistake of assuming that timely reader interest in a topic is entirely predictable.

    Featured image: Modified photos of Yuval Noah Harari, March 2017. Credit (original): Daniel Naber/Wikimedia Commons, CC BY-SA 4.0.

  • Dams are bad for rivers. Are skyscrapers bad for winds?

    I was recently in Dubai and often in the shadow of very tall buildings, including the Burj Khalifa and many of its peers on the city’s famed Sheikh Zayed Road. The neighbourhood in which my relatives in the city live has also acquired several new tall apartment buildings in the last decade. My relatives lost their view of the sunrise, sure, but they also lost the wind as and when it blew. And I began to wonder whether, just as dams and gates can kill a river by destroying its natural flow, skyscrapers could distort the wind and consequently the way both people and air pollution are affected by it.

    Wind speed is particularly interesting. When architects design tall buildings, they need to account for the structure’s ability to withstand the wind, whose speed increases with altitude and whose effects on the structure also diversify. For example, when a building splits an oncoming wind current to either side as it flows past, the current forms vortices on the building’s leeward side. This phenomenon is called vortex-shedding (see the video below). The formation of these vortices causes the wind pressure around the building to fluctuate in a way that can sway the building from side to side. Depending on the building’s integrity and design, this can lead to anything from cracked window glass to… well, catastrophe.
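
    To make the sway mechanism concrete, here’s a minimal sketch – my own illustration, not something from the post or any engineering reference it cites – of the standard Strouhal relation, f = St·U/D, commonly used to estimate how often vortices peel off a bluff body. All the numbers are assumptions.

    ```python
    # Minimal sketch: vortex-shedding frequency via the Strouhal relation.
    # f = St * U / D, where U is the wind speed, D the building's width and
    # St the (dimensionless) Strouhal number, ~0.2 for bluff bodies at high
    # Reynolds numbers. Illustrative values only.

    def shedding_frequency(wind_speed_ms: float, width_m: float,
                           strouhal: float = 0.2) -> float:
        """Approximate vortex-shedding frequency in Hz."""
        return strouhal * wind_speed_ms / width_m

    # A hypothetical 40 m-wide tower in a 20 m/s wind sheds vortices at
    # ~0.1 Hz, i.e. one pressure swing roughly every ten seconds. If that
    # is close to the tower's natural sway frequency, resonance can
    # amplify the side-to-side motion.
    print(f"{shedding_frequency(20.0, 40.0):.2f} Hz")
    ```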

    However, it seems such effects – of the wind on buildings – are more widely studied than the effects of tall buildings on the wind itself. For starters, a building that presents a flat face to oncoming wind can force the wind to scatter across the face (especially if the building is the tallest in the area). Part of the wind flows upwards along the building, some flows around the sides and some flows downwards. The last has been known to produce downdraughts strong enough to topple standing lorries and move cars.

    The faster the wind, the faster the downdraught. A paper published in December 2019 reported that the average wind speed around the world has been increasing since 2010. The paper was concerned with the effects of this phenomenon on opportunities for wind-based power but it should be interesting to analyse its conclusions vis-à-vis the world’s, including India’s, skyscrapers as well.

    If the streets around the building are too narrow for a sufficient distance, they can further accelerate the downdraught, as well as natural low-altitude winds, turning the paths into deadly wind tunnels. This is due to the Venturi effect. A 1990 study found that trees can help counter it.
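
    For a sense of the magnitudes involved, here’s a rough sketch of the continuity relation that underlies the Venturi effect – entirely my own illustration with assumed street dimensions, not figures from the 1990 study:

    ```python
    # Rough sketch: the Venturi effect via the continuity relation.
    # For approximately incompressible flow, A1 * v1 = A2 * v2 - so as a
    # street's cross-section shrinks, the wind through it speeds up
    # proportionally. All numbers are illustrative assumptions.

    def venturi_speed(v_in_ms: float, width_in_m: float,
                      width_out_m: float) -> float:
        """Wind speed in the narrowed section, assuming equal heights."""
        return v_in_ms * (width_in_m / width_out_m)

    # A 5 m/s breeze entering a street that narrows from 30 m to 10 m
    # between buildings would, in this idealisation, emerge at ~15 m/s.
    print(f"{venturi_speed(5.0, 30.0, 10.0):.1f} m/s")
    ```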

    With the exception of Mumbai, most Indian cities don’t yet have the skyscraper density of, say, Singapore, New York or Dubai, but the country is steadily urbanising and its cities’ population densities are on the rise. (Pardon me, I’ve made a rookie mistake: skyscrapers aren’t high density – see this and this. Instead, let me say:) The rich in India are becoming richer, and as cities expand, there’s no reason why more skyscrapers shouldn’t pop up – either as lavish residences for the ultra-wealthy or to accommodate corporate offices. We’re all familiar by now with the obsession among the powers that be with building ever-taller structures as a pissing contest.

    The Mumbai skyline as seen from over the Bandra-Worli sea-link. Credit: Editor8220/Wikimedia Commons, CC BY-SA 4.0

    This possibility is encouraged by the fact that most of India’s cities (if not all of them) are semi-planned at best, and city officials seldom enforce building codes. Experts have written about the effects of the latter on Indians’ exposure to hydrological and seismological disasters (remember: buildings kill people), but in future we should also expect effects arising from buildings’ interactions with the wind.

    Poorly enforced building codes, especially when helped along by corrupt governments, also enable builders to violate floor-space indices and build structures so tall that they exacerbate water shortages, water pollution, local road traffic, power consumption, etc. The travails of South Usman Road in Chennai, where I lived for many years, come to mind. In fact, it’s telling that India’s tallest building, the Palais Royal in Mumbai, has been beleaguered by litigation over illegalities in its construction. According to a 2012 post on the Structural Engineering Forum of India website, the consulting firm RWDI analysed the effects of winds on the Palais Royal – but the post has nothing to suggest the reciprocal analysis, of the building’s effects on the wind, was also done.

    Remember also that most of India’s cities already have very polluted air (AQI in excess of 200), so we can expect the downdraughts to be foul as well, effectively bringing pollutants down to where people walk. I’m similarly concerned about the ability of relatively faster winds to disperse pollutants if they’re going to be scattered more often by a higher density of skyscrapers – akin to the concept of a mean free path in physics (see the sketch below).
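
    For whatever the analogy is worth, here’s a back-of-the-envelope sketch of the mean-free-path idea applied to buildings – the tower density and width are numbers I’ve assumed purely for illustration:

    ```python
    # Back-of-the-envelope sketch of the mean-free-path analogy.
    # In kinetic theory, lambda = 1 / (n * sigma); treating skyscrapers as
    # obstacles in the wind's (2D) path, n is the number of towers per
    # square metre and sigma the width each tower presents to the wind.
    # Assumed values only.

    def mean_free_path_m(towers_per_km2: float, tower_width_m: float) -> float:
        """Average distance (m) the wind travels before meeting a tower."""
        n = towers_per_km2 / 1e6          # convert to towers per m^2
        return 1.0 / (n * tower_width_m)  # 2D analogue of 1/(n * sigma)

    # 20 towers per km^2, each ~50 m wide, give a 'mean free path' of
    # about 1,000 m; doubling the density halves it.
    print(f"{mean_free_path_m(20, 50):.0f} m")
    ```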

    One thing is for sure: our skyscrapers’ wind problem isn’t just going to blow over.

    Featured image: My photo of the Burj Khalifa in Dubai, August 2022. Not available to reuse.