
Science

Cornell tests smart, resilient underground infrastructure


The future looks “smart” for underground infrastructure after a first-of-its-kind experiment testing advanced sensors was conducted June 6 at the Cornell Geotechnical Lifelines Large-Scale Testing Facility. It was the first time the sensors had been used to monitor buried infrastructure, and it gave an unprecedented look at a pipe’s ability to elongate and bend while subjected to ground failure.

Cornell tests ‘smart,’ resilient underground infrastructure (Cornell Chronicle)

Reloadable, 3D printed tread band from Michelin

The reloadable tread band would be applied at a service station equipped with modular print heads: in less time than an oil change, the customer would have fresh tread printed onto their tires and drive away.


World’s first climate disclosure lawsuit

Pressure is mounting on the private sector to consider climate change risk in annual reports after the world’s first climate disclosure lawsuit was lodged today (8 August).

Lawyers from Environmental Justice Australia (EJA) have filed proceedings on behalf of two shareholders against one of Australia’s top four banks, the Commonwealth Bank (CommBank), for failing to adequately disclose climate risk in the lender’s 2016 annual report.

This oversight means that the bank failed to provide a true and fair view of its financial position and performance, as required by the Corporations Act, the claim alleges. It also seeks an injunction to prevent the bank from making the same omissions in future annual reports, and raises concerns about reputational risks to the bank regarding funding for a proposed coal mine in Queensland.

“We believe the matter is of significant public interest,” Environmental Justice Australia lawyer David Barnden said. “It should set an important precedent that will guide other companies on disclosing climate change risks.”

New trend

The announcement comes amid mounting pressure for the business community to treat climate change risks as a serious financial problem. Experts suggest that the value at risk, as a result of climate change, to global manageable assets ranges from $4.2trn to $43trn between now and the end of the century. Investors already fear that the next financial crisis will be climate-related.

Earlier this summer, the G20’s Task Force on Climate-related Financial Disclosures (TCFD) recommended that firms should disclose climate information as part of mainstream financial statements. A host of major companies, including eleven of the world’s top banks, such as Barclays and Santander, have since committed to adopt key elements of the TCFD’s new framework.

Commenting on today’s announcement, UK-based environmental law firm ClientEarth said that the case against the CommBank could signal a new trend in climate risk litigation.

“With this case, the risk of litigation over poor climate disclosure has become a clear reality for companies,” ClientEarth lawyer Daniel Wiseman said. “It’s unsurprising that investors are demanding companies properly disclose climate change risks – particularly where these companies have clear exposure to the fossil fuel sector. Shareholders will not be content to stand by silently without reassurance that climate risk is being adequately managed.

“Many other countries already have similar disclosure requirements to Australia. In the UK, the Bank of England and other financial regulators have now made clear that financial institutions like banks and insurers should be considering climate risk. To limit exposure to this sort of litigation, business leaders need to get acquainted, and quickly, with their legal duties and with emerging industry standards, like the TCFD recommendations.”

Around 60% of the world’s biggest investors are taking steps to protect their portfolios. HSBC has launched a $1bn green bond portfolio aimed at the renewable energy sector, while Goldman Sachs announced it would channel $150bn into clean energy financing and investments by 2025.

A better way to make holograms

Is by copying butterflies’ wings

HOLOGRAPHY is a useful technology, but somehow faintly disappointing. The fantasy is of a “Star Trek” style holodeck, or even the less ambitious idea of three-dimensional television pictures. The reality, for the man or woman in the street, is smudgy images that act as security features on credit cards, passports and an increasing number of banknotes.

Holography does have many uses beyond this. These include projecting 3D art displays in museums, enabling measurements to be made with great precision using a technique called holographic interferometry, and accurately assessing the three dimensions of packages for shipping companies. But the difference between the high-quality holograms required for those applications and the quotidian credit-card variety is that a laser and special equipment are needed to project them. Indeed, if the hologram is in colour, three lasers are needed, one for each primary: red, green and blue. The result is not always persuasive. Getting the primary holograms to overlap perfectly is hard. And to see the picture usually requires a darkened room.

All this led Rajesh Menon, an engineer at the University of Utah, to start eyeing up butterflies—notably the bright blue morphos found in Central and South America. The striking colour of a morpho’s wings (see picture) is the product not of pigment, but of the structure and arrangement of the scales on those wings. These scales refract light, splitting it into its component wavelengths, and also diffract it, causing those various wavelengths to interfere with one another. As a result, blue wavelengths are intensified and reflected back to the onlooker while those of other colours either cancel each other out or are scattered, and thus minimised. Moreover, unlike today’s holograms, the colour and appearance of a morpho’s wings remain the same, regardless of the angle they are viewed from.
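
For readers who want the physics made concrete, here is a minimal numerical sketch of that interference effect, in Python, using an idealised single chitin layer. The refractive index and thickness below are assumed illustrative values, not measurements (real morpho scales are far more elaborate multilayers), but the quarter-wave condition they satisfy puts the reflection peak squarely in the blue.

```python
import numpy as np

# Idealised single chitin layer standing in for a morpho scale. The
# refractive index and thickness are assumed, textbook-style values,
# not measurements from any study of real wings.
n_chitin = 1.56   # refractive index of chitin (assumed)
d = 75e-9         # layer thickness in metres (assumed)

wavelengths = np.linspace(380e-9, 700e-9, 321)  # visible range

# Two-beam interference between reflections from the top and bottom of
# the layer. The air-to-chitin reflection picks up a half-wave phase
# shift; the chitin-to-air one does not, so the beams reinforce when
# 2 * n * d = (m - 1/2) * wavelength.
delta = 2 * np.pi * (2 * n_chitin * d) / wavelengths + np.pi
intensity = np.cos(delta / 2) ** 2   # normalised fringe profile

peak = wavelengths[np.argmax(intensity)]
print(f"Strongest reflection near {peak * 1e9:.0f} nm")   # ~468 nm: blue
```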

Dr Menon and his team thought mimicking the way morphos refract and diffract light might thus let them create more realistic and usable holograms than today’s.

In a paper just published in Scientific Reports, they describe how they have done this.

A conventional hologram is made by splitting a laser beam in two, scanning one of the half beams over the object to be holographed, recombining the half beams and then capturing an image created by the recombined beams on a photographic film. The result is an interference pattern imprinted on the film by the interaction between the out-of-kilter half beams. Shine light (ideally of the same frequency as the original laser) on this pattern and the process is, in essence, reversed. That produces a 3D representation of the original object.
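
A toy simulation makes the recording step concrete. The geometry below is invented for illustration (a 633 nm laser, one object point 20 cm behind a 1 cm square film); it computes the fringe pattern, a Fresnel zone plate, that the film would capture.

```python
import numpy as np

# Toy recording geometry; all values are invented for illustration.
wavelength = 633e-9                           # HeNe laser line (assumed)
k = 2 * np.pi / wavelength
z_obj = 0.2                                   # object-to-film distance, metres

x = np.linspace(-5e-3, 5e-3, 1000)            # film plane coordinates
X, Y = np.meshgrid(x, x)

reference = 1.0                               # plane wave at normal incidence
r = np.sqrt(X**2 + Y**2 + z_obj**2)           # distance from the object point
object_wave = np.exp(1j * k * r)              # spherical wave, amplitude ignored

# The film records intensity only, but the cross term between the two
# beams imprints the object's phase as a fringe pattern: the information
# that later re-illumination turns back into a 3D image.
fringes = np.abs(reference + object_wave) ** 2
```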

Dr Menon’s approach differs from this established method in several ways. First, it dispenses with the laser. Second, the film on which the hologram is captured is not a smooth one but, rather, a sheet of transparent plastic with microscopic bumps and grooves in it. Third, the pattern of those bumps and grooves is created not photographically but as the product of calculations by a computer.

Instead of the laser, Dr Menon starts with multiple images, taken from different directions, of the object to be holographed. These can come either from a special, stereoscopic camera or, more prosaically, from a single camera moved around to different vantage points.

These images are then fed into a computer. Here, a special algorithm calculates how to shape the topography of the plastic sheet so that it will manipulate the light eventually used to illuminate that sheet in a way which creates the desired 3D image. In essence, the sheet’s bumps and grooves act like the scales of a morpho’s wings, refracting and diffracting the incident light to produce the desired effect.
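
The team’s algorithm is not spelled out here. A standard way of computing such a relief from a target image, though, is iterative phase retrieval; the sketch below uses the classic Gerchberg-Saxton loop purely as a stand-in for the general idea, not as a reconstruction of Dr Menon’s method.

```python
import numpy as np

def phase_hologram(target_image, n_iters=50):
    """Phase-only hologram whose far field approximates target_image.

    A classic Gerchberg-Saxton loop: a generic illustration of turning
    a desired image into surface relief, not the paper's algorithm.
    """
    amplitude = np.sqrt(target_image)                # desired far-field amplitude
    field = amplitude * np.exp(2j * np.pi * np.random.rand(*target_image.shape))
    for _ in range(n_iters):
        hologram = np.fft.ifft2(field)               # back to the hologram plane
        hologram = np.exp(1j * np.angle(hologram))   # keep phase only
        field = np.fft.fft2(hologram)                # forward to the image plane
        field = amplitude * np.exp(1j * np.angle(field))  # impose the target
    return np.angle(hologram)                        # phase map

relief = phase_hologram(np.random.rand(64, 64))      # toy target image

# A phase of 2*pi maps to a physical bump height of wavelength/(n - 1)
# for a transparent sheet of refractive index n, which is how a phase
# map becomes a topography to inscribe.
```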

Once the computer has calculated the topography needed to do this, that topography (or, rather, its inverse) is inscribed onto a master version using photolithography—a technique also employed to make computer chips. This master may then be used to stamp multiple copies of the hologram, in a similar fashion to that employed to make vinyl records.

Crucially, the result—having been created using ordinary light rather than special laser beams—does not require lasers to recreate the image. A beam of white light will do the trick. Even a torch will work. Using one, Dr Menon can generate holograms with a full spectrum of colours and with a richness which he estimates is up to ten times that of today’s most sophisticated holograms. The new holograms may also be viewed from all angles without distortion. And they cost a fraction of those produced by existing techniques.

For now, Dr Menon and his colleagues are focusing on the kind of holograms used as security features, although they have also created holographic images of 3D objects in free space. Eventually, they hope to make holographic movies, using devices called phase spatial light modulators controlled directly by the output from the hologram-generating algorithm. Such modulators deploy liquid crystals instead of bumps on a surface to manipulate light.

If that idea can be made to work, then fantasies such as holographic television might indeed be brought into being. A more immediate market, though, is replacing existing security holograms with ones that are clearer, harder to forge and viewable from any angle. Perhaps, if Dr Menon has his way, the portraits of heads of state and other worthies on banknotes will soon pop up to greet users as the notes are pulled from their wallets.

Genetic testing threatens the insurance industry

The gene is out of the bottle. Insurers worry about adverse selection; the insured worry about discrimination.

IF a genetic test could tell whether you are at increased risk of getting cancer or Alzheimer’s, would you take it? As such tests become more accessible, more and more people are saying “yes”. The insurance industry faces a few headaches as a result.

Once used only for medical reasons, basic predictive genetic tests can now be ordered online for a few hundred dollars. One company, 23andMe, in California, has collected some 4,000 litres of saliva since 2007, enlightening 2m people on their ancestry, health risks and what they may pass on to offspring. In April it received regulatory approval to screen for risk factors connected to ten diseases and genetic conditions, including late-onset Alzheimer’s and Parkinson’s. The ruling could open the floodgates for others to sell direct to consumers.

“Information is power”, argue many who take such tests. But insurers fear that without equal access to such information, they will lose out to savvy customers. Consumer groups, on the other hand, fear that if underwriters did have access to such information, people with “bad” genes might find themselves unfairly excluded from cover. Either way, the scientific advances could well disrupt insurance significantly.

Unlike diagnostic genetic tests, predictive ones are conducted on people without symptoms. The best-known example was provided by Angelina Jolie, an actress who discovered she had a gene mutation that markedly raised her risk of breast cancer. She underwent a double mastectomy.

Tests might influence financial as well as medical decisions. A person at increased risk of dying young may want to buy life insurance. Someone likely to contract cancer may buy cancer or critical-illness cover, which pays a lump sum upon diagnosis. Because predictive tests—unlike diagnostic ones—often need not be disclosed, the customer can secure an advantage over a future insurer.

So underwriters warn that predictive genetic testing could well lead to adverse selection. The New York Times recently reported on a woman who bought long-term care insurance after testing positive for ApoE4, a mutation of a gene related to increased risk of Alzheimer’s. The insurer had tested her memory three times before issuing the policy, but could not know about the genetic result. Robert Green, at Harvard University, found that people told they have the mutation were five times more likely to buy long-term care insurance than those without such information.

Asymmetry of information—when the customer knows more than the insurer—is the industry’s nightmare. If predictive tests further improve and become more common while non-disclosure rules stay in place, some insurance products might eventually die out. Either insurers would go belly-up, or premiums would become prohibitively expensive. Hence, argue some insurers, if the customer knows something relevant about their health, so should the insurer.
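
A toy calculation shows how such a spiral unwinds. Every number below is invented for illustration: nine in ten customers are low-risk, one in ten has private knowledge of being high-risk, and the insurer reprices each year at the average expected claim of whoever still buys.

```python
# Toy adverse-selection spiral; all figures are invented. One group has
# learned from a predictive test, which the insurer never sees, that its
# expected claims are high.
pool = [
    {"name": "low-risk",  "share": 0.9, "expected_claim": 1_000, "will_pay": 1_500},
    {"name": "high-risk", "share": 0.1, "expected_claim": 8_000, "will_pay": 10_000},
]

for year in range(3):
    if not pool:
        print("Market collapses: no one left to insure.")
        break
    total = sum(g["share"] for g in pool)
    premium = sum(g["share"] * g["expected_claim"] for g in pool) / total
    print(f"Year {year}: premium = ${premium:,.0f}")
    # Anyone whose willingness to pay falls below the premium drops out.
    pool = [g for g in pool if g["will_pay"] >= premium]
```

The premium jumps from $1,700 to $8,000 in a single round and the low-risk customers vanish, leaving the product either dead or affordable only to the very people the insurer least wants to cover.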

But tests might also help insurers. Christoph Nabholz, from Swiss Re, a reinsurance giant, is most excited about tests that spot early signs of cancer or cardiovascular disease. For life and health insurers, who want to keep people alive and well, such information could be invaluable. Discovery, a South African health insurer, plans to offer customers a test that maps part of their genome. The focus is on “actionable data”, where medical intervention or lifestyle change could mitigate risk, explains Jonathan Broomberg from Discovery.

This might help people who are already insured. But it worries those seeking new policies, who fear that underwriters may use predictive information to discriminate. Some might lose access to insurance. This raises ethical questions about when, if ever, genetic discrimination is acceptable. Moreover, since the relative role that genes play in the development of diseases is still being studied, some people might be unfairly and wrongly penalised.

Unpredictability rules

So regulations today often protect consumers from the mandatory disclosure of predictive tests. But the rules are patchy. In Britain the industry has agreed to a blanket moratorium, renewable every three years, on using predictive genetic information. The sole exception is Huntington’s chorea, where a test of one gene is infallible and has to be disclosed to an insurer for life cover worth more than £500,000 ($662,000). In America the Genetic Information Nondiscrimination Act bans health insurers (and employers) from using such results, but is silent on other types of insurance. In several countries life insurers may already ask for disclosure of predictive genetic tests for policies over a certain value.

But testing is rarely cut-and-dried. Ronnie Klein from the Geneva Association, an insurance-industry think-tank, says that, unlike Huntington’s, most illnesses stem from a number of factors, including lifestyle and environment, and a combination of genes. For example, although the ApoE4 allele increases the risk of Alzheimer’s, many without it still get the disease.

Some regulators, such as Germany’s, have outlawed direct-to-consumer tests. But nothing stops Germans from ordering from abroad, and, just as it became normal for life insurers to ask for family history, so insurers will surely eventually have access to relevant genetic information. The question will be what they are allowed to do with it. When blood tests for AIDS first appeared, insurers also fretted about adverse selection. Many jurisdictions ruled they could not be used for calculating health premiums, as these were a basic good, but could be used for life policies. As genetic testing spreads, society and insurers may face many similar difficult assessments.

A Fundamentally New Way of Harnessing Nature – Quantum Computing

David Deutsch, father of quantum computing

“I occasionally go down and look at the experiments being done in the basement of the Clarendon Lab, and it’s incredible.”

David Deutsch, of the University of Oxford, is the sort of theoretical physicist who comes up with ideas that shock and confound his experimentalist colleagues—and then seems rather endearingly shocked and confounded by what they are doing.

“Last year I saw their ion-trap experiment, where they were experimenting on a single calcium atom,” he says. “The idea of not just accessing but manipulating it, in incredibly subtle ways, is something I totally assumed would never happen. Now they do it routinely.”

Such trapped ions are candidates for the innards of eventual powerful quantum computers. These will be the crowning glory of the quantum theory of computation, a field founded on a 1985 paper by Dr Deutsch. He thinks the widely predicted “quantum supremacy” that eventually puts a quantum computation incontrovertibly ahead of a classical one will be momentous for scientists and laymen alike. He brushes off the fervent debate about whether the commercially available D-Wave computer offers a speed advantage.

“If it works, it works in a completely different way that cannot be expressed classically. This is a fundamentally new way of harnessing nature. To me, it’s secondary how fast it is.”

Still, these are steps towards a powerful, universal quantum computer that could solve a lot of thorny problems. To describe such a device properly is to account not only for the states of each of its constituent bits but also for all the couplings between them, for each is entangled with every other. A good-sized one would maintain and manipulate a number of these states that is greater than the number of atoms in the known universe. For that reason, Dr Deutsch has long maintained that a quantum computer would serve as proof positive of universes beyond the known: the “many-worlds interpretation”.
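
The arithmetic behind that comparison is brief: an n-qubit state needs 2^n complex amplitudes, and the observable universe is usually credited with roughly 10^80 atoms.

```python
from math import log2

# Rough arithmetic behind the claim: n entangled qubits require 2**n
# complex amplitudes to describe, and the observable universe is
# commonly estimated to contain about 10**80 atoms.
atoms = 10 ** 80
print(f"log2(10^80) = {log2(atoms):.1f}")   # ~265.8

# So a machine of just 266 qubits already has more amplitudes in its
# state vector than there are atoms in the known universe.
```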

This controversial hypothesis suggests that every time an event can have multiple quantum outcomes, all of them occur, each “made real” in its own, separate world.

At the same time, quantum computation, and the quantum-mechanical theory from which it springs, are all subsumed in a newer idea that Dr Deutsch is pursuing. He contends that what he calls his “constructor theory” provides a perspective that will lead to the rewriting of physics altogether. As with classical computer science, quantum computation and even genetics, it is based on the role of information.

But rather than letting physical laws define what is and is not possible, as science does now, constructor theory asserts that those laws actually arise from what is and is not possible.

From observed possibilities, a mathematical object called a constructor can be fashioned. Operating with and on these constructors gives rise to what Dr Deutsch reckons is a theory even more fundamental than quantum mechanics. He is enthusiastic about the theory’s potential to upend the very foundations of science, but concedes that testing it experimentally remains a distant possibility. Then again, a few decades ago he would have said the same thing about quantum computers.

A chance finding may lead to a treatment for multiple sclerosis

A neat example of the role of serendipity in science

EXPERIMENTS that go according to plan can be useful. But the biggest scientific advances often emerge from those that do not. Such is the case with a study just reported in the Proceedings of the National Academy of Sciences. When they began it, Hector DeLuca of the University of Wisconsin, Madison, and his colleagues had been intending to examine the effects of ultraviolet (UV) light on mice suffering from a rodent version of multiple sclerosis (MS). By the project’s end, however, they had in their hands two substances which may prove valuable drugs against the illness.

Multiple sclerosis is an autoimmune disease. This means it is caused by a victim’s immune system turning on and destroying parts of his own body. In the case of MS the targets of these attacks, which may continue for years, are the fatty sheaths that insulate nerve cells and thus help nervous impulses to propagate. People suffering from MS are often weakened, and sometimes physically disabled by it, and may also become blind.

What drives the immune system to behave in this way remains mysterious, but in the 1970s researchers uncovered a promising clue when they noticed that MS is rarer near the equator than it is at high latitudes. The first hypothesis proposed to explain this observation was that vitamin D (a substance created by sunlight’s action on precursor molecules in the skin) might be helping to prevent MS. That made sense, since those living in the tropics receive more sunlight than do those in temperate zones. Sadly, follow-up experiments failed to support the notion. Those experiments did, though, lead Dr DeLuca to discover that the preventive effect is associated with a particular sort of sunlight—UV with a wavelength of between 300 and 315 nanometres (billionths of a metre).

His latest experiment was intended to dig deeper into this observation, by using this type of light to irradiate mice that had been injected with chemicals known to cause the rodent equivalent of MS. In a preliminary study he and his colleagues therefore shaved the backs of 12 of these mice and exposed them to UV of the appropriate wavelength every day for a month. To be useful, an experiment like this needs controls with which its results can be compared. Dr DeLuca devised three of these. In one, he applied one of six types of sunscreen to a dozen other shaved mice before exposing them to the ultraviolet rays. To another dozen he applied the sunscreen but not the ultraviolet. And a final 12, though also shaved, were neither exposed to UV nor slathered with sunscreen. He then monitored all four groups for signs of murine multiple sclerosis, such as loss of tail tone, unsteady gait and limb paralysis.
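
In outline this is a 2x2 factorial design, sketched below; the group sizes are as given above, while how the six sunscreen brands were allocated within a cell is not stated in the article.

```python
# Schematic of the 2x2 factorial layout: each combination of UV and
# sunscreen gets its own dozen shaved mice (sizes from the article; the
# split of the six sunscreen brands within a cell is left open here).
groups = {
    ("UV",    "no sunscreen"): 12,   # experimental group
    ("UV",    "sunscreen"):    12,   # control 1: does sunscreen block the UV effect?
    ("no UV", "sunscreen"):    12,   # control 2: sunscreen alone
    ("no UV", "no sunscreen"): 12,   # control 3: baseline disease course
}
for (uv, screen), n in groups.items():
    print(f"{n} mice: {uv}, {screen}")
```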

When the experiment began, he and his colleagues expected that the disease would progress more slowly in the experimental group than in the control groups, and that its rate of progress in all three control groups would be the same, since any effect of exposure to ultraviolet would be negated by the sunscreen. But that was not what happened. Instead, three of the six types of sunscreen served to suppress the disease’s progression by themselves—that is, even in animals not exposed to UV. Indeed, one of them, Coppertone, was as effective at doing so as ultraviolet light alone.

In light of this Dr DeLuca and his colleagues carried out further experiments, which confirmed the initial findings. They also studied the ingredients lists of the three protective sunscreens and tested each of the compounds therein, one at a time, on other batches of mice. This revealed that two of these compounds, homosalate and octisalate, were particularly effective at keeping the rodent version of multiple sclerosis in check.

Why these particular substances suppress MS remains to be discovered. Dr DeLuca suspects that it has to do with their ability to inhibit production of cyclooxygenase, an enzyme commonly found in the lesions characteristic of multiple sclerosis. But regardless of the mechanism, if homosalate and octisalate, or other molecules similar to them, can suppress the progression of the disease in people as effectively as they do in rodents it will be a signal example both of the role of serendipity in science and of the crucial importance of doing proper controls.
