Tuesday, January 31, 2017

The new supercomputer “Minerva” has been put into operation at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute, AEI).

With 9,504 compute cores, 38 terabytes of memory and a peak performance of 302.4 teraflops, it is more than six times as powerful as its predecessor. The scientists of the "Astrophysical and Cosmological Relativity" department can now compute significantly more gravitational waveforms and carry out more complex simulations.
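As a rough check of those headline figures, dividing the quoted peak performance by the number of cores gives the theoretical per-core throughput, and the "six times" claim implies an upper bound on the predecessor's peak. The short sketch below only rearranges the numbers quoted above; the per-core value and the Datura estimate are back-of-the-envelope illustrations, not official specifications.

    # Back-of-the-envelope restatement of the Minerva figures quoted above.
    # The per-core value and the predecessor estimate are illustrations, not specs.

    cores = 9_504                  # compute cores
    peak_tflops = 302.4            # quoted peak performance, TFlop/s
    speedup = 6                    # "more than six times as powerful" as Datura

    per_core_gflops = peak_tflops * 1_000 / cores   # ~31.8 GFlop/s per core
    datura_peak_upper = peak_tflops / speedup       # implies Datura peaked below ~50 TFlop/s

    print(f"Theoretical peak per core: ~{per_core_gflops:.1f} GFlop/s")
    print(f"Implied predecessor peak:  below ~{datura_peak_upper:.1f} TFlop/s")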

Minerva is to solve Einstein’s equations

Above all, the new computer cluster – named after the Roman goddess of wisdom – is used for the calculation of gravitational waveforms. These ripples in spacetime – first measured directly in September 2015 – originate when massive objects such as black holes and neutron stars merge. Obtaining the exact forms of the emitted gravitational waves requires numerically solving Einstein's complicated, non-linear field equations on supercomputers like Minerva. The AEI has been at the forefront of this field for many years, and its researchers have made important contributions to the software tools of the trade.

Tracking down faint signals in the detectors’ background noise and inferring information about astrophysical and cosmological properties of their sources requires calculating the mergers of many different binary systems such as binary black holes or pairs of a neutron star and a black hole, with different combinations of mass ratios and individual spins.

“Such calculations need a lot of compute power and are very time-consuming. The simulation of the first gravitational wave measured by LIGO lasted three weeks – on our previous supercomputer Datura,” says AEI director Professor Alessandra Buonanno. “Minerva is significantly faster and so we can now react even quicker to new detections and can calculate more signals.”

Ready for the gravitational wave detectors’ second science run
The gravitational wave detectors Advanced LIGO (aLIGO) in the USA and GEO600 in Ruthe near Hanover started their second observational run ("O2") on 30 November 2016. aLIGO is now more sensitive than ever before: the detectors will be able to detect signals from about 20% further away than in O1, which increases the expected event rate by more than 70%.
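The jump from "20% further away" to "more than 70% more events" follows from the fact that the volume of space surveyed – and hence the expected number of sources – grows roughly with the cube of the detection range. A minimal illustration of that scaling, using only the 20% figure quoted above:

    # Expected event rate scales with the volume surveyed, i.e. the cube of the range.
    range_factor = 1.20                 # detectors reach ~20% further than in O1
    volume_factor = range_factor ** 3   # volume ~ range^3

    print(f"Expected event-rate factor: {volume_factor:.2f}")   # ~1.73, i.e. >70% more events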


Credit: mpg.de

Numerical simulation of the gravitational-wave event GW151226, associated with a binary black-hole coalescence. The strength of the gravitational wave is indicated by elevation as well as color, with cyan indicating weak fields and orange indicating strong fields. The sizes of the black holes as well as the distance between the two objects are increased by a factor of two to improve visibility. The colors on the black holes represent their local deformation due to their intrinsic rotation (spin) and tides.


Numerical-relativistic simulation: S. Ossokine, A. Buonanno (Max Planck Institute for Gravitational Physics) and the Simulating eXtreme Spacetimes project; scientific visualization: T. Dietrich, R. Haas (Max Planck Institute for Gravitational Physics)

Researchers in the Astrophysical and Cosmological Relativity division at AEI have improved the capabilities of the aLIGO detectors to observe gravitational-wave sources and estimate their parameters ahead of O2. For the search for binary black hole mergers, they have refined their waveform models using a synergy between numerical and analytical solutions of Einstein's equations of general relativity: they calibrated approximate analytical solutions (which can be computed almost instantly) against precise numerical solutions (which take a very long time even on powerful computers). This allows the AEI researchers to use the available computing power more effectively, to search more quickly and discover more potential signals from merging black holes in O2, and to determine the nature of their sources. AEI researchers have also prepared simulations of merging neutron-star and boson-star binaries. These can be observed simultaneously in electromagnetic and gravitational radiation, and can provide precise new tests of Einstein's theory of general relativity.

Other articles on the same theme:








Story source: 
The above post is reprinted from materials provided by MPG. Note: Materials may be edited for content and length.

Researchers are close to discovering the factor that determined the evolution of life on Earth

Credit: klss/Shutterstock
Modern science has advanced significantly over the last couple of decades. We’ve managed to answer several of the world’s most long-standing questions, but some answers have continued to elude today’s scientists, including how life first emerged from Earth’s primordial soup.

However, a collaboration of physicists and biologists in Germany may have just found an explanation for how living cells first evolved.

In 1924, Russian biochemist Alexander Oparin proposed the idea that the first living cells could have evolved from liquid droplet protocells.

He believed these protocells could have acted as naturally forming, membrane-free containers that concentrated chemicals and fostered reactions.

Aleksandr Oparin (right) and Andrei Kursanov in the enzymology laboratory, 1938 Credit: wikipedia

In their hunt for the origin of life, a team of scientists from the Max Planck Institute for the Physics of Complex Systems and the Institute of Molecular Cell Biology and Genetics, both in Dresden, drew on Oparin's hypothesis by studying the physics of 'chemically active' droplets (droplets that cycle molecules into and out of the fluid that surrounds them).

The researchers realised that, unlike a 'passive' type of droplet - like oil in water, which will just continue to grow as more oil is added to the mix - chemically active droplets grow to a set size and then divide of their own accord.

This behaviour mimics the division of living cells and could, therefore, be the link between the nonliving primordial liquid soup from which life sprung and the living cells that eventually evolved to create all life on Earth.

"It makes it more plausible that there could have been a spontaneous emergence of life from nonliving soup," said Frank Jülicher, co-author of the study that appeared in the journal Nature Physics in December.

It’s an explanation of "how cells made daughters," said lead researcher David Zwicker. "This is, of course, key if you want to think about evolution."


Add a droplet of life

Some have speculated that these proto-cellular droplets might still be present inside our cells, "like flies in life's evolving amber".

To explore that hypothesis, the team studied the physics of centrosomes, which are organelles active in animal cell division that seem to behave like droplets.

Zwicker modelled an 'out-of-equilibrium' centrosome system that was chemically active and cycling constituent proteins continuously in and out of the surrounding liquid cytoplasm.

The proteins behave as either soluble (state A) or insoluble (state B). An energy source can trigger a state change, causing a protein in state A to transform into state B by overcoming a chemical barrier.

As long as there was an energy source, this chemical reaction could happen.

"In the context of early Earth, sunlight would be the driving force," Jülicher said.

Oparin famously believed that lightning strikes or geothermal activity on early Earth could've triggered these chemical reactions in the liquid protocells.

According to Zwicker, this constant chemical influx and efflux would counterbalance each other only once the active droplet reached a certain volume, at which point it would stop growing.

Typically, the droplets could grow to about tens or hundreds of microns, according to Zwicker’s simulations. That’s about the same scale as cells.
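The size-limited growth Zwicker describes can be mimicked with a toy model in which droplet material is supplied through the surface (influx proportional to surface area) while converted material is lost throughout the interior (efflux proportional to volume). The sketch below is only a rough illustration under those assumptions, with arbitrary rate constants; it is not the published Nature Physics model, which also captures the shape instability that makes droplets divide.

    import math

    # Toy "active droplet" growth model (an illustration only, not the published model):
    # material enters through the droplet surface (influx ~ surface area) and is lost
    # throughout its interior (efflux ~ volume), so growth stalls at a characteristic
    # radius R* = 3 * k_in / k_out.

    k_in = 1.0    # arbitrary influx rate per unit surface area
    k_out = 0.1   # arbitrary efflux (turnover) rate per unit volume
    dt = 0.01     # time step, arbitrary units
    radius = 0.5  # small initial droplet

    for _ in range(20_000):
        surface = 4 * math.pi * radius ** 2
        volume = (4 / 3) * math.pi * radius ** 3
        volume += (k_in * surface - k_out * volume) * dt   # net growth this step
        radius = (3 * volume / (4 * math.pi)) ** (1 / 3)

    print(f"Stationary radius ~ {radius:.1f} (theory predicts {3 * k_in / k_out:.1f})")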

The next step is to identify when these protocells developed the ability to transfer genetic information.

Jülicher and his colleagues believe that somewhere along the way, the cells developed membranes, perhaps from the crusts they naturally develop out of lipids that prefer to sit at the interface between the droplet and the surrounding liquid.

Credit: Lucy Reading-Ikkanda/Quanta Magazine
As a kind of protection for what’s within the cells, genes could’ve begun coding for these membranes. But knowing anything for sure still depends on more experiments.

So, if the very complex life on Earth could have begun from something as seemingly inconspicuous as liquid droplets, perhaps the same could be said of possible extraterrestrial life?

In any case, this research could help us understand how life as we know it started from the simplest materials, and how the chemical processes that made our lives possible emerged from them.

The amount of energy and time it took for protocells to develop into living cells, and for living cells to evolve into ever more complex organisms, is baffling.

The process itself took billions of years to happen, so it’s not surprising we need some significant time to fully understand it.

Other articles on the same theme:



Story source: 
 
The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

Monday, January 30, 2017

Love and marriage in medieval England

A medieval couple being married by a clergyman. Central miniature, folio 102v. Book IV by Henricus von Assia (13th century). Chapter Archive of Tarazona, Spain. (Photo by PHAS/UIG via Getty Images)




Getting married in the medieval period was incredibly simple for Christians living in western Europe – all they had to do was say their “I do’s” to each other. But, as Sally Dixon-Smith reveals, proving that you were actually married and had not tripped up on the many potential ‘impediments’ to marriage might be another thing altogether

Medieval marriage practice continues to influence ceremonies today – from banns (the reading, three times, of your intention to marry) to declaring vows in the present tense. Indeed, the word 'wedding' itself even dates from the period. However, some things were very different…


In the Middle Ages, getting married was easy for Christians living in western Europe. According to the church, which created and enforced marriage law, couples needed neither the permission of their families nor a priest to officiate. However, while tying the knot could take a matter of moments, proving that you were wed often proved difficult.

Although the church controlled – or tried to control – marriage, couples did not need to marry in a church. Legal records show people getting married on the road, down the pub, round at friends’ houses or even in bed. All that was required for a valid, binding marriage was the consent of the two people involved. In England some people did marry near churches to give greater spiritual weight to proceedings, often at the church door (leading to some rather fabulous church porches being added to earlier buildings), but this still did not necessarily involve a priest.  

Marriage was the only acceptable place for sex and as a result Christians were allowed to marry from puberty onwards, generally seen at the time as age 12 for women and 14 for men. Parental consent was not required. When this law finally changed in England in the 18th century, the old rules still applied in Scotland, making towns just over the border, such as Gretna Green, a destination for English couples defying their families. 


Although the medieval church upheld freely given consent as the foundation of marriage, in practice families and social networks usually had a great deal of influence over the choice and approval of marriage partners. It was also normal at all levels of society to make some ‘pre-nup’ arrangements to provide for widow- and widowerhood and for any children. It was also expected that everyone would seek the permission of their lord, and kings consulted over their own and their children’s marriages. Marriage between people of different classes was particularly frowned on. 


The wedding of saints Joachim and Anne, considered to be the parents of Mary, the mother of God. Codex of Predis (1476). (Photo by Prisma/UIG/Getty Images)
There were various ways in which a medieval couple could use words or actions to create a marriage. Consent to marry could be given verbally by ‘words of present consent’ – no specific phrase or formula was required. A ‘present consent’ marriage did not have to be consummated in order to count. However, if the couple had agreed to get married at some point in the future and then had sex, this was seen as a physical expression of present consent. 

So, for engaged couples, having sex created a legally binding marriage. Consent could also be shown by giving and receiving an item referred to in English as a 'wed'. A 'wed' could be any gift understood by those involved to mean consent to marry, but was often a ring. A 'wedding' where a man gave a woman a ring and she accepted it created the marriage.

It is clear that there were misunderstandings. It could be difficult to know if a couple was married and they might even not agree themselves. The statutes issued by the English church in 1217–19 include a warning that no man should “place a ring of reeds or another material, vile or precious, on a young woman's hands in jest, so that he might more easily fornicate with them, lest, while he thinks himself to be joking, he pledge himself to the burdens of matrimony”. The vast majority of marriage cases that came up before the courts were to enforce or prove that a marriage had taken place.

Marriage mix-ups bothered the clergy since, after much debate, theologians had decided in the 12th century that marriage was a holy sacrament. The union of a man and a woman in marriage and sex represented the union of Christ and the church, and this was hardly symbolism to be taken lightly. 

As God was the ultimate witness, it was not necessary to have a marriage witnessed by other people – though it was highly recommended to avoid any uncertainty. There was also a church service available, but it was not mandatory and the evidence suggests that only a minority married in church. Many of those couples were already legally married by word or deed before they took their vows in front of a priest.  


Divorce as we understand it today did not exist. The only way to end a marriage was to prove it had not legally existed in the first place. Christians could only be married to one person at a time and it was also bigamy if someone bound to the church by a religious vow got married. As well as being single and vow-free, you also had to be marrying a fellow Christian. Breaking these rules automatically invalidated the marriage.


The marriage feast at Cana, early 14th century. Below, in an initial letter 'S', the throwing overboard and casting up of Jonah. From the Queen Mary Psalter, produced in England. Illustration from School of Illumination, reproductions from manuscripts in the British Museum, Part III, English 1300 to 1350, (British Museum, Longmans, Green and Co, London, 1921). (Photo by The Print Collector/Print Collector/Getty Images)
There were also a number of other ‘impediments’ that should prevent a marriage going ahead, but might be waived in certain circumstances if the marriage had already taken place. Couples who were already related were not to marry. The definition of ‘family’ was very broad. Before 1215, anyone with a great-great-great-great-great-grandparent in common was too closely related to get married. As this rule was hard to enforce and subject to abuse – the sudden discovery of a long-lost relative might conveniently end a marriage – the definitions of incest were changed by the Fourth Lateran Council in 1215, reduced to having a great-great-grandparent in common. 

As well as blood kinship, other ties could also prohibit marriage. For instance, godparents and godchildren were not allowed to marry as they were spiritually related, and close ‘in-laws’ were also a ‘no-no’.

Reading the ‘banns’ was introduced as part of the 1215 changes to try to flush out any impediments before a marriage took place. Nevertheless, until the Reformation there was no ‘speak now or forever hold your peace’. 


It is difficult to know how many medieval people married for love or found love in their marriage. There was certainly a distinction between free consent to marry and having a completely free choice. What is clear is that the vast majority of medieval people did marry and usually remarried after they were widowed, suggesting that marriage was desirable, if only as the social norm.

Other articles on the same theme:





Story source: 
The above post is reprinted from materials provided by HistoryExtra. Note: Materials may be edited for content and length.

Researchers have found another planet (Wolf 1061c) that could sustain life, located just 14 light-years away

The Wolf 1061 system. Credit: UNSW Sydney
An exoplanet with the prime conditions for life could be located just 14 light-years away, scientists report, in one of the closest neighbouring solar systems to our own.

New research suggests that a planet circling the star Wolf 1061 falls within what's called the star's habitable zone - making it one of the most likely neighbouring candidates for a planet that supports life.


This artist's concept illustrates a young, red dwarf star surrounded by three planets. Credit: wikipedia

"The Wolf 1061 system is important, because it is so close, and that gives other opportunities to do follow-up studies to see if it does indeed have life," says lead researcher Stephen Kane from San Francisco State University.

There are three planets orbiting Wolf 1061, but the planet Wolf 1061c is of particular interest.

Discovered in 2015, and with an estimated mass that's more than four times Earth's mass, Wolf 1061c is located right in the middle of Wolf 1061's habitable zone: the region where a planet's distance from its host star makes conditions suitable for liquid water and other life-supporting elements.

Our own Solar System runs by the same rules: conditions on Earth are just right for liquid water, whereas Mars is too cold.

To investigate whether Wolf 1061c might offer the same kind of habitability, the researchers analysed seven years of luminosity data from its host star and ran calculations of the exoplanet's orbit to figure out what the temperature and pressure on the surface could be.
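The kind of estimate described here – converting a star's brightness and a planet's orbital distance into a rough surface temperature – can be illustrated with the standard equilibrium-temperature formula. The sketch below is a generic textbook calculation, not the method or numbers from Kane's study; the luminosity, orbital distance, and albedo are placeholder values chosen only to show how such an estimate works.

    import math

    # Generic planetary equilibrium-temperature estimate (illustrative sketch only;
    # the stellar and orbital values below are placeholders, not the study's numbers).

    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    L_SUN = 3.828e26          # solar luminosity, W
    AU = 1.496e11             # astronomical unit, m

    L_star = 0.01 * L_SUN     # placeholder: a dim red dwarf, ~1% of the Sun's output
    d = 0.09 * AU             # placeholder orbital distance for a close-in planet
    albedo = 0.3              # placeholder Bond albedo (Earth-like)

    T_eq = ((1 - albedo) * L_star / (16 * math.pi * SIGMA * d ** 2)) ** 0.25
    print(f"Equilibrium temperature: ~{T_eq:.0f} K")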


The findings add weight to previous speculation that Wolf 1061c could be habitable – but just because the exoplanet is within a habitable zone, that doesn't necessarily mean it's one like Earth's.

The new data suggest that Wolf 1061c could have an atmosphere similar to what Venus had in its earliest days, meaning that any liquid water on the planet might not stick around for long. 

Previous research has suggested that high temperatures caused excessive water evaporation on Venus, and the newly formed water vapour in the atmosphere increased temperatures even further - a process known as a runaway greenhouse effect.

Now, the team thinks the same thing could be happening on Wolf 1061c, which is "close enough to the star that it's looking suspiciously like a runaway greenhouse", says Kane.

In addition, Wolf 1061c's orbit around its star changes much more quickly than Earth's orbit around the Sun does, which could lead to chaotic climate swings such as a rapidly encroaching ice age (or warm phase).

So, is there life on Wolf 1061c?

We don't yet know, and to find out, we'll need more detailed measurements than we have so far. To that end, Kane says NASA's James Webb Space Telescope is one of the ways we'll be able to learn more about the exoplanet in the future.

Wolf 1061c Credit: Centauri Dreams

The telescope is launching next year, and its advanced optics should be able to reveal the atmospheric conditions on Wolf 1061c, and give us a better idea about whether water (and life) could really exist there.

Meanwhile, scientists from METI - the Messaging Extraterrestrial Intelligence organisation - are also interested in Wolf 1061c, and have been keeping a close eye on the exoplanet as they try to reach out to any alien life that might exist beyond our Solar System.

"I'm not holding my breath that we'll ever find evidence of life on Wolf 1061c," METI president Doug Vakoch told Rae Paoletta at Gizmodo.

"But the fact that there's a roughly Earth-like planet in the habitable zone of a star so close to our own Solar System is a good omen as we continue our search for life on other planets."

Other articles on the same theme:




Story source: 
The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

Sunday, January 29, 2017

Earth is Flat, Vaccines are bad and Global Warming is a myth. What makes people reject scientific research?

Credit: JooJoo41/Pixabay
A lot happened in 2016, but one of the biggest cultural shifts was the rise of fake news - where claims with no evidence behind them (e.g. the world is flat) get shared as fact alongside evidence-based, peer-reviewed findings (e.g. climate change is happening).

Researchers have coined this trend the 'anti-enlightenment movement', and there's been a lot of frustration and finger-pointing over who or what's to blame. But a team of psychologists has identified some of the key factors that can cause people to reject science - and it has nothing to do with how educated or intelligent they are.

In fact, the researchers found that people who reject scientific consensus on topics such as climate change, vaccine safety, and evolution are generally just as interested in science and as well-educated as the rest of us.

City climate change Credit: NASA Climate Change

The issue is that when it comes to facts, people think more like lawyers than scientists, which means they 'cherry pick' the facts and studies that back up what they already believe to be true.

So if someone doesn't think humans are causing climate change, they will ignore the hundreds of studies that support that conclusion, but latch onto the one study they can find that casts doubt on this view. This is also known as cognitive bias. 

"We find that people will take a flight from facts to protect all kinds of belief including their religious belief, their political beliefs, and even simple personal beliefs such as whether they are good at choosing a web browser," said one of the researchers, Troy Campbell from the University of Oregon.

"People treat facts as relevant more when the facts tend to support their opinions. When the facts are against their opinions, they don't necessarily deny the facts, but they say the facts are less relevant."

This conclusion was based on a series of new interviews, as well as a meta-analysis of the research that's been published on the topic, and was presented over the weekend in a symposium at the Society for Personality and Social Psychology's annual convention in San Antonio.

The goal was to figure out what's going wrong with science communication in 2017, and what we can do to fix it. 

The research has yet to be published, so isn't conclusive, but the results suggest that simply focussing on the evidence and data isn't enough to change someone's mind about a particular topic, seeing as they'll most likely have their own 'facts' to fire back at you. 

"Where there is conflict over societal risks - from climate change to nuclear-power safety to impacts of gun control laws, both sides invoke the mantel of science," said one of the team, Dan Kahan from Yale University.

Instead, the researchers recommend looking into the 'roots' of people's unwillingness to accept scientific consensus, and try to find common ground to introduce new ideas.

So where is this denial of science coming from? A big part of the problem, the researchers found, is that people associate scientific conclusions with political or social affiliations.

New research conducted by Kahan showed that people have actually always cherry picked facts when it comes to science - that's nothing new. But it hasn't been such a big problem in the past, because scientific conclusions were usually agreed on by political and cultural leaders, and promoted as being in the public's best interests. 

Now, scientific facts are being wielded like weapons in a struggle for cultural supremacy, Kahan told Melissa Healy over at the LA Times, and the result is a "polluted science communication environment". 

So how can we do better? 

"Rather than taking on people's surface attitudes directly, tailor the message so that it aligns with their motivation," said Hornsey. "So with climate skeptics, for example, you find out what they can agree on and then frame climate messages to align with these."

The researchers are still gathering data for a peer-reviewed publication on their findings, but they presented their work to the scientific community for further dissemination and discussion in the meantime.

Hornsey told the LA Times that the stakes are too high to continue to ignore the 'anti-enlightenment movement'.

"Anti-vaccination movements cost lives," said Hornsey. "Climate change skepticism slows the global response to the greatest social, economic and ecological threat of our time."

"We grew up in an era when it was just presumed that reason and evidence were the ways to understand important issues; not fear, vested interests, tradition or faith," he added.

"But the rise of climate skepticism and the anti-vaccination movement made us realise that these enlightenment values are under attack."

Other articles on the same theme:






Story source: 
 
The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

10,000 years ago, the Sahara Desert was one of the wettest areas on Earth

Rainier conditions than previously thought turned the Sahara Desert into grasslands, lakes and rivers from 11,000 to 5,000 years ago, a new study finds. A brief return to aridity around 8,000 years ago set the stage for cattle herders to spread across North Africa, researchers suspect.

Updated 09/05/2020

Study shows the Sahara swung between lush and desert conditions every 20,000 years, in sync with monsoon activity


The Sahara desert is one of the harshest, most inhospitable places on the planet, covering much of North Africa in some 3.6 million square miles of rock and windswept dunes. But it wasn't always so desolate and parched. Primitive rock paintings and fossils excavated from the region suggest that the Sahara was once a relatively verdant oasis, where human settlements and a diversity of plants and animals thrived, notes Phys.org.

Thousands of years ago, it didn’t just rain on the Sahara Desert. It poured.

Camp in the Sahara Desert at Merzouga, Morocco, in North Africa. Credit: 123RF.com

Grasslands, trees, lakes and rivers once covered North Africa's now arid, unforgiving landscape. From about 11,000 to 5,000 years ago, much higher rainfall rates than previously estimated created that "Green Sahara," say geologist Jessica Tierney of the University of Arizona in Tucson and her colleagues. Extensive ground cover, combined with reductions of airborne dust, intensified water evaporation into the atmosphere, leading to monsoonlike conditions, the scientists report January 18 in Science Advances.




Tierney’s team reconstructed western Saharan rainfall patterns over the last 25,000 years. Estimates relied on measurements of forms of carbon and hydrogen in leaf wax recovered from ocean sediment cores collected off the Sahara’s west coast. Concentrations of these substances reflected ancient rainfall rates.


Credit: Boing Boing

Rainfall ranged from 250 to 1,670 millimeters annually during Green Sahara times, the researchers say. Previous estimates — based on studies of ancient pollen that did not account for dust declines — reached no higher than about 900 millimeters. Saharan rainfall rates currently range from 35 to 100 millimeters annually.

Leaf-wax evidence indicates that the Green Sahara dried out from about 8,000 to at least 7,000 years ago before rebounding. That’s consistent with other ancient climate simulations and with excavations suggesting that humans temporarily left the area around 8,000 years ago. Hunter-gatherers departed for friendlier locales, leaving cattle herders to spread across North Africa once the Green Sahara returned (SN Online: 6/20/12), the investigators propose. 

Other articles on the same theme:








Story source:

The above post is reprinted from materials provided by Science News. Note: Materials may be edited for content and length.

Artificial Intelligence Used to ID Skin Cancer. Deep learning algorithm does as well as dermatologists in identifying skin cancer

A dermatologist using a dermatoscope, a type of handheld microscope, to look at skin. Computer scientists at Stanford have created an artificially intelligent diagnosis algorithm for skin cancer that matched the performance of board-certified dermatologists. Credit: Matt Young
It's scary enough making a doctor's appointment to see if a strange mole could be cancerous. Imagine, then, that you were in that situation while also living far away from the nearest doctor, unable to take time off work and unsure you had the money to cover the cost of the visit. In a scenario like this, an option to receive a diagnosis through your smartphone could be lifesaving.

Universal access to health care was on the minds of computer scientists at Stanford when they set out to create an artificially intelligent diagnosis algorithm for skin cancer. They made a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test, it performed with inspiring accuracy.

"We realized it was feasible, not just to do something well, but as well as a human dermatologist," said Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory. "That's when our thinking changed. That's when we said, 'Look, this is not just a class project for students, this is an opportunity to do something great for humanity.'"

The final product, the subject of a paper in the Jan. 25 issue of Nature, was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.

Why skin cancer

Every year there are about 5.4 million new cases of skin cancer in the United States, and while the five-year survival rate for melanoma detected in its earliest stages is around 97 percent, that drops to approximately 14 percent if it's detected in its latest stages. Early detection is therefore likely to have an enormous impact on skin cancer outcomes.

Diagnosing skin cancer begins with a visual examination. A dermatologist usually looks at the suspicious lesion with the naked eye and with the aid of a dermatoscope, which is a handheld microscope that provides low-level magnification of the skin. If these methods are inconclusive or lead the dermatologist to believe the lesion is cancerous, a biopsy is the next step.

Bringing this algorithm into the examination process follows a trend in computing that combines visual processing with deep learning, a type of artificial intelligence modeled after neural networks in the brain. Deep learning has a decades-long history in computer science but it only recently has been applied to visual processing tasks, with great success. The essence of machine learning, including deep learning, is that a computer is trained to figure out a problem rather than having the answers programmed into it.

"We made a very powerful machine learning algorithm that learns from data," said Andre Esteva, co-lead author of the paper and a graduate student in the Thrun lab. "Instead of writing into computer code exactly what to look for, you let the algorithm figure it out."

The algorithm was fed each image as raw pixels with an associated disease label. Compared to other methods for training algorithms, this one requires very little processing or sorting of the images prior to classification, allowing the algorithm to work off a wider variety of data.

From cats and dogs to melanomas and carcinomas

Rather than building an algorithm from scratch, the researchers began with an algorithm developed by Google that was already trained to identify 1.28 million images from 1,000 object categories. While it was primed to be able to differentiate cats from dogs, the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis.
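In broad strokes, that transfer-learning step – taking a network pretrained on everyday photos and retraining it to tell skin lesions apart – might look something like the sketch below. This is a generic illustration assuming a TensorFlow/Keras-style workflow and a hypothetical `lesion_dataset` of labelled images; it is not the Stanford team's actual code or training setup.

    import tensorflow as tf

    # Rough transfer-learning sketch (not the Stanford group's code): reuse an
    # ImageNet-pretrained Inception network and retrain a new top layer to
    # classify skin-lesion photos. `lesion_dataset` is a hypothetical
    # tf.data.Dataset of (image, label) pairs built from the labelled images.

    NUM_CLASSES = 2000   # the article mentions over 2,000 different diseases

    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(299, 299, 3))
    base.trainable = False   # keep the pretrained visual features fixed at first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit(lesion_dataset, epochs=10)   # hypothetical labelled-image dataset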

"There's no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own," said Brett Kuprel, co-lead author of the paper and a graduate student in the Thrun lab. "We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy -- the labels alone were in several languages, including German, Arabic and Latin."

After going through the necessary translations, the researchers collaborated with dermatologists at Stanford Medicine, as well as Helen M. Blau, professor of microbiology and immunology at Stanford and co-author of the paper. Together, this interdisciplinary team worked to classify the hodgepodge of internet images. Many of these, unlike those taken by medical professionals, were varied in terms of angle, zoom and lighting. In the end, they amassed about 130,000 images of skin lesions representing over 2,000 different diseases.

During testing, the researchers used only high-quality, biopsy-confirmed images provided by the University of Edinburgh and the International Skin Imaging Collaboration Project that represented the most common and deadliest skin cancers -- malignant carcinomas and malignant melanomas. The 21 dermatologists were asked whether, based on each image, they would proceed with biopsy or treatment, or reassure the patient. The researchers evaluated success by how well the dermatologists were able to correctly diagnose both cancerous and non-cancerous lesions in over 370 images.

The algorithm's performance was measured through the creation of a sensitivity-specificity curve, where sensitivity represented its ability to correctly identify malignant lesions and specificity represented its ability to correctly identify benign lesions. It was assessed through three key diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. In all three tasks, the algorithm matched the performance of the dermatologists with the area under the sensitivity-specificity curve amounting to at least 91 percent of the total area of the graph.
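A sensitivity-specificity (ROC) curve of the kind described above can be computed directly from a classifier's predicted probabilities. Below is a minimal sketch using scikit-learn with made-up toy labels and scores (not data from the paper), just to show how the curve and its area are obtained.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Toy sensitivity-specificity analysis; the labels and scores are made up,
    # not results from the paper.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])     # 1 = malignant, 0 = benign
    y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.65, 0.6, 0.55, 0.05])

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    sensitivity = tpr          # fraction of malignant lesions correctly flagged
    specificity = 1 - fpr      # fraction of benign lesions correctly cleared

    print("Area under the curve:", roc_auc_score(y_true, y_score))
    # Moving the decision threshold along this curve is what lets the algorithm
    # be tuned to be more or less sensitive, as described below.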

An added advantage of the algorithm is that, unlike a person, the algorithm can be made more or less sensitive, allowing the researchers to tune its response depending on what they want it to assess. This ability to alter the sensitivity hints at the depth and complexity of this algorithm. The underlying architecture of seemingly irrelevant photos -- including cats and dogs -- helps it better evaluate the skin lesion images.

Health care by smartphone

Although this algorithm currently exists on a computer, the team would like to make it smartphone compatible in the near future, bringing reliable skin cancer diagnoses to our fingertips.

"My main eureka moment was when I realized just how ubiquitous smartphones will be," said Esteva. "Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?"

The team believes it will be relatively easy to transition the algorithm to mobile devices but there still needs to be further testing in a real-world clinical setting.

"Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients," said Susan Swetter, professor of dermatology and director of the Pigmented Lesion and Melanoma Program at the Stanford Cancer Institute, and co-author of the paper. "However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike."

Even in light of the challenges ahead, the researchers are hopeful that deep learning could someday contribute to visual diagnosis in many medical fields.

Other articles on the same theme:




Story source: 
The above post is reprinted from materials provided by ScienceDaily. Note: Materials may be edited for content and length.

A tremendous wealth of Viking silver was discovered on a small island in the Baltic Sea. The entire Swedish mainland was much poorer

Riches found on the island of Gotland. Credit: Gabriel Hildebrand / The Royal Coin Cabinet

Stratification did increase on the island as time passed, though. Archaeologists have found that, throughout the ninth and tenth centuries, silver hoards were distributed throughout Gotland, suggesting that wealth was more or less uniformly shared among the island’s farmers. But around 1050, this pattern shifted. “In the late eleventh century, you start to have fewer hoards overall, but, instead, there are some really massive hoards, usually found along the coast, containing many, many thousands of coins,” says Jonsson. This suggests that trading was increasingly controlled by a small number of coastal merchants.

This stratification accelerated near the end of the Viking Age, around 1140, when Gotland began to mint its own coins, becoming the first authority in the eastern Baltic region to do so. “Gotlandic coins were used on mainland Sweden and in the Baltic countries,” says Majvor Östergren, an archaeologist who has studied the island’s silver hoards. Whereas Gotlanders had valued foreign coins based on their weight alone, these coins, though hastily hammered out into an irregular shape, had a generally accepted value. More than eight million of these early Gotlandic coins are estimated to have been minted between 1140 and 1220, and more than 22,000 have been found, including 11,000 on Gotland alone.


An example of one of the earliest silver coins minted on Gotland (obverse, left; reverse, right) dates from around 1140. (Nanouschka Myrberg Burström)
Gotland is thought to have begun its coinage operation to take advantage of new trading opportunities made possible by strife among feuding groups on mainland Sweden and in western Russia. This allowed Gotland to make direct trading agreements with the Novgorod area of Russia and with powers to the island’s southwest, including Denmark, Frisia, and northern Germany. Gotland’s new coins helped facilitate trade between its Eastern and Western trading partners, and brought added profits to the island’s elite through tolls, fees, and taxes levied on visiting traders. In order to maintain control over trade on the island, it was limited to a single harbor, Visby, which remains the island’s largest town. As a result, the rest of Gotland’s trading harbors, including Fröjel, declined in importance around 1150.

Gotland remained a wealthy island in the medieval period that followed the Viking Age, but, says Carlsson, "Gotlanders stopped putting their silver in the ground. Instead, they built more than 90 stone churches during the twelfth and thirteenth centuries." Although many archaeologists believe that the Gotland Vikings stashed their wealth in hoards for safekeeping, Carlsson thinks that, just as did the churches that were built later, they served a devotional purpose. In many cases, he argues, hoards do not appear to have been buried in houses but rather atop graves, roads, or borderlands. Indeed, some were barely buried at all because, he argues, others in the community knew not to touch them. "These hoards were not meant to be taken up," he says, "because they were meant as a sort of sacrifice to the gods, to ensure a good harvest, good fortune, or a safer life."

In light of the scale, sophistication, and success of the Gotland Vikings' activities, these ritual depositions may have seemed to them a small price to pay.

Other articles on the same theme:




Story source: 
The above post is reprinted from materials provided by Archaeology. Note: Materials may be edited for content and length.

Saturday, January 28, 2017

This fascinating periodic table shows the origin of each atom in the human body. "We are made of stardust"

Credit: Jennifer Johnson
Here’s something to think about: the average adult human is made up of 7,000,000,000,000,000,000,000,000,000 (7 octillion) atoms, and most of them are hydrogen - the most common element in the Universe, produced by the Big Bang 13.8 billion years ago.

The rest of those atoms were forged by ancient stars merging and exploding billions of years after the formation of the Universe, and a tiny amount can be attributed to cosmic rays - high-energy radiation that mostly originates from somewhere outside the Solar System.

As astronomer Carl Sagan once said in an episode of Cosmos, "The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of stardust."

To give you a better idea of where the ingredients for every living human came from, Jennifer A. Johnson, an astronomer at the Ohio State University, put together this new periodic table that breaks down all the elements according to their origin:

Jennifer Johnson
To keep things relevant for the human body, Johnson explains that she cut a number of elements from the bottom section.

"Tc, Pm, and the elements beyond U do not have long-lived or stable isotopes. I have ignored the elements beyond U in this plot, but not including Tc and Pm looked weird, so I have included them in grey," Johnson explains on her blog with the Sloan Digital Sky Survey.

The new periodic table builds on work Johnson and her colleague, astronomer Inese Ivans from the University of Utah, did back in 2008 - a project born out of equal measures of frustration and procrastination.

"This is what happens when you give two astronomers, who are tired of reminding everyone about which elements go with which process on a periodic table, a set of markers, and time when they should have been listening to talks," Johnson admits.

The periodic table works by identifying the six sources of the elements in our bodies – the processes in the Universe that can give rise to new atoms: Big Bang fusion; cosmic ray fission; merging neutron stars; exploding massive stars; dying low-mass stars; and exploding white dwarfs.

The way the corresponding colours fill up the boxes of elements shows roughly how much of that element is the result of the various cosmic events.

So you can see that elements like oxygen (O), magnesium (Mg), and sodium (Na), resulted from gigantic explosions of massive stars called supernovae, which occur at the end of a star's life, when it either runs out of fuel, or accumulates too much matter.

The incredible amount of energy and neutrons this releases allows elements to be produced - a process known as nucleosynthesis - and distributed throughout the Universe.

Old favourites like carbon (C) and nitrogen (N), on the other hand, exist mostly thanks to low-mass stars ending their lives as white dwarfs. 

Strange elements boron (B) and beryllium (Be), and some isotopes of lithium (Li) are unique in their origins, because they're the result of high-energy particles called cosmic rays that zoom through our galaxy at close to the speed of light.

Most cosmic rays originate from outside the Solar System, and sometimes even the Milky Way, and when they collide with certain atoms, they give rise to new elements. 

Interestingly, lithium is part of the reason why Johnson decided to distribute this new periodic table in the first place. If it's giving you a serious case of deja vu, it's because there's a similar version on Wikipedia:


Jennifer Johnson
But, as Johnson explains, the Wikipedia version is unclear in some places, and just plain wrong in others.

She says the "large stars" and "small stars" in the Wikipedia version don't make much sense, because nucleosynthesis has nothing to do with the radius of the stars, so we have to assume they mean "high-mass stars" and "low-mass stars", respectively. 

"High-mass stars end their lives (at least some of the time) as core-collapse supernovae. Low-mass stars usually end their lives as white dwarfs," says Johnson.

"But sometimes, white dwarfs that are in binary systems with another star get enough mass from the companion to become unstable and explode as so-called Type-Ia supernovae. Which 'supernova' is being referred to in the Wikipedia graphic is not clear."


Head over to Johnson's blog to access a higher resolution version of the periodic table, and if you need a colour blind-friendly version, she's got you covered:


Jennifer Johnson


Other articles on the same theme: