
Saturday, February 18, 2017

The discovery that overturns our knowledge of geography. A new continent called 'Zealandia' has been found

Topography of Zealandia. The linear ridges running north-northeast and southwest away from New Zealand are not considered part of the continental fragment, nor are Australia (upper left), Fiji or Vanuatu (top centre). Credit: wikipedia

Kids are frequently taught that seven continents exist: Africa, Asia, Antarctica, Australia, Europe, North America, and South America.

Geologists, who look at the rocks (and tend to ignore the humans), group Europe and Asia together into a single continent - Eurasia - making for a total of six geologic continents.

But according to a new study of Earth's crust, there's a seventh geologic continent called 'Zealandia', and it has been hiding under our figurative noses for millennia.



Make-up of the 4.9 million km² continent of Zealandia in the SW Pacific

The 11 researchers behind the study argue that New Zealand and New Caledonia aren't merely an island chain.

Instead, they're both part of a single, 4.9 million square kilometre (1.89 million square mile) slab of continental crust that's distinct from Australia.




"This is not a sudden discovery but a gradual realisation; as recently as 10 years ago we would not have had the accumulated data or confidence in interpretation to write this paper," they wrote in GSA Today, a Geological Society of America journal.

Ten of the researchers work for organisations or companies within the new continent; one works for a university in Australia.




But other geologists are almost certain to accept the research team's continent-size conclusions, says Bruce Luyendyk, a geophysicist at the University of California, Santa Barbara (he wasn't involved in the study).


"These people here are A-list earth scientists," Luyendyk tells Business Insider.


"I think they have put together a solid collection of evidence that's really thorough. I don't see that there's going to be a lot of pushback, except maybe around the edges."


Why Zealandia is almost certainly a new continent
N. Mortimer et al./GSA Today

The concept of Zealandia isn't new. In fact, Luyendyk coined the word in 1995.

But Luyendyk says it was never intended to describe a new continent. Rather, the name was used to describe New Zealand, New Caledonia, and a collection of submerged pieces and slices of crust that broke off a region of Gondwana, a 200 million-year-old supercontinent.

"The reason I came up with this term is out of convenience," Luyendyk says.

"They're pieces of the same thing when you look at Gondwana. So I thought, 'why do you keep naming this collection of pieces as different things?'"

Researchers behind the new study took Luyendyk's idea a huge step further, re-examining known evidence under four criteria that geologists use to deem a slab of rock a continent:

  • Land that pokes up relatively high from the ocean floor
  • A diversity of three types of rocks: igneous (spewed by volcanoes), metamorphic (altered by heat/pressure), and sedimentary (made by erosion)
  • A thicker, less-dense section of crust compared to the surrounding ocean floor
  • "Well-defined limits around a large enough area to be considered a continent rather than a microcontinent or continental fragment"

Over the past few decades, geologists had already determined that New Zealand and New Caledonia fit the bill for items 1, 2, and 3.

After all, they're large islands that poke up from the sea floor, are geologically diverse, and are made of thicker, less-dense crust.

This eventually led to Luyendyk's coining of Zealandia, and the description of the region as 'continental', since it was considered a collection of microcontinents, or bits and pieces of former continents.

The authors say the last item on the list - a question of "is it big enough and unified enough to be its own thing?" - is one that other researchers skipped over in the past, though by no fault of their own.




At a glance, Zealandia seemed broken-up. But the new study used recent and detailed satellite-based elevation and gravity maps of the ancient seafloor to show that Zealandia is indeed part of one unified region.

The data also suggests Zealandia spans "approximately the area of greater India" - making it larger than Madagascar, New Guinea, Greenland, or other microcontinents and provinces.

"If the elevation of Earth's solid surface had first been mapped in the same way as those of Mars and Venus (which lack […] opaque liquid oceans)," they wrote.

"We contend that Zealandia would, much earlier, have been investigated and identified as one of Earth's continents."


The geologic devils in the details

The study's authors point out that while India is big enough to be a continent, and probably used to be, it's now part of Eurasia because it collided and stuck to that continent millions of years ago.

Zealandia, meanwhile, has not yet smashed into Australia; a piece of seafloor called the Cato Trough still separates the two continents by 25 kilometres (15.5 miles).

N. Mortimer et al./GSA Today
One thing that makes the case for Zealandia tricky is its division into northern and southern segments by two tectonic plates: the Australian Plate and the Pacific Plate.

This split makes the region seem more like a bunch of continental fragments than a unified slab.

But the researchers point out that Arabia, India, and parts of Central America have similar divisions, yet are still considered parts of larger continents.

"I'm from California, and it has a plate boundary going through it," Luyendyk says.

"In millions of years, the western part will be up near Alaska. Does that make it not part of North America? No."

What's more, the researchers wrote, rock samples suggest Zealandia is made of the same continental crust that used to be part of Gondwana, and that it migrated in ways similar to the continents Antarctica and Australia.

The samples and satellite data also show Zealandia is not broken up as a collection of microcontinents, but a unified slab.

Instead, plate tectonics has thinned, stretched, and submerged Zealandia over millions of years. Today, only about 5 percent of it is visible as the islands of New Zealand and New Caledonia - which is part of the reason it took so long to discover.

"The scientific value of classifying Zealandia as a continent is much more than just an extra name on a list," the scientists wrote.

"That a continent can be so submerged yet unfragmented makes it a useful and thought-provoking geodynamic end member in exploring the cohesion and breakup of continental crust."

Luyendyk doesn't expect the distinction to end up as a mere scientific curiosity, however, and speculated that it may eventually have larger consequences.

"The economic implications are clear and come into play: What's part of New Zealand and what's not part of New Zealand?" he says.

Indeed, United Nations agreements make specific mentions of continental shelves as boundaries that determine where resources can be extracted - and New Zealand may have tens of billions of dollars' worth of fossil fuels and minerals lurking off its shores.




Story source: 
 
The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

Saturday, February 4, 2017

Revolutionary new mind-reading technology allows immobilized patients to communicate again

Credit: Wyss Centre
The technology to control a computer using only your thoughts has existed for decades. Yet we’ve made limited progress in using it for its original purpose: helping people with severe disabilities to communicate. Until now, that is.

The final stages of the degenerative condition known as amyotrophic lateral sclerosis (ALS), or motor neuron disease, leave sufferers in a completely locked-in state. In the end they cannot move any part of their bodies, not even their eyes, although their brains remain unaffected.

But scientists have struggled to use brain-computer interface technology that measures electrical activity in the brain to help them communicate.

One reason for this is that it is still unclear how much these conventional brain-computer interface systems rely on electrical signals that are generated by the movement of eye muscles.

One ALS sufferer who had been using a brain-computer interface when she could still move her eyes lost her ability to communicate through the technology after becoming completely locked-in.

This suggested that most of the electrical activity recorded by the computer was related to involuntary eye movements that occurred when she thought about something rather than the thoughts themselves.


To overcome this problem, an international group of researchers used a different way of detecting neural activity that measures changes in the amount of oxygen in the brain rather than electrical signals.

A new study has shown that an alternative brain-computer interface technology can help people with 'locked-in syndrome' speak to the outside world. It has even allowed sufferers to report that they are happy, despite the condition.

The research, published in PLOS Biology, involved a technique known as functional near-infrared spectroscopy, which uses light to measure changes in blood oxygen levels.

Because the areas of the brain that are most active at any given time consume more oxygen, this means you can detect patterns of brain activity from oxygen fluctuations.

This technique is not as sensitive to muscular movements as the electroencephalography (EEG) systems used to measure electrical activity.

An EEG recording setup Credit: wikipedia
This means the new method could be used to help ALS sufferers communicate both before and after they lose their entire ability to move because it is more likely to only record brain activity related to thoughts.

The study involved four ALS sufferers, three of whom had not been able to reliably communicate with their carers since 2014 (the fourth since early 2015).

By using the new brain-computer interface technology, they were able to reliably communicate with their carers and families over a period of several months. This is the first time this has been possible for locked-in patients.

The volunteers were asked personal and general knowledge questions with known "yes" or "no" answers.

The brain-computer interface captured their responses correctly 70 percent of the time, which the researchers argued was enough to show they didn’t just record the right answer by chance. Similar experiments using EEG didn’t beat this chance-level threshold.
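To see why the researchers could treat 70 percent as better than guessing: with yes/no questions, pure chance would give roughly 50 percent correct, and the probability of reaching 70 percent by luck alone shrinks rapidly as the number of questions grows. The sketch below is purely illustrative - the question count is an assumed number, not a figure reported in the study.

```python
# Illustrative back-of-the-envelope check (not from the study):
# how likely is it to get at least 70% of yes/no questions right
# by pure guessing (50% chance per question)?
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more correct answers out of n at chance level p."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))

if __name__ == "__main__":
    n = 100                 # hypothetical number of yes/no questions asked
    k = int(0.7 * n)        # 70 percent of them answered correctly
    print(f"Chance of >= {k}/{n} correct by guessing: {p_at_least(k, n):.5f}")
```

With 100 hypothetical questions, the chance of hitting 70 or more correct answers by guessing alone is well below one in ten thousand, which is the intuition behind treating 70 percent as above the chance-level threshold.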

The patients were also able to communicate their feelings about their condition, and all four of them repeatedly answered "yes" when they were asked if they were happy over the course of several weeks.

One patient was even asked whether he would agree for his daughter to marry her boyfriend. Unfortunately for the couple, he said no. The volunteers have continued using the system at home after the end of the study.

As I know from my own research, working with completely locked-in patients requires a lot of hard work. In particular, you can’t know for sure if the user has understood how we want them to give an answer that we can try to detect.

If a system that has previously been used to record the brain activity of able-bodied users doesn’t work with locked-in patients, it is common to assume that the person, and not the machine, is at fault, which may not be the case.

What’s more, there is added pressure on researchers – from the patient’s family and from themselves – to fulfil the dream of finding a way to communicate with the volunteers.

These challenges highlight what a significant achievement the new study is. It is a groundbreaking piece of research that could provide a new path for developing better brain-computer interface technology.

Even though the system so far only allows locked-in patients to give yes or no answers, it already represents a big improvement in quality of life.

The first ever brain-computer interface system was designed to enable disabled (although not locked-in) users to spell words and so communicate any message they wanted, admittedly through a slow and lengthy process.

So it is safe to assume that the new technology is just the first step towards more sophisticated systems that would allow free two-way communication not based on simple questions.

Perhaps more importantly, the technology has already restored the communication capabilities of four people who had been mute for years. Imagine how these patients and their families must have felt when they were finally able to 'speak' again.


Despite the challenges in brain-computer interface research, results like this are what make us keep going.








Story source:


The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

Tuesday, January 31, 2017

Researchers are close to discovering the factor that determined the evolution of life on Earth

Credit: klss/Shutterstock
Modern science has advanced significantly over the last couple of decades. We’ve managed to answer several of the world’s most long-standing questions, but some answers have continued to elude today’s scientists, including how life first emerged from Earth’s primordial soup.

However, a collaboration of physicists and biologists in Germany may have just found an explanation to how living cells first evolved.

In 1924, Russian biochemist Alexander Oparin proposed the idea that the first living cells could have evolved from liquid droplet protocells.

He believed these protocells could have acted as naturally forming, membrane-free containers that concentrated chemicals and fostered reactions.

Aleksandr Oparin (right) and Andrei Kursanov in the enzymology laboratory, 1938 Credit: wikipedia

In their hunt for the origin of life, a team of scientists from the Max Planck Institute for the Physics of Complex Systems and the Institute of Molecular Cell Biology and Genetics, both in Dresden, drew on Oparin's hypothesis by studying the physics of 'chemically active' droplets (droplets that cycle molecules in and out of the fluid that surrounds them).

Unlike a 'passive' type of droplet - like oil in water, which will just continue to grow as more oil is added to the mix - the researchers realised that chemically active droplets grow to a set size and then divide of their own accord.

This behaviour mimics the division of living cells and could, therefore, be the link between the nonliving primordial liquid soup from which life sprung and the living cells that eventually evolved to create all life on Earth.

"It makes it more plausible that there could have been a spontaneous emergence of life from nonliving soup," said Frank Jülicher, co-author of the study that appeared in the journal Nature Physics in December.

It’s an explanation of "how cells made daughters," said lead researcher David Zwicker. "This is, of course, key if you want to think about evolution."


Add a droplet of life

Some have speculated that these proto-cellular droplets might still be inside our system "like flies in life’s evolving amber".

To explore that hypothesis, the team studied the physics of centrosomes, which are organelles active in animal cell division that seem to behave like droplets.

Zwicker modelled an 'out-of-equilibrium' centrosome system that was chemically active and cycling constituent proteins continuously in and out of the surrounding liquid cytoplasm.

The proteins behave as either soluble (state A) or insoluble (state B).  An energy source can trigger a state reversal, causing the protein in state A to transform into state B by overcoming a chemical barrier. 

As long as there was an energy source, this chemical reaction could happen.

"In the context of early Earth, sunlight would be the driving force," Jülicher said.

Oparin famously believed that lightning strikes or geothermal activity on early Earth could've triggered these chemical reactions in the liquid protocells.

This constant chemical influx and efflux would only counterbalance itself, according to Zwicker, when a certain volume was reached by the active droplet, which would then stop growing.

Typically, the droplets could grow to about tens or hundreds of microns, according to Zwicker’s simulations. That’s about the same scale as cells.
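As a rough way to picture why an active droplet stops growing at a characteristic size, consider a deliberately simplified toy model - not the actual model from the Nature Physics paper, and with invented numbers: if material enters the droplet at a constant rate and is lost at a rate proportional to its current volume, the volume settles at the value where influx and efflux exactly cancel.

```python
# Toy sketch only: a droplet supplied at a constant rate k_in and losing
# material at a rate k_out * V stops growing at V* = k_in / k_out.
# Units and parameter values are arbitrary and purely illustrative.

def simulate_droplet(v0: float = 1.0, k_in: float = 50.0, k_out: float = 0.5,
                     dt: float = 0.01, steps: int = 5000) -> float:
    """Euler-integrate dV/dt = k_in - k_out * V and return the final volume."""
    v = v0
    for _ in range(steps):
        v += (k_in - k_out * v) * dt
    return v

if __name__ == "__main__":
    final_v = simulate_droplet()
    print(f"Final volume: {final_v:.2f} (steady state k_in/k_out = {50.0 / 0.5:.2f})")
```

The droplets in the study do more than this - crucially, once they reach that set size they become unstable and split in two - but an influx/efflux balance of this kind is what fixes the characteristic size in the first place.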

The next step is to identify when these protocells developed the ability to transfer genetic information.

Jülicher and his colleagues believe that somewhere along the way, the cells developed membranes, perhaps from the crusts they naturally develop out of lipids that prefer to remain at the intersection of the droplet and outside liquid.

Credit: Lucy Reading-Ikkanda/Quanta Magazine
As a kind of protection for what’s within the cells, genes could’ve begun coding for these membranes. But knowing anything for sure still depends on more experiments.

So, if the very complex life on Earth could have begun from something as seemingly inconspicuous as liquid droplets, perhaps the same could be said of possible extraterrestrial life?

In any case, this research could help us understand how life as we know it started from the simplest material and how the chemical processes that made our lives possible emerged from these.

The energy and time it took for a protocell to develop into a living cell, and for living cells to develop into more complex parts and, finally, into even more complex organisms, is baffling.

The process itself took billions of years to happen, so it’s not surprising we need some significant time to fully understand it.




Story source: 
 
The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

Sunday, January 29, 2017

Earth is flat, vaccines are bad, and global warming is a myth. What makes people reject scientific research?

Credit: JooJoo41/Pixabay
A lot happened in 2016, but one of the biggest cultural shifts was the rise of fake news - where claims with no evidence behind them (e.g. the world is flat) get shared as fact alongside evidence-based, peer-reviewed findings (e.g. climate change is happening).

Researchers have coined this trend the 'anti-enlightenment movement', and there's been a lot of frustration and finger-pointing over who or what's to blame. But a team of psychologists has identified some of the key factors that can cause people to reject science - and it has nothing to do with how educated or intelligent they are.

In fact, the researchers found that people who reject scientific consensus on topics such as climate change, vaccine safety, and evolution are generally just as interested in science and as well-educated as the rest of us.

City climate change Credit: NASA Climate Change

The issue is that when it comes to facts, people think more like lawyers than scientists, which means they 'cherry pick' the facts and studies that back up what they already believe to be true.

So if someone doesn't think humans are causing climate change, they will ignore the hundreds of studies that support that conclusion, but latch onto the one study they can find that casts doubt on this view. This is also known as cognitive bias. 

"We find that people will take a flight from facts to protect all kinds of belief including their religious belief, their political beliefs, and even simple personal beliefs such as whether they are good at choosing a web browser," said one of the researchers, Troy Campbell from the University of Oregon.

"People treat facts as relevant more when the facts tend to support their opinions. When the facts are against their opinions, they don't necessarily deny the facts, but they say the facts are less relevant."

This conclusion was based on a series of new interviews, as well as a meta-analysis of the research that's been published on the topic, and was presented in a symposium over the weekend as part of the Society for Personality and Social Psychology annual convention in San Antonio.

The goal was to figure out what's going wrong with science communication in 2017, and what we can do to fix it. 

The research has yet to be published, so isn't conclusive, but the results suggest that simply focussing on the evidence and data isn't enough to change someone's mind about a particular topic, seeing as they'll most likely have their own 'facts' to fire back at you. 

"Where there is conflict over societal risks - from climate change to nuclear-power safety to impacts of gun control laws, both sides invoke the mantel of science," said one of the team, Dan Kahan from Yale University.

Instead, the researchers recommend looking into the 'roots' of people's unwillingness to accept scientific consensus, and try to find common ground to introduce new ideas.

So where is this denial of science coming from? A big part of the problem, the researchers found, is that people associate scientific conclusions with political or social affiliations.

New research conducted by Kahan showed that people have actually always cherry picked facts when it comes to science - that's nothing new. But it hasn't been such a big problem in the past, because scientific conclusions were usually agreed on by political and cultural leaders, and promoted as being in the public's best interests. 

Now, scientific facts are being wielded like weapons in a struggle for cultural supremacy, Kahan told Melissa Healy over at the LA Times, and the result is a "polluted science communication environment". 

So how can we do better? 

"Rather than taking on people's surface attitudes directly, tailor the message so that it aligns with their motivation," said Hornsey. "So with climate skeptics, for example, you find out what they can agree on and then frame climate messages to align with these."

The researchers are still gathering data for a peer-reviewed publication on their findings, but they presented their work to the scientific community for further dissemination and discussion in the meantime.

Hornsey told the LA Times that the stakes are too high to continue to ignore the 'anti-enlightenment movement'.

"Anti-vaccination movements cost lives," said Hornsey. "Climate change skepticism slows the global response to the greatest social, economic and ecological threat of our time."

"We grew up in an era when it was just presumed that reason and evidence were the ways to understand important issues; not fear, vested interests, tradition or faith," he added.

"But the rise of climate skepticism and the anti-vaccination movement made us realise that these enlightenment values are under attack."







Story source: 
 
The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.

Saturday, January 28, 2017

Doctors successfully treat two babies with leukemia using gene-edited immune cells

Scientists are using gene-editing techniques to fight cancer.
It's a promising approach, but still needs a lot more research

In a study out this week in the journal Science Translational Medicine, a group of British doctors reported that they had successfully “cured” two infants of the blood cancer leukemia using a treatment that involves genetically modified immune cells from a donor.

The study was incredibly small—just two babies—and the infants have only been free of leukemia for 16 and 18 months. Technically, that’s not long enough to say they are cured. Declaring someone who previously had cancer as “cured” usually doesn’t happen until that person has been free of the disease for a few years, at least. But what’s significant about this study is that it combines a promising, novel approach—CAR T cell therapy—with a relatively new gene-editing technique called TALENS, which enables the direct manipulation of genes within a person’s DNA.

In the cancer community, CAR T cell therapy is already touted as a promising immunotherapy treatment (which involves harnessing a person’s immune system to fight cancer on its own), but in preliminary trials, it’s had its limitations. Before it can become a universal cancer treatment, these kinks and logistics need to be worked out. And researchers in the field think that many of them can be solved using gene-editing techniques such as TALENS, the one used in this study, as well as CRISPR, supposedly the easiest such technique to date.


First, what is CAR T-cell treatment?

CAR T, which stands for chimeric antigen receptor T cell, is a new type of cancer treatment which is not yet publicly available, but is in active clinical trials in the United States as well as many other countries such as the United Kingdom and China. The therapy involves removing some T cells (specialized immune cells) from a patient's blood. Then those cells are genetically altered in a lab, giving them special receptors on their surface called CARs. Once the cells are ready, they are infused back into the patient’s blood, where the new (CAR) receptors seek out tumor cells, attach to them, and kill them.
CAR T-cell therapies are currently in phase II clinical trials in the United States. A few drug companies, including Novartis, have plans to make the therapy available as early as this year.


How does gene-editing help?

This new treatment has worked really well for blood cancers like leukemia, especially in young children. The problem, as the researchers point out in their study, is that each set of T cells have to be custom made for each patient. That takes a lot of time, and a lot of money. Further, it’s not always feasible, or even possible, to harvest T cells from leukemia patients who simply don’t have enough healthy ones to begin with.
And that's where gene-editing comes in. The researchers took T cells from donors and made a total of four genetic changes. The two they made with TALENS enabled the T cells to become universal - allowing them to be used in any person without the risk of the cells attacking their new host (a phenomenon called graft-versus-host disease, where the donated immune cells mount such an overwhelming attack on the recipient's body that the patient can die as a result). The other genetic alterations added that signature receptor to seek out and attack cancer.


What are the limitations of this study?

The two infants in the study - aged 11 and 18 months - both had an aggressive form of leukemia, and had already been subjected to other treatments like chemotherapy and stem cell transplants. And the fact that they have remained cancer free is extremely promising. But again, the study was small. Further, according to a report in MIT Technology Review, many CAR T experts argue that because the children also received other treatments simultaneously (one had a stem cell transplant soon after receiving the CAR T cells) it's impossible to know for sure whether the CAR T cells were the sole reason the cancer cells stayed away. "There is a hint of efficacy but no proof," Stephan Grupp, director of cancer immunotherapy at the Children's Hospital of Philadelphia, told MIT Tech Review. "It would be great if it works, but that just hasn't been shown yet."


What’s next?

The combination of CAR T cell immunotherapy with gene-editing remains an incredibly promising area of research, not only to create a "universal donor" CAR T cell, but also to make the treatment more effective. Researchers at the University of Pennsylvania are currently investigating the use of the gene-editing technique CRISPR to edit out two genes - called checkpoint inhibitors - that prevent CAR T cells from working as well as they should. The trial, which could take place this year, would be the first case of a CRISPR-altered cell being used in a human patient in the United States. In November, a Chinese group tested their first CRISPR gene-edited T cells in a patient with lung cancer.
However, it’s important to remember that CAR T cell therapy is in its early stages, and CRISPR/TALEN gene edited CAR T is even newer. There’s still a lot more work to be done, including many, many more studies like this one, with a lot more patients, before it’s available for everyone.




Story source:


The above post is reprinted from materials provided by PopSci. Note: Materials may be edited for content and length.

Saturday, December 10, 2016

Why do we have nightmares? What produces these depraved nocturnal deliriums, and what purpose do they serve?




Updated 03.12.2018

What produces these depraved nocturnal deliriums, and what purpose they serve, are questions that neuroscientists, shamans, and technicolor dreamcoat-wearers have attempted to answer since the dawn of man. And while the meanings of our nightmares may remain engulfed in shadowy mystery, we are at least beginning to understand why our hidden demons sometimes choose to visit us while we sleep.

What is a nightmare?

University of Colorado School of Medicine associate clinical professor James Pagel told IFLScience that “there’s actually a bunch of different types of frightening dreams occurring at all stages of sleep,” not all of which are classed as nightmares. Night terrors, for instance, tend to strike midway through the sleep cycle, during the deep sleep phase, and have no clear form or plot, but simply cause people to wake up with an intense and unexplainable feeling of fear.


Lakeshore Public Radio


Nightmares, on the other hand, are experienced during the rapid eye movement (REM) phase, which occurs at the end of the sleep cycle. According to Pagel, nightmares are simply “dreams with a frightening story,” and are extremely common, affecting almost everybody at some point in their lives – especially during childhood and adolescence.

According to one study, between 5 and 8 percent of adults have recurring nightmares, while between 20 and 39 percent of children under the age of 12 regularly find themselves plunged into the haunted house inside their minds after lights-out. In the majority of cases this is not a serious problem, as Pagel says that scary dreams are to be expected from time to time, particularly if we have experienced something a little unsettling during the day, like watching a horror movie.


Lakeshore Public Radio

Things can get a little problematic, however, if a person develops nightmare disorder, whereby frequent nightmares stop them from sleeping properly and start to cause them distress during waking hours.


Why do we have nightmares?

Post-traumatic stress disorder (PTSD) has been identified as a major cause of nightmare disorder, as people who have been through major traumatic experiences are often plagued by fear and anxiety even while they sleep. A recent study found that 80 percent of those who suffer from the condition report regular nightmares, while another discovered that 53 percent of Vietnam War veterans often have scary dreams, compared to just 3 percent of the general population.

UBC Wiki


Medications that disrupt the hormones and neurotransmitters that regulate REM sleep can also lead to terrifying dreams, while people with psychological disorders tend to be particularly nightmare prone as well.


How these conditions turn the slumbering brain into a ghoul-infested realm of terror is not yet fully understood, although abnormalities in neural activity have been observed in people with nightmare disorder. For example, a brain region called the amygdala, which controls fear and learning, has been found to be overactive in PTSD patients who complain of regular nightmares, while some of the brain’s emotion centers, such as the paralimbic system, also often tend to be highly active in those who experience frequent nightmares.


As scary as nightmares may seem, they can't hurt you and can actually help you understand your own mind. Kiselev Andrey Valerevich/Shutterstock
Can nightmares hurt you?

“I think nightmares are wonderful,” says Pagel enthusiastically. “Dreams are basically a cognitive feedback system on how your brain is functioning, and nightmares, more so than other dreams, give you feedback on what’s going on inside your head.” Rather than harming us, therefore, nightmares actually help us to understand our own psyche, and for this reason can actually be extremely beneficial, especially in terms of unlocking our inner creativity.

Because of this, Pagel says that “people with frequent nightmares tend to have more creative personalities, and almost all creative types report nightmares more than others do.”

However, he does warn that having too many disturbing nightmares can also play a part in causing, or at least aggravating, PTSD, which in turn massively increases the risk of a person committing suicide.


On top of this, some people may also suffer from REM behavior disorder, whereby they physically act out their dreams while they sleep. Strangely, this condition is most common among middle-aged men, and arises when a brain area called the pons – which is responsible for paralyzing our muscles while we sleep – doesn’t function properly, causing us to get up and move around. Though you don’t have to be having a nightmare for this to be dangerous, it’s not hard to imagine how dreams about running away from monsters or fighting for one’s life could place sleepers and those around them in serious peril.







Story source: 
The above post is reprinted from materials provided by IFLScience. Note: Materials may be edited for content and length.

Friday, September 30, 2016

Where the brain retains memories

For the first time, scientists have been able to identify what they believe are the places where our memories are formed and stored. In short, these are the neurons that handle information about who we are and about the things we have done and lived through in the past.

"Because we were able to highlight these places in laboratory mice brains, hopefully we can get to know more about how memories are formed in our brains," said the researchers behind this study

Queensland Brain Institute - University of Queensland

A team from the Institute of Neurobiology in France made the discovery by using a fluorescent protein in the neurons of four mice. The protein lights up cells containing calcium ions, in this case showing when the neuron in question is active.

The laboratory mice were tested while running on a running wheel. During the "run", their neurons lit up, indicating that memories were forming that helped them keep track of the distance travelled. When the animals were resting, the light appeared only in certain parts of the brain, indicating that the information that helped the mice remember different parts of the "run" had been stored there.

"We managed to outline how memories are formed," explained Rosa Cossart, study leader.

Of course, the study has been questioned by other researchers, who argue that we cannot know for sure whether these neurons are really the ones responsible for storing memories. They point out that there is no obvious reason why the mice's experience on the running wheel should be stored in memory and divided into distinct blocks of cells. Even so, they consider the work interesting and worth taking further, refining, and testing in other ways.


So far, scientists have clarified only a few aspects of how memories are stored in the brain - for example, that the hippocampus contains cells that help rats remember the world around them - but they cannot yet explain how and why these neurons light up, or how the process spreads through the brain.

Source: Descopera

Wednesday, September 28, 2016

Ranking of countries with the healthiest people 2018

A study published in the medical journal The Lancet has revealed the countries with the healthiest populations.

The research, funded by the Bill & Melinda Gates Foundation, was driven largely by the United Nations.

"The study is the result of a massive collaboration lasting 10 years, based on the global distribution of diseases," Bloomberg reported. Iceland was ranked first, and Britain was ranked fifth.

In last place, 188th, is the Central African Republic, followed by Somalia, South Sudan, Nigeria, and Chad.

"Our analysis highlighted the importance of income, education and fertility as indicators of improving health," said one of the study's authors.

In the study, Romania was ranked 74th. Among the most common problems facing Romanians are tuberculosis, alcoholism, and smoking.

Full rankings can be viewed here

Source: IFLScience

Saturday, September 24, 2016

The oldest civilization still in existence today. They were the first explorers who dared to cross the ocean 58,000 years ago

Extensive genetic studies have shown that Aboriginal Australians represent the oldest civilization still in existence. Australian Aborigines were the first people to settle in Australia, around 50,000 years ago.

The three studies, published in the journal Nature, reveal information about the origins and historical migrations of mankind.



According to the DNA results, most modern Eurasians are descendants of a single wave of migrants who left Africa about 72,000 years ago. During the migration, the ancestors of Aboriginal Australians and Papuans (the forebears of the inhabitants of Papua New Guinea) split off from the other groups and became the first people to cross the ocean, about 58,000 years ago, before reaching Australia around 50,000 years ago.

"This story has been missing from science," said researcher Eske Willerslev of the University of Copenhagen, Denmark. "Now we know the ancestors of the Australians were the first explorers," he added.



The ancestral Australian and Papuan populations diverged about 37,000 years ago, before the continental land masses themselves separated. Australian Aborigines then remained largely isolated until nearly 4,000 years ago, but over their journey of thousands of years they did come into contact with other hominin species: about 4 percent of their genome belongs to an unidentified hominid species.


The Oldest Humans, Aboriginal Australians photo: Anthropology.net

To reach this conclusion, the international team of researchers sequenced the genomes of 25 Papuans and 83 Aboriginal Australians from groups speaking Pama-Nyungan languages, which are used by 90 percent of Aboriginal Australians.

In a second study, led by Harvard Medical School, a genome map was drawn from 300 people belonging to 142 different populations around the world, through which the researchers tried to find genetic changes associated with the emergence of distinctly modern human behaviours such as cave painting and the crafting of sophisticated tools, but no such signature was discovered.

"There is no evidence of a specific mutation that turned us into people," said Willerslev.

Although the first two studies suggested that there was a single wave of migration from Africa, the third paper provides evidence of two waves of migration out of Africa.

Eske Willerslev, world-famous DNA scientist and adventurer

Led by Luca Pagani, a biologist and anthropologist at the Estonian Biocentre in Tartu, the third study found evidence of a major migration about 75,000 years ago, as well as evidence of an earlier migration around 120,000 years ago.

Dipartimento di Biologia - Unipd







Story source: 


The above post is reprinted from materials provided by ScienceAlert. Note: Materials may be edited for content and length.