Showing posts with label artificial intelligence techniques. Show all posts

Sunday, January 29, 2017

Artificial Intelligence Used to ID Skin Cancer: Deep learning algorithm does as well as dermatologists in identifying skin cancer

A dermatologist using a dermatoscope, a type of handheld microscope, to look at skin. Computer scientists at Stanford have created an artificially intelligent diagnosis algorithm for skin cancer that matched the performance of board-certified dermatologists. Credit: Matt Young
It's scary enough making a doctor's appointment to see if a strange mole could be cancerous. Imagine, then, that you were in that situation while also living far away from the nearest doctor, unable to take time off work and unsure you had the money to cover the cost of the visit. In a scenario like this, an option to receive a diagnosis through your smartphone could be lifesaving.

Universal access to health care was on the minds of computer scientists at Stanford when they set out to create an artificially intelligent diagnosis algorithm for skin cancer. They made a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test, it performed with inspiring accuracy.

"We realized it was feasible, not just to do something well, but as well as a human dermatologist," said Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory. "That's when our thinking changed. That's when we said, 'Look, this is not just a class project for students, this is an opportunity to do something great for humanity.'"

The final product, the subject of a paper in the Jan. 25 issue of Nature, was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.

Why skin cancer

Every year there are about 5.4 million new cases of skin cancer in the United States, and while the five-year survival rate for melanoma detected in its earliest stages is around 97 percent, that drops to approximately 14 percent if it's detected in its latest stages. Early detection is therefore likely to have an enormous impact on skin cancer outcomes.

Diagnosing skin cancer begins with a visual examination. A dermatologist usually looks at the suspicious lesion with the naked eye and with the aid of a dermatoscope, which is a handheld microscope that provides low-level magnification of the skin. If these methods are inconclusive or lead the dermatologist to believe the lesion is cancerous, a biopsy is the next step.

Bringing this algorithm into the examination process follows a trend in computing that combines visual processing with deep learning, a type of artificial intelligence modeled after neural networks in the brain. Deep learning has a decades-long history in computer science, but it has only recently been applied to visual processing tasks, with great success. The essence of machine learning, including deep learning, is that a computer is trained to figure out a problem rather than having the answers programmed into it.

"We made a very powerful machine learning algorithm that learns from data," said Andre Esteva, co-lead author of the paper and a graduate student in the Thrun lab. "Instead of writing into computer code exactly what to look for, you let the algorithm figure it out."

The algorithm was fed each image as raw pixels with an associated disease label. Compared to other methods for training algorithms, this one requires very little processing or sorting of the images prior to classification, allowing the algorithm to work off a wider variety of data.
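In code, that supervised setup looks roughly like the sketch below. This is a minimal illustration, not the authors' implementation: the skin_images folder layout, the tiny stand-in network, and all hyperparameters are hypothetical.

```python
# Minimal sketch of supervised training on raw-pixel images paired with
# disease labels (PyTorch). Folder layout is assumed to be
# skin_images/<disease_label>/*.jpg -- a hypothetical path.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Images go in as raw pixels; the only preprocessing is resizing to a
# fixed shape and converting to tensors.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("skin_images", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# A deliberately tiny stand-in for a deep convolutional network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, len(train_set.classes)),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:      # one pass over the training data
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```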

From cats and dogs to melanomas and carcinomas

Rather than building an algorithm from scratch, the researchers began with an algorithm developed by Google that was already trained to identify 1.28 million images from 1,000 object categories. While it was primed to be able to differentiate cats from dogs, the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis.
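The pretrained network described here is a Google Inception architecture trained on the ImageNet dataset (the source of the 1.28 million images in 1,000 categories). The transfer-learning move -- keep the pretrained features, swap the final layer for skin-disease classes -- can be sketched as follows; the class count and the freezing strategy are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of the transfer-learning step: load an ImageNet-pretrained
# Inception v3 and replace its 1,000-way classifier with a new head
# for skin-disease categories. Details here are illustrative.
import torch.nn as nn
from torchvision import models

num_skin_classes = 2032            # assumed count; the article says only
                                   # "over 2,000 different diseases"
net = models.inception_v3(pretrained=True)       # ImageNet weights
net.fc = nn.Linear(net.fc.in_features, num_skin_classes)
net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, num_skin_classes)

# One common recipe (an assumption, not the paper's): freeze the
# pretrained feature extractor and train only the new heads at first.
for name, p in net.named_parameters():
    if "fc" not in name:
        p.requires_grad = False
```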

"There's no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own," said Brett Kuprel, co-lead author of the paper and a graduate student in the Thrun lab. "We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy -- the labels alone were in several languages, including German, Arabic and Latin."

After going through the necessary translations, the researchers collaborated with dermatologists at Stanford Medicine, as well as Helen M. Blau, professor of microbiology and immunology at Stanford and co-author of the paper. Together, this interdisciplinary team worked to classify the hodgepodge of internet images. Many of these, unlike those taken by medical professionals, were varied in terms of angle, zoom and lighting. In the end, they amassed about 130,000 images of skin lesions representing over 2,000 different diseases.

During testing, the researchers used only high-quality, biopsy-confirmed images provided by the University of Edinburgh and the International Skin Imaging Collaboration Project that represented the most common and deadliest skin cancers -- malignant carcinomas and malignant melanomas. The 21 dermatologists were asked whether, based on each image, they would proceed with biopsy or treatment, or reassure the patient. The researchers evaluated success by how well the dermatologists were able to correctly diagnose both cancerous and non-cancerous lesions in over 370 images.

The algorithm's performance was measured through the creation of a sensitivity-specificity curve, where sensitivity represented its ability to correctly identify malignant lesions and specificity represented its ability to correctly identify benign lesions. It was assessed through three key diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. In all three tasks, the algorithm matched the performance of the dermatologists with the area under the sensitivity-specificity curve amounting to at least 91 percent of the total area of the graph.

An added advantage of the algorithm is that, unlike a person, it can be made more or less sensitive, allowing the researchers to tune its response depending on what they want it to assess. This ability to alter the sensitivity hints at the depth and complexity of the algorithm: the underlying architecture, pretrained on seemingly irrelevant photos -- including cats and dogs -- helps it better evaluate the skin lesion images.
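Both the sensitivity-specificity curve and the threshold tuning are straightforward to express in code. The sketch below uses scikit-learn, with toy labels and scores standing in for the biopsy-confirmed diagnoses and the model's outputs; the target sensitivity is an arbitrary example value.

```python
# Illustrative sensitivity-specificity analysis with scikit-learn.
# y_true/y_score are toy stand-ins for biopsy-confirmed labels and the
# model's predicted probability that a lesion is malignant.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0])      # 1 = malignant
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
sensitivity = tpr           # fraction of malignant lesions caught
specificity = 1 - fpr       # fraction of benign lesions cleared
print("area under the curve:", roc_auc_score(y_true, y_score))

# Tuning the operating point: pick the highest threshold that reaches a
# target sensitivity, accepting whatever specificity comes with it.
target = 0.95
idx = int(np.argmax(sensitivity >= target))
print("threshold:", thresholds[idx],
      "sensitivity:", sensitivity[idx],
      "specificity:", specificity[idx])
```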

Health care by smartphone

Although this algorithm currently exists on a computer, the team would like to make it smartphone compatible in the near future, bringing reliable skin cancer diagnoses to our fingertips.

"My main eureka moment was when I realized just how ubiquitous smartphones will be," said Esteva. "Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?"

The team believes it will be relatively easy to transition the algorithm to mobile devices, but further testing in a real-world clinical setting is still needed.

"Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients," said Susan Swetter, professor of dermatology and director of the Pigmented Lesion and Melanoma Program at the Stanford Cancer Institute, and co-author of the paper. "However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike."

Even in light of the challenges ahead, the researchers are hopeful that deep learning could someday contribute to visual diagnosis in many medical fields.

Story source: 
The above post is reprinted from materials provided by ScienceDaily. Note: Materials may be edited for content and length.

Wednesday, June 29, 2016

Cognitive Computing: A new generation of robots able to solve moral dilemmas (following a code of ethics)

Updated 05/05/2020

The cognitive robotic process automation market is growing rapidly because of the expanding adoption of a digital workforce across various industries. Cognitive technologies integrated within robotic process automation automate complex, repetitive tasks by enabling decision-making capabilities.

Move Over Robotic Process Automation, Cognitive Computing Is Here (YourStory)


Robots have been used for decades to automate specific processes. Vehicle assembly lines, where robots replaced humans in performing monotonous, repetitive tasks, are the best-known example of robotic process automation.

The cognitive robotic process automation market is expected to grow at a CAGR of more than 55 percent during the 2020-2027 forecast period (bccourier).

----------------------------------------------------------------------------------------------------------------------
Scientists have developed a model that could give computers the ability to reason more like humans and even make moral decisions.


Northwestern University's Ken Forbus is closing the gap between humans and machines.

Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve moral dilemmas.

"In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."

The theory underlying the model is psychologist Dedre Gentner's structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena. Structure-mapping argues that analogy and similarity involve comparisons between relational representations, which connect entities and ideas, for example, that a clock is above a door or that pressure differences cause water to flow.
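As a toy illustration of what "comparisons between relational representations" means (vastly simpler than SME itself; the water/heat predicates and the greedy matcher below are my own illustrative choices), relational statements can be written as nested tuples and an analogy found by building a consistent symbol-to-symbol correspondence:

```python
# Toy structure-mapping sketch: statements are nested tuples, and an
# analogy is a consistent one-to-one correspondence between the symbols
# of two relational descriptions. Far simpler than SME itself.
water = [
    ("greater", ("pressure", "beaker"), ("pressure", "vial")),
    ("flows", "water", "beaker", "vial"),
]
heat = [
    ("greater", ("temperature", "coffee"), ("temperature", "ice")),
    ("flows", "heat", "coffee", "ice"),
]

def align(a, b, mapping):
    """Extend mapping so expression a corresponds to expression b,
    or return None if no consistent extension exists."""
    if isinstance(a, tuple) and isinstance(b, tuple):
        if len(a) != len(b):
            return None
        for x, y in zip(a, b):
            mapping = align(x, y, mapping)
            if mapping is None:
                return None
        return mapping
    if isinstance(a, tuple) or isinstance(b, tuple):
        return None
    # Plain symbols: keep the correspondence one-to-one.
    if a in mapping:
        return mapping if mapping[a] == b else None
    if b in mapping.values():
        return None
    return {**mapping, a: b}

def structure_map(base, target):
    """Greedily pair statements whose top-level relations are identical."""
    mapping = {}
    for s in base:
        for t in target:
            if s[0] == t[0]:
                extended = align(s, t, mapping)
                if extended is not None:
                    mapping = extended
                    break
    return mapping

print(structure_map(water, heat))
# -> pressure->temperature, beaker->coffee, vial->ice, water->heat
```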

Analogies can be complex (electricity flows like water) or simple (his new cell phone is very similar to his old phone). Previous models of analogy, including prior versions of SME, have not been able to scale to the size of representations that people tend to use. Forbus's new version of SME can handle the size and complexity of relational representations that are needed for visual reasoning, cracking textbook problems, and solving moral dilemmas.

"Relational ability is the key to higher-order cognition," said Gentner, Alice Gabrielle Twight Professor in Northwestern's Weinberg College of Arts and Sciences. "Although we share this ability with a few other species, humans greatly exceed other species in ability to represent and reason with relations."

Supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), and the Air Force Office of Scientific Research, Forbus and Gentner's research is described in the June 20 issue of the journal Cognitive Science. Andrew Lovett, a postdoctoral fellow in Gentner's laboratory, and Ronald Ferguson, a PhD graduate from Forbus's laboratory, also authored the paper.


Many artificial intelligence systems -- like Google's AlphaGo -- rely on deep learning, a process in which a computer learns by examining massive amounts of data. By contrast, people -- and SME-based systems -- often learn successfully from far fewer examples. In moral decision-making, for example, a handful of stories suffices to enable an SME-based system to learn to make decisions as people do in psychological experiments.

"Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly," Forbus said.

SME has also been used to learn to solve physics problems from the Advanced Placement test, with a program being trained and tested by the Educational Testing Service. As further demonstration of the flexibility of SME, it also has been used to model multiple visual problem-solving tasks.

To encourage research on analogy, Forbus's team is releasing the SME source code and a 5,000-example corpus, which includes comparisons drawn from visual problem solving, textbook problem solving, and moral decision making.

The range of tasks successfully tackled by SME-based systems suggests that analogy might lead to a new technology for artificial intelligence systems as well as a deeper understanding of human cognition. For example, using analogy to build models by refining stories from multiple cultures that encode their moral beliefs could provide new tools for social science. Analogy-based artificial intelligence techniques could be valuable across a range of applications, including security, health care, and education.


"SME is already being used in educational software, providing feedback to students by comparing their work with a teacher's solution," Forbus said. But there is a vast untapped potential for building software tutors that use analogy to help students learn."

You may also like: Advanced prosthetic robot arm, the free solution for all war veterans.

The above post is reprinted from materials provided by Northwestern University. Note: Materials may be edited for content and length.