The findings are concerning, considering deep learning could soon be used to control deadly military weapons and cars (stock image of a drone)

Scientists admit that computers are learning too quickly for humans to keep up.


From driving cars to beating chess masters at their own game, computers are already performing incredible feats.
And artificial intelligence is quickly advancing, allowing computers to learn from experience without the need for human input.
But scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether.

Scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether. Pictured is the Terminator film, in which robots take over - a prospect that could soon become a reality

Last year, a driverless car that ran without any human intervention took to the streets of New Jersey.
The car, created by Nvidia, could make its own decisions after learning from watching humans drive.
But despite creating the car, Nvidia admitted that it wasn't sure how the car was able to learn in this way, according to MIT Technology Review.

The car's underlying technology was 'deep learning' – a powerful tool based on the neural layout of the human brain.
Deep learning is used in a range of technologies, including tagging your friends on social media, and allowing Siri to answer questions.
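For readers who want a feel for what a deep neural network actually is, the sketch below shows a toy forward pass in Python with NumPy - purely illustrative, with no relation to Nvidia's actual system. Data flows through stacked layers of weights, and the 'learning' consists of adjusting those weights.

    import numpy as np

    def relu(x):
        # Simple non-linearity applied between layers
        return np.maximum(0, x)

    rng = np.random.default_rng(0)

    # A toy 'deep' network: three layers mapping 4 inputs to 1 output.
    # Real systems have far more layers and units than this.
    layer_sizes = [4, 16, 16, 1]
    weights = [rng.normal(size=(m, n))
               for m, n in zip(layer_sizes, layer_sizes[1:])]

    def forward(x):
        # Pass the input through each layer in turn. The network's
        # 'knowledge' lives entirely in the weight matrices, which is
        # why it is hard to read off what it has learned.
        for w in weights[:-1]:
            x = relu(x @ w)
        return x @ weights[-1]

    print(forward(np.array([1.0, 0.5, -0.2, 0.3])))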

The system is also being used by the military, which hopes to use deep learning to steer ships, destroy targets and control deadly drones.
There is also hope that deep learning could be used in medicine to diagnose rare diseases.
But if its creators lose control of the system, we're in big trouble, experts claim.

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network.

Speaking to MIT Technology Review, Professor Tommi Jaakkola, who works on applications of deep learning, said: 'If you had a very small neural network [deep learning algorithm], you might be able to understand it.'
'But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.'
This is concerning, considering deep learning could soon be used to control deadly military weapons and cars.
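Professor Jaakkola's point about scale can be made concrete with rough arithmetic - the figures below are illustrative, not measurements of any real system. The number of learned weights grows with both the depth and the width he describes:

    def parameter_count(layers, units_per_layer):
        # Each fully connected layer contributes roughly units * units
        # weights, so parameters grow quickly with depth and width.
        return layers * units_per_layer ** 2

    # 'A very small neural network' versus the large ones described
    print(parameter_count(3, 10))      # 300 weights - plausibly inspectable
    print(parameter_count(100, 1000))  # 100,000,000 weights - not inspectable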
In a recent study, a computer was tasked with predicting disease by analysing patient records.

Results showed that the computer was extremely accurate in diagnosing schizophrenia – but even its creators did not know why.
Dr Joel Dudley, who led the project at New York's Mount Sinai Hospital, said: 'We can build these models, but we don't know how they work.'
In the hopes of staying in control of these powerful systems, many of the world's largest technology firms created an 'AI ethics board' in 2016.
Researchers with Alphabet, Amazon, Facebook, IBM, and Microsoft teamed up to create the new group, known as the Partnership on Artificial Intelligence to Benefit People and Society, to develop a standard of ethics for the development of AI. However, some big names are missing from the group - including Apple.

THE EIGHT RULES OF THE PARTNERSHIP ON AI 

1. We will seek to ensure that AI technologies benefit and empower as many people as possible.
2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
3. We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.
4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.

6. We will work to maximize the benefits and address the potential challenges of AI technologies, by:
    Working to protect the privacy and security of individuals.
    Striving to understand and respect the interests of all parties that may be impacted by AI advances.
    Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
    Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
    Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.


