Would you vote for a self aware android?

Shadowprophet

Truthiness
Well, there is a lot to say about artificial intelligence. The term AI is thrown around all willy-nilly these days, and people have begun to wonder if their Android phone is sentient.

Artificial intelligence comes in a variety of stages, and we use it in a variety of ways.
There are several stages of AI. At the most basic level are rudimentary scripted programs designed to react in a variety of ways depending on how we interact with them, like Siri or Google Home.

Then there is AGI, Artificial General Intelligence: a machine capable of thinking, reacting, and emulating human intelligence. We have made some progress in this field. In fact, there is a test to measure just how human-like a machine can be, called the Turing test. Turing test - Wikipedia

Here is an article about an AI named Eugene Goostman, a chatbot programmed to emulate, as best it could, a 13-year-old Ukrainian boy. Computer convinces panel it is human
AGI isn't perfect. It's very intricate, and even so it's still not a sentient consciousness. It's just a very, very complicated program based on the principles of AI, which means it's not conscious; it's scripted.

Then there is Artificial Super Intelligence, or ASI, a term for the point at which an AI or AGI surpasses human intelligence. This hasn't happened yet, but many expect it will someday.

In none of these cases was the AI self-aware. The idea that a machine can be conscious is a fantastic concept, and to be honest, yes, I believe it's possible that someday, somehow, a machine could become sentient. The problem is, we are a very long way from sentient machines: machines that can truly think for themselves, with a consciousness.

Honestly, it may never happen, but I'm willing to bet it will someday. Things like whole brain emulation are already being researched, and we have even been able to emulate simple brains inside machines, like those of bees and other insects in drones. Connecting the Brain to Itself through an Emulation

Anyway, this isn't my favorite topic. I actually find AI not sciency enough for me. I'd rather be thinking about physics, but that's just me.
 

nivek

As Above So Below

wwkirk

Divine
Existential risk from artificial general intelligence
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. For instance, the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The likelihood of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s, and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.

One source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. For example, in one scenario, the first-generation computer program found able to broadly match the effectiveness of an AI researcher is able to rewrite its algorithms and double its speed or capabilities in six months of massively parallel processing time. The second-generation program is expected to take three months to perform a similar chunk of work, on average; in practice, doubling its own capabilities may take longer if it experiences a mini-"AI winter", or may be quicker if it undergoes a miniature "AI spring" where ideas from the previous generation are especially easy to mutate into the next generation. In this scenario the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas. More broadly, examples like arithmetic and Go show that progress from human-level AI to superhuman ability is sometimes extremely rapid.
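The doubling arithmetic in that scenario can be sketched in a few lines of Python. The six-month first generation and the strict halving of each subsequent generation's time are illustrative assumptions taken from the paragraph above, not forecasts:

```python
def total_time(generations: int, first_gen_months: float = 6.0) -> float:
    """Cumulative months elapsed after `generations` self-improvement cycles,
    assuming each generation halves the time needed for the next one
    (6 months, then 3, then 1.5, ...)."""
    return sum(first_gen_months / 2**g for g in range(generations))

# The geometric series converges: no matter how many generations occur,
# the total elapsed time stays under 12 months in this idealized model.
print(total_time(1))   # 6.0
print(total_time(2))   # 9.0
print(total_time(50) < 12.0)  # True
```

This is why the scenario is called an "explosion": under these assumptions an arbitrarily large number of improvement cycles fits inside a fixed, short window of calendar time.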

A second source of concern is that controlling a superintelligent machine (or even instilling it with human-compatible values) may be an even harder problem than naïvely supposed. Some AGI researchers believe that a superintelligence would naturally resist attempts to shut it off, and that preprogramming a superintelligence with complicated human values may be an extremely difficult technical task.
 

wwkirk

Divine
AI experts list the real dangers of artificial intelligence

A 100-page report written by artificial intelligence experts from industry and academia has a clear message: Every AI advance by the good guys is an advance for the bad guys, too.

The paper, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” calls this the “dual-use” attribute of AI, meaning the technology’s ability to make thousands of complex decisions every second could be used to either help or harm people, depending on the person designing the system. The experts considered the malicious uses of AI that either currently exist or could be developed over the next five years, and broke them out into three groups: digital, physical, and political.

Here is a selected list of the potential harms discussed:

Digital:
  • Automated phishing, or creating fake emails, websites, and links to steal information.
  • Faster hacking, through the automated discovery of vulnerabilities in software.
  • Fooling AI systems, by taking advantage of the flaws in how AI sees the world.
Physical:
  • Automating terrorism, by using commercial drones or autonomous vehicles as weapons.
  • Robot swarms, enabled by many autonomous robots trying to achieve the same goal.
  • Remote attacks, since autonomous robots wouldn’t need to be controlled within any set distance.
Political:
  • Propaganda, through easily-generated fake images and video.
  • Automatic dissent removal, by automatically finding and removing text or images.
  • Personalized persuasion, taking advantage of publicly-available information to target someone’s opinions.
The report paints a bleak picture of our potential future, especially since the timeframe is a mere five years. But it doesn’t mean we’re resigned to dystopia. Researchers are already working on some potential solutions to these problems, though they warn it will likely be a cat-and-mouse game.
 