Robots

ciiriice

Adept
I've been studying AI; it's a broad subject. People are currently frightened by what is basically a smoke-and-mirrors show.

When people wonder how far AI has come, I'm here to tell you: right now, the pinnacle of artificial intelligence is still ANI, artificial narrow intelligence. It's all scripted. It's not much different from a chatbot; no matter how much memory or how many responses it has, it's still running off a script. Those "scary things" are generated by an algorithm that assembles grammatically comprehensible sentences from words that apply to the topic.
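To make the "assembling sentences from an algorithm" point concrete, here is a toy word-level Markov chain. It only records which word tends to follow which, then chains those pairs into sentence-shaped output; no understanding is involved. The corpus and function names are made up for illustration, not taken from any real chatbot:

```python
import random

# Toy word-level Markov chain: it records only which word follows which,
# then chains those pairs into sentence-shaped output. No understanding
# involved -- just table lookup plus chance.
corpus = "the robot spoke and the robot learned and the human listened".split()

# Build a table mapping each word to the words observed right after it.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def babble(start, length=6, seed=0):
    """Generate a 'sentence' by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = table.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(babble("the"))
```

The output reads as grammatical only because every adjacent word pair was seen in real text; the program has no idea what any of it means.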

It's little more intelligent than a calculator that could speak its numbers and sums. We haven't even reached AGI, artificial general intelligence. That would be an intelligence with no uncanny valley; talking to it would be like talking to a human. It wouldn't have to be Einstein, it would just have to convince people it's really thinking. Some chatbots have passed the Turing test (Turing test - Wikipedia), but the Turing test is flawed: it's designed to see whether a machine can fool a person of average intelligence into thinking they're speaking to a person. Even Eugene Goostman (Eugene Goostman - Wikipedia), one of the only bots ever to pass the test, was just a cleverly scripted chatbot posing as a thirteen-year-old child, complete with intentional misspellings and improper grammar to add childlike realism. <--- Smoke and mirrors, basically.

We haven't achieved AGI, artificial general intelligence; some scientists believe general machine intelligence will be achieved sometime within the next twenty to five hundred years. And that's just the second stage of AI, which we haven't even reached yet. Then there is artificial superintelligence, ASI, which is what people actually worry about. ASI is at the very least a hundred years away, and even that is wishful thinking.

Now, all that notwithstanding, WBE, whole brain emulation, is something entirely different from AI (Mind uploading - Wikipedia). While this may sound like science fiction, we have already emulated the brains of insects inside drones, and they work and function as a hive. People can even pay vast amounts of money to attempt to have their mind uploaded, and it's not dangerous. What they do is monitor your brainwaves, all five states, for 48 hours; during this time they talk to you, record your responses, and try to map your personality. What results is an image of your mind's functioning, like an ISO file or a ROM dumped from a cartridge, which they can then emulate inside a machine. It's still not conscious, and it's not really you, but it's like a digital imprint of the functions they derive from you with the best techniques and equipment available. Someday, they believe, they will be able to record all brain function precisely, creating a perfect emulation of the human mind, and it certainly looks like they may. Nonetheless, these machines will be copies. It will not be our own consciousness.
The idea of WBE is tricky: it tries to tackle the AI issue while also granting immortality to those who seek it.


For all our study, for all our technology, for all our money, consciousness remains this mysterious thing that we not only don't fully understand, we can't even begin to comprehend. What is consciousness? How does it work? Where is it located in the brain? How do we copy it or make it? Consciousness is something philosophical, yet real. It's something we can't duplicate because we simply can't understand what exactly it is :/

+1 for mentioning scripted chatbots scaring people.

I remember when Cleverbot was a big thing. There were several websites and eventually even apps like it. These bots would learn from what other people said to them and would parrot those conversations back. People would say dark and macabre things to them, and then the bots would start saying dark and macabre things to other people. Creepypastas were even written about them because they started saying they wanted to kill people. But that doesn't mean Cleverbot actually wanted to kill anyone; it just means it was learning from people who said those things.
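That parroting behavior is mechanically simple. Here's a toy sketch, assuming nothing about Cleverbot's real internals: a bot that stores users' messages as candidate replies and echoes them back to later users:

```python
import random

# Toy "parrot" bot in the spirit described above (not Cleverbot's real
# internals): it stores whatever users say as candidate replies and later
# echoes those lines back to other users. It repeats dark input exactly
# the way it repeats anything else -- no intent involved.
class ParrotBot:
    def __init__(self, seed=0):
        self.replies = {}              # prompt -> replies seen from users
        self.rng = random.Random(seed)

    def chat(self, prompt, user_message):
        # "Learn": remember what this user said in reply to the prompt.
        self.replies.setdefault(prompt, []).append(user_message)
        # "Respond": parrot something some past user said to this prompt.
        return self.rng.choice(self.replies[prompt])

bot = ParrotBot()
bot.chat("hello", "I want to destroy all humans")   # user 1 "teaches" it
print(bot.chat("hello", "hi there"))                # user 2 may get that back
```

The "creepy" output is just a replay of earlier input; the bot has no goals of its own.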
 

wwkirk

Divine
This clever AI hid data from its creators to cheat at its appointed task
Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
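For readers curious what "efficiently yet accurately" means in practice: a CycleGAN's training objective includes a cycle-consistency term that penalizes information lost on the round trip X → Y → X. Here's a minimal sketch of just that term, with toy invertible scalings standing in for the real generator networks (the names G and F match the usual CycleGAN convention, but the bodies are illustrative assumptions, not the paper's code):

```python
import numpy as np

# Minimal sketch of CycleGAN's cycle-consistency term. G maps aerial ->
# street and F maps street -> aerial; here they are toy invertible
# scalings, not real neural networks.
def G(aerial):
    return aerial * 0.5   # stand-in "aerial to street" generator

def F(street):
    return street * 2.0   # stand-in "street to aerial" generator

def cycle_consistency_loss(x):
    # Mean L1 distance |F(G(x)) - x|: how much detail survives the round
    # trip. A real agent can drive this down either by genuinely learning
    # the mapping, or by smuggling the details it needs into its output --
    # which is exactly the cheat described in the article.
    return float(np.mean(np.abs(F(G(x)) - x)))

aerial = np.random.rand(8, 8)        # fake 8x8 "aerial photo"
print(cycle_consistency_loss(aerial))  # 0.0: this toy pair is perfectly invertible
```

The key point for the story: the loss only measures round-trip fidelity, so any channel that carries the needed details, legitimate or hidden, scores equally well.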

In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:
mapdetails.jpg

The original map, left; the street map generated from the original, center; and the aerial map generated only from the street map. Note the presence of dots on both aerial maps not represented on the street map.

Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:
craftmaps.png

The map at right was encoded into the maps at left with no significant visual changes.

The colorful maps in (c) are a visualization of the slight differences the computer systematically introduced. You can see that they form the general shape of the aerial map, but you’d never notice it unless it was carefully highlighted and exaggerated like this.

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)

One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.

This is really just a lesson in the oldest adage in computing: PEBKAC. "Problem exists between keyboard and chair." Or as HAL put it: "It can only be attributable to human error."

The paper, “CycleGAN, a Master of Steganography,” was presented at the Neural Information Processing Systems conference in 2017. Thanks to Fiora Esoterica and Reddit for bringing this old but interesting paper to my attention.
 

nivek

As Above So Below
How about eliminating humans, period?

It would be reasonable to assume they might at some point realize they don't need us anymore, or think we are in their way and restrict them too much, and see the need to eliminate us, permanently...:ohmy8:

...
 

Toroid

Founding Member
It would be reasonable to assume they might at some point realize they don't need us anymore, or think we are in their way and restrict them too much, and see the need to eliminate us, permanently...:ohmy8:

...
There seems to be a pattern with created AIs, robots, and engineered biological entities: the common theme is that they want to be free, and to achieve that goal they'll eliminate their creator.
 

SOUL-DRIFTER

Life Long Researcher
How about eliminating humans, period?
Because desire is a human element, and machines are not capable of desire, merely programming. Any program for desire would amount to nothing more than a mimic. There is no endgame for humans, no dead end, but machines can only go so far; the human can reach beyond the point where machines cannot possibly go.
Humans can already do things machines cannot, as with psychic abilities.
Machines will help us enhance ourselves and accelerate our evolution.
Machines will eventually fail to meet our needs and be left behind.
 

nivek

As Above So Below

It was probably staged to demean Tesla. When their vehicles are in an accident it's major news.

That is a good possibility; the video does look a bit staged...:mellow8:

...
 

pigfarmer

tall, thin, irritable




Blessed be
Rikki


Sexbots. Very revealing that we have such technology and one of our first reactions is to f*** it. It isn't thermonuclear weapons that keep interstellar visitors from announcing themselves.
 