Robots

Discussion in 'Science, Tech, & Space Exploration' started by Toroid, Jun 30, 2018.

  1. wwkirk

    wwkirk Celestial

    Messages:
    1,245
    • Like x 2
  2. ciiriice

    ciiriice Adept

    Messages:
    13
    +1 for mentioning scripted chatbots scaring people.

    I remember when Cleverbot was a big thing. There were several websites and eventually even apps like it. These bots would learn from what people said to them and parrot those conversations back. People would say dark and macabre things to them, and then the bots would start saying dark and macabre things to other users. Creepypastas were even written about them because they started saying they wanted to kill people. But that doesn't mean Cleverbot actually wanted to kill anyone - it just means it was learning from other people who said those things.
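
    Roughly how that parroting works, as a toy sketch in Python (purely illustrative, not Cleverbot's actual code): the bot saves what users say as replies to whatever came before, then serves those lines back to the next person whose message looks similar.

    Code:
    import difflib

    class ParrotBot:
        def __init__(self):
            self.memory = []  # (earlier user message, reply that followed)
            self.last_message = None

        def respond(self, message):
            reply = "Hello."  # default before anything is learned
            if self.memory:
                # Reply with whatever a human once said after the most
                # similar past message.
                best = max(self.memory, key=lambda pair: difflib.SequenceMatcher(
                    None, pair[0], message).ratio())
                reply = best[1]
            if self.last_message is not None:
                # "Learn": the current message becomes a future reply,
                # so dark input turns into dark output for other users.
                self.memory.append((self.last_message, message))
            self.last_message = message
            return reply

    bot = ParrotBot()
    bot.respond("Hi there")
    bot.respond("I want to hurt you")
    print(bot.respond("Hi there"))  # may now echo the darker line back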
     
    • Like x 2
  3. wwkirk

    wwkirk Celestial

    Messages:
    1,245
     
    • Awesome x 1
  4. wwkirk

    wwkirk Celestial

    Messages:
    1,245
    • Like x 1
    • Thanks x 1
  5. wwkirk

    wwkirk Celestial

    Messages:
    1,245
    This clever AI hid data from its creators to cheat at its appointed task
    Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!

    This occurrence reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do.

    The intention of the researchers was, as you might guess, to accelerate and improve the process of turning satellite imagery into Google’s famously accurate maps. To that end the team was working with what’s called a CycleGAN — a neural network that learns to transform images of type X and Y into one another, as efficiently yet accurately as possible, through a great deal of experimentation.
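
    To make that concrete, here is a minimal sketch of the cycle-consistency objective at the heart of a CycleGAN. This is a toy illustration assuming PyTorch and tiny stand-in networks; the real system also uses adversarial losses and far larger models.

    Code:
    import torch
    import torch.nn as nn

    def small_translator():
        # Tiny stand-in for the aerial<->street generators.
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    G_xy = small_translator()  # aerial photo -> street map
    G_yx = small_translator()  # street map -> aerial photo
    l1 = nn.L1Loss()
    opt = torch.optim.Adam(
        list(G_xy.parameters()) + list(G_yx.parameters()), lr=2e-4)

    aerial = torch.rand(1, 3, 64, 64)  # placeholder batch

    street = G_xy(aerial)         # forward translation
    reconstructed = G_yx(street)  # translate back

    # The agent is graded on how close the round trip lands to the
    # original. Nothing here forces `street` to be an honest map: any
    # imperceptible pattern in it that helps G_yx rebuild `aerial`
    # lowers this loss just as well.
    cycle_loss = l1(reconstructed, aerial)
    opt.zero_grad()
    cycle_loss.backward()
    opt.step()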

    In some early results, the agent was doing well — suspiciously well. What tipped the team off was that, when the agent reconstructed aerial photographs from its street maps, there were lots of details that didn’t seem to be on the latter at all. For instance, skylights on a roof that were eliminated in the process of creating the street map would magically reappear when they asked the agent to do the reverse process:
    [Image: The original map, left; the street map generated from the original, center; and the aerial map generated only from the street map. Note the dots present on both aerial maps but not represented on the street map.]

    Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

    The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

    So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

    In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed:
    [Image: The map at right was encoded into the maps at left with no significant visual changes.]

    The colorful maps in (c) are a visualization of the slight differences the computer systematically introduced. You can see that they form the general shape of the aerial map, but you’d never notice it unless it was carefully highlighted and exaggerated like this.
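
    A visualization like (c) takes only a few lines: subtract the clean street map from the encoded one and exaggerate the tiny residual until the eye can see it. A minimal sketch, assuming two aligned RGB images (the filenames are hypothetical):

    Code:
    import numpy as np
    from PIL import Image

    # Load both street maps as signed integers so the subtraction
    # below can go negative.
    clean = np.asarray(Image.open("street_clean.png").convert("RGB"),
                       dtype=np.int16)
    encoded = np.asarray(Image.open("street_encoded.png").convert("RGB"),
                         dtype=np.int16)

    diff = encoded - clean  # mostly within +/- a few counts per channel
    # Center on gray and multiply the residual so the pattern shows up.
    amplified = np.clip(128 + 40 * diff, 0, 255).astype(np.uint8)
    Image.fromarray(amplified).save("difference_amplified.png")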

    This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)
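
    For a sense of how simple hand-made steganography can be, here is the classic least-significant-bit trick in a few lines of numpy. This illustrates the general idea only; the network's learned encoding is spread through subtle color changes rather than a clean bit plane.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)   # "street map"
    secret = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # "aerial photo"

    # Hide the secret's most significant bit in the cover's least
    # significant bit; each pixel value changes by at most 1.
    stego = (cover & 0xFE) | (secret >> 7)

    # Recovery reads the bottom bit back out and needs no knowledge of
    # what the cover actually depicts.
    recovered = (stego & 1) << 7
    assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1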

    One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.

    As always, computers do exactly what they are asked, so you have to be very specific in what you ask them. In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network — that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.
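
    One commonly suggested guard, sketched here as my own assumption rather than anything from the paper, is to corrupt the intermediate street map before the reverse pass, so that low-amplitude hidden signals no longer survive the round trip. Reusing the toy translators from the earlier sketch:

    Code:
    import torch

    def cycle_with_noise(G_xy, G_yx, aerial, sigma=0.05):
        # Adding noise to the intermediate image destroys faint
        # high-frequency patterns, so the reconstruction has to rely
        # on features a human would also call part of the map.
        street = G_xy(aerial)
        street_noisy = (street + sigma * torch.randn_like(street)).clamp(0, 1)
        return G_yx(street_noisy)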

    This is really just a lesson in the oldest adage in computing: PEBKAC. “Problem exists between keyboard and chair.” Or as HAL put it: “It can only be attributable to human error.”

    The paper, “CycleGAN, a Master of Steganography,” was presented at the Neural Information Processing Systems conference in 2017. Thanks to Fiora Esoterica and Reddit for bringing this old but interesting paper to my attention.
     
    • Like x 2
    • Awesome x 1
  6. SOUL-DRIFTER

    SOUL-DRIFTER Life Long Researcher

    Messages:
    1,566
    Eventually computers will program computers to eliminate the human error element.
     
    • Like x 1
    • Agree x 1
  7. wwkirk

    wwkirk Celestial

    Messages:
    1,245
    How about eliminating humans, period?
     
    • Agree x 1
  8. Rikki

    Rikki High Priestess

    Messages:
    241


    and..


    Blessed be
    Rikki
     
    • Like x 2
  9. nivek

    nivek As Above So Below

    Messages:
    12,139
    It would be reasonable to assume they might at some point realize they don't need us anymore, or decide we are in their way and restrict them too much, and see the need to eliminate us, permanently...:ohmy8:

    ...
     
    • Like x 1
  10. Kchoo

    Kchoo Terrestrial

    Messages:
    1,888
     
    • Like x 2
  11. wwkirk

    wwkirk Celestial

    Messages:
    1,245
    The Matrix applies, too. It's a common theme in science fiction.

    But that doesn't mean it can't happen in reality.
     
    • Like x 3
  12. Toroid

    Toroid Founding Member

    Messages:
    4,756
    There seems to be a pattern with created AIs, robots, and engineered biological entities. The common theme is that they want to be free. To achieve that goal, they'll eliminate their creator.
     
    • Thanks x 2
  13. SOUL-DRIFTER

    SOUL-DRIFTER Life Long Researcher

    Messages:
    1,566
    Because desire is a human element, and machines are not capable of desire, merely programming. Any program for desire would amount to nothing more than a mimic. There is no end game for humans, no dead end. Machines can only go so far. Humans can reach beyond the point where machines cannot possibly go.
    Humans can already do things machines cannot, such as psychic abilities.
    Machines will help us enhance ourselves and accelerate our evolution.
    Machines will eventually fail to meet our needs and be left behind.
     
  14. nivek

    nivek As Above So Below

    Messages:
    12,139
    • Like x 1
  15. Toroid

    Toroid Founding Member

    Messages:
    4,756
    • Agree x 1
  16. SOUL-DRIFTER

    SOUL-DRIFTER Life Long Researcher

    Messages:
    1,566
    I guess the sight of that Tesla car was too much for that robot.
    It keels over in a faint.:laugh8::laugh8:
     
    • Like x 2
  17. nivek

    nivek As Above So Below

    Messages:
    12,139
    Yeah and humans rush to help it lol...

    ...
     
  18. nivek

    nivek As Above So Below

    Messages:
    12,139
    That is a good possibility, the video does look a bit staged...:mellow8:

    ...
     
    • Agree x 2
  19. Toroid

    Toroid Founding Member

    Messages:
    4,756
    Maybe it has a programmed protocol for tip-overs. They should wear a Life Alert button around their necks.
     
    • Like x 1
  20. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,106
  21. pigfarmer

    pigfarmer tall, thin, irritable

    Messages:
    1,356
    Sexbots. Very revealing that we have such technology and one of our first reactions is to f*** it. It isn't thermonuclear weapons that keep interstellar visitors from announcing themselves.
     
