AI - Has technology evolved beyond our control?

Discussion in 'End Times & Conspiracies' started by nivek, Jun 16, 2018.

  1. nivek

    nivek As Above So Below

    Messages:
    13,781
    Rise of the Machines - Has technology evolved beyond our control?


    Across the sciences and society, in politics and education, in warfare and commerce, new technologies are not merely augmenting our abilities, they are actively shaping and directing them, for better and for worse.

    If we do not understand how complex technologies function then their potential is more easily captured by selfish elites and corporations. The results of this can be seen all around us.

    There is a causal relationship between the complex opacity of the systems we encounter every day and global issues of inequality, violence, populism and fundamentalism.

    Instead of a utopian future in which technological advancement casts a dazzling, emancipatory light on the world, we seem to be entering a new dark age characterized by ever more bizarre and unforeseen events.

    The Enlightenment ideal of distributing more information ever more widely has not led us to greater understanding and growing peace, but instead seems to be fostering social divisions, distrust, conspiracy theories and post-factual politics.

    To understand what is happening, it’s necessary to understand how our technologies have come to be, and how we have come to place so much faith in them.

    The Machines are Learning to Keep their Secrets

    Researchers at Google Brain set up three networks called Alice, Bob and Eve. Their task was to learn how to encrypt information. Alice and Bob both knew a number – a key, in cryptographic terms – that was unknown to Eve. Alice would perform some operation on a string of text, and then send it to Bob and Eve.

    If Bob could decode the message, Alice’s score increased; but if Eve could, Alice’s score decreased. Over thousands of iterations, Alice and Bob learned to communicate without Eve breaking their code: they developed a private form of encryption like that used in private emails today. But crucially, we don’t understand how this encryption works. Its operation is occluded by the deep layers of the network. What is hidden from Eve is also hidden from us.
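The scoring rule in that game can be sketched in a few lines (illustrative only: the real experiment trained three neural networks end-to-end, whereas here Alice is a fixed XOR cipher and all names and values are made up):

```python
# Toy sketch of the Alice/Bob/Eve game: Alice XORs a message with a
# shared key; Bob knows the key, Eve does not. Alice's score rises
# when Bob decodes correctly and falls when Eve does.
# (Illustrative only -- in the real setup all three parties are
# trainable networks and the "cipher" is learned, not hand-written.)

def encrypt(message: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte
    return bytes(m ^ k for m, k in zip(message, key))

def score_alice(message: bytes, bob_guess: bytes, eve_guess: bytes) -> int:
    score = 0
    if bob_guess == message:
        score += 1   # Bob decoded the message: Alice is rewarded
    if eve_guess == message:
        score -= 1   # Eve decoded it too: Alice is penalized
    return score

key = b"\x42\x13\x37\x99"   # the secret Alice and Bob share, hidden from Eve
msg = b"ok!!"
ciphertext = encrypt(msg, key)

bob_guess = encrypt(ciphertext, key)   # Bob reverses the XOR with the key
eve_guess = ciphertext                 # Eve has no key; raw bytes are her best guess

print(score_alice(msg, bob_guess, eve_guess))  # 1: Bob succeeds, Eve fails
```

In the trained version, the same reward signal pushes Alice and Bob toward a cipher no one designed, which is exactly why no one can read it afterwards.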

    Google Translate was known for its humorous errors, but in 2016, the system started using a neural network developed by Google Brain, and its abilities improved exponentially. Rather than simply cross-referencing heaps of texts, the network builds its own model of the world, and the result is not a set of two-dimensional connections between words, but a map of the entire territory. In this new architecture, words are encoded by their distance from one another in a mesh of meaning – a mesh only a computer could comprehend.

    While a human can draw a line between the words “tank” and “water” easily enough, it quickly becomes impossible to draw on a single map the lines between “tank” and “revolution”, between “water” and “liquidity”, and all of the emotions and inferences that cascade from those connections. The map is thus multidimensional, extending in more directions than the human mind can hold. As one Google engineer commented, when pursued by a journalist for an image of such a system: “I do not generally like trying to visualise thousand-dimensional vectors in three-dimensional space.” This is the unseeable space in which machine learning makes its meaning. Beyond that which we are incapable of visualizing is that which we are incapable of even understanding.
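The "distance in a mesh of meaning" idea can be sketched with made-up word vectors (real embeddings have hundreds or thousands of learned dimensions; the 3-d values below are invented purely for illustration):

```python
# Toy illustration of words as points in a vector space, where distance
# encodes relatedness. Real systems learn these coordinates from text;
# these 3-d vectors and their values are made up for the example.
import math

vectors = {
    "tank":       (0.9, 0.8, 0.1),
    "water":      (0.9, 0.1, 0.0),
    "revolution": (0.1, 0.9, 0.7),
    "liquidity":  (0.7, 0.1, 0.6),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each pairwise line is easy on its own; it is drawing ALL of the lines
# at once on a flat map that becomes impossible as dimensions grow.
print(cosine(vectors["tank"], vectors["water"]))
print(cosine(vectors["tank"], vectors["revolution"]))
```

Even in three dimensions the full set of pairwise distances cannot be drawn faithfully on paper, which is the engineer's point about thousand-dimensional vectors.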

    AlphaGO

    By the time the Google Brain–powered AlphaGo software took on the Korean professional Go player Lee Sedol in 2016, something had changed. In the second of five games, AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator. “I thought it was a mistake,” said another. Fan Hui, a seasoned Go player who had been the first professional to lose to the machine six months earlier, said: “It’s not a human move. I’ve never seen a human play this move.”

    AlphaGo went on to win the game, and the series. AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them.

    The question then becomes, what would a rogue algorithm or a flash crash look like in the wider reality?

    Would it look, for example, like Mirai, a piece of software that brought down large portions of the internet for several hours on 21 October 2016?

    When researchers dug into Mirai, they discovered it targets poorly secured internet-connected devices – from security cameras to digital video recorders – and turns them into an army of bots. In just a few weeks, Mirai infected half a million devices, and it needed just 10% of that capacity to cripple major networks for hours.

    Mirai, in fact, looks like nothing so much as Stuxnet, another virus discovered within the industrial control systems of hydroelectric plants and factory assembly lines in 2010. Stuxnet was a military-grade cyberweapon; when dissected, it was found to be aimed specifically at the Siemens control systems that ran uranium centrifuges, and designed to go off when it encountered a facility that possessed a particular number of such machines. That number corresponded with one particular facility: the Natanz nuclear facility in Iran. When activated, the program would quietly degrade crucial components of the centrifuges, causing them to break down and disrupt the Iranian enrichment programme. The attack was apparently partially successful, but the effect on other infected facilities is unknown.


    To this day, despite obvious suspicions, nobody knows where Stuxnet came from, or who made it.

    Nobody knows for certain who developed Mirai, either, or where its next iteration might come from, but it might be there, right now, breeding in the CCTV camera in your office, or the wifi-enabled kettle in the corner of your kitchen.

    How we understand and think of our place in the world, and our relation to one another and to machines, will ultimately decide where our technologies will take us.

     
    • Like Like x 4
  2. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    arghhh so many people worrying about A.I, never mind the fact that sentient A.I is impossible in my opinion
     
  3. nivek

    nivek As Above So Below

    Messages:
    13,781
    In your opinion...Besides, if you read the article you would understand the point being made, it's not about sentient AI...Machines are already doing things on their own, hiding things from us, and developing in ways we cannot understand, and they are not even sentient AI, they do not need to be...

    Machines do not need to be sentient to harm humanity...

    ...
     
    • Like Like x 1
  4. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    but wouldn't they be extremely vulnerable to EMP?
     
  5. nivek

    nivek As Above So Below

    Messages:
    13,781
    Yes but some of our technology is shielded from that type of attack...Not all but some is...

    ...
     
  6. nivek

    nivek As Above So Below

    Messages:
    13,781
    If it's software that can traverse the internet then we have a big problem...

    ...
     
  7. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    i still think it's very unlikely
    and if it ever happens, i will scream right away "false flag!" because that's what it obviously is
     
  8. nivek

    nivek As Above So Below

    Messages:
    13,781
    This has some serious implications, it's a significant step forward in machine learning...

    A machine has figured out Rubik’s Cube all by itself

    Unlike chess moves, changes to a Rubik’s Cube are hard to evaluate, which is why deep-learning machines haven’t been able to solve the puzzle on their own.

    Until now.

    Yet another bastion of human skill and intelligence has fallen to the onslaught of the machines. A new kind of deep-learning machine has taught itself to solve a Rubik’s Cube without any human assistance.

    The milestone is significant because the new approach tackles an important problem in computer science—how to solve complex problems when help is minimal.


    Enter Stephen McAleer and colleagues from the University of California, Irvine. These guys have pioneered a new kind of deep-learning technique, called “autodidactic iteration,” that can teach itself to solve a Rubik’s Cube with no human assistance. The trick that McAleer and co have mastered is to find a way for the machine to create its own system of rewards.

    Here’s how it works.

    Given an unsolved cube, the machine must decide whether a specific move is an improvement on the existing configuration. To do this, it must be able to evaluate the move.

    Autodidactic iteration does this by starting with the finished cube and working backwards to find a configuration that is similar to the proposed move. This process is not perfect, but deep learning helps the system figure out which moves are generally better than others.

    Having been trained, the network then uses a standard search tree to hunt for suggested moves for each configuration.
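The backwards-from-solved idea can be sketched on a toy puzzle, with a token on a 12-position ring standing in for the cube (the real system trains a deep network where this sketch uses a plain lookup table, and its names and numbers are invented for illustration):

```python
# Sketch of the "work backwards from solved" idea behind autodidactic
# iteration, on a toy puzzle: a token on a 12-position ring, solved at 0,
# with moves +1 / -1 (mod 12). The training signal comes from scrambling
# the solved state, so no human example solutions are needed.
import random

RING = 12
SOLVED = 0

def scramble(depth: int):
    """Random walk backwards from the solved state, recording each state."""
    state, states = SOLVED, []
    for _ in range(depth):
        state = (state + random.choice([1, -1])) % RING
        states.append(state)
    return states

# Build a crude value table: a state's value is the smallest scramble
# depth at which it was ever seen -- a proxy for distance-to-solved that
# the real system learns with a deep network instead of a table.
random.seed(0)
value = {SOLVED: 0}
for _ in range(2000):
    for depth, state in enumerate(scramble(8), start=1):
        value[state] = min(value.get(state, depth), depth)

def solve(state):
    """Greedy solver: always step to the neighbour the table rates best."""
    moves = 0
    while state != SOLVED and moves < RING:
        nbrs = [(state + d) % RING for d in (1, -1)]
        state = min(nbrs, key=lambda s: value.get(s, RING))
        moves += 1
    return moves

print(solve(6))  # 6 moves: the true distance from position 6 to solved
```

The real algorithm replaces the greedy step with a tree search guided by the learned values, but the self-generated training signal is the same trick.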

    The result is an algorithm that performs remarkably well. “Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves—less than or equal to solvers that employ human domain knowledge,” say McAleer and co.

    That’s interesting because it has implications for a variety of other tasks that deep learning has struggled with, including puzzles like Sokoban, games like Montezuma’s Revenge, and problems like prime number factorization.

    Indeed, McAleer and co have other goals in their sights: “We are working on extending this method to find approximate solutions to other combinatorial optimization problems such as prediction of protein tertiary structure.”

     
    • Like Like x 1
    • Awesome Awesome x 1
  9. Gambeir

    Gambeir Celestial

    Messages:
    1,027
    Ya know I have no idea how humans expect to not go extinct. No frigging idea at all.
    I was looking for a place to put this information but here is as good as any I guess.

    7 times technology almost ended the world.jpg
     
    • Like Like x 1
    • Awesome Awesome x 1
  10. Gambeir

    Gambeir Celestial

    Messages:
    1,027
    Umm... a possible warning though I'm quite sure he was talking about cell phones.

     
    • Like Like x 1
  11. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    wow this infographic is crazy!
    i have no idea how i didn't know about the june 1992 one!
    though i still think an A.I apocalypse is almost impossible
     
    • Agree Agree x 1
  12. nivek

    nivek As Above So Below

    Messages:
    13,781
    Yeah that is a scary one, if these occurred, who is to say there haven't been a few other near misses we don't know about...I try not to think too deeply into that topic, not a healthy use of the mind, it's a self-induced mind f#@%...lol

    ...
     
    • Agree Agree x 1
    • Awesome Awesome x 1
  13. David Grey

    David Grey Silence Speaks Volumes

    Messages:
    4
    • Like Like x 3
    • Thanks Thanks x 1
  14. nivek

    nivek As Above So Below

    Messages:
    13,781
    Hello and welcome to AE...

    You're fine with the links, no worries...

    ...
     
    • Like Like x 1
    • Thanks Thanks x 1
  15. David Grey

    David Grey Silence Speaks Volumes

    Messages:
    4
    Thank you :)
     
    • Thanks Thanks x 1
  16. Gambeir

    Gambeir Celestial

    Messages:
    1,027
    Ya, welcome aboard~!

    I was just looking at some stuff about remote viewing of the supposed Bell Witch haunting.
    Bell Witch - Wikipedia

    There are many stories about the Bell witch but I think these remote viewing ones are interesting because they aren't describing a spirit so much as something like an AI. Now considering how close we have come to destroying the planet with inexplicable turns of events, I find this idea that there's more than meets the eye with AI especially worth noticing. I've included a couple of extracts but check out the full story at the link. I haven't seen anyone else do any writing along these lines when it comes to investigating historical events like these.


    When the Poltergeist finds its voice.
    By Tim R. Swartz
    A poltergeist distinguishes itself from traditional ghosts and hauntings. Could a poltergeist be something entirely different? When The Poltergeist Finds Its Voice

    A couple of amusing out takes from the link above.

    The Shawville Poltergeist;
    "When a poltergeist does find its voice it seems to take great delight in spinning wild tales of its identity and origin. It may at one time say it is the ghost of someone who died years before, only to change its tune later and profess to be the devil or a demon. Like the Bell Witch, the Shawville poltergeist (also known as the Dagg poltergeist), enjoyed entertaining visitors by telling obscene stories and conversely, singing hymns in an “angelic voice.”

    "Both the Bell Witch and the Shawville poltergeist exhibit almost identical personality traits. Both were fond of using obscene language and taking on the roles of different characters. Both entities were never shy about talking for hours in front of multiple witnesses. In fact, they seemed to thrive on the attention. They also claimed the ability to travel instantaneously to far off locations, bringing back information that could be verified later."

    Could The Poltergeist Be An Artificial Intelligence?

    "~four professional remote viewers that have set out to share their project findings regarding socially significant, anomalous target sets.”

    "Jeff Coley writes that the team’s result of their remote viewing attempt came up with the concept of “Something contained, or restrained inside an enclosure. Often this container was sketched and described to be like a bottle, while at other times as a box of some kind, which acted as an enclosure or a tomb. One viewer’s session described this object as an ossuary, similar to what a collector of antique relics might possess within their private collection. Other sessions described what looked suspiciously similar to the idea of a Genie bottle.”

    "According to Coley something had been contained inside a bottle or box. The viewers described it as a phantom, and intelligence and a thought form. The remote viewing work describes the purpose of this thing as having to do with amusement, recreation, performance, and the idea of sending a message. The viewers also described that the phenomenon was associated with something destructive in nature. One viewer notes that it is like a parasite or a time-bomb that somehow escaped or was accidentally released."
     
    Last edited: Jun 20, 2018
  17. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    hello!
    also i am pretty sure that has already been tried
     
  18. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    that sounds less like poltergeists and more like john keel's multispectral entities, lol!
     
  19. humanoidlord

    humanoidlord ce3 researcher

    Messages:
    4,297
    huh go figure, i am reading "operation trojan horse" and keel mentions the bell witch case in his book!
     
  20. Black Angus

    Black Angus Honorable

    Messages:
    292
    Actually, synthetic intellects are less vulnerable to environmental dangers than biological ones.

    The memory systems use RAID and parity checking:

    RAID - Wikipedia
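The parity checking mentioned here can be sketched as the XOR scheme RAID 5 uses (a minimal illustration of the principle, not an actual RAID implementation; the block contents are made up):

```python
# Sketch of the XOR parity behind RAID 5: the parity block is the XOR of
# all the data blocks, so any single lost block can be rebuilt by XORing
# the surviving blocks with the parity. (Illustrative only.)

def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

data = [b"disk", b"one!", b"two!"]   # three equal-sized data "drives"
parity = xor_blocks(data)            # stored on a fourth drive

# Drive 1 fails: rebuild its block from the survivors plus parity,
# because XORing a block with itself cancels it out.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)  # b'one!'
```

One parity drive therefore survives any single-drive loss, which is the sense in which the memory is hardened against environmental damage.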

    Combine RAID with the internets and they are far more secure than biological memory.

    In fact we are working on brain prosthesis devices to do this for ourselves.

    The Neuroscientist Who's Building a Better Memory for Humans

    Last month, researchers created an electronic link between the brains of two rats separated by thousands of miles. This was just another reminder that technology will one day make us telepaths. But how far will this transformation go? And how long will it take before humans evolve into a fully-fledged hive mind? We spoke to the experts to find out.
    I spoke to three different experts, all of whom have given this subject considerable thought: Kevin Warwick, a British scientist and professor of cybernetics at the University of Reading; Ramez Naam, an American futurist and author of NEXUS (a scifi novel addressing this topic); and Anders Sandberg, a Swedish neuroscientist from the Future of Humanity Institute at the University of Oxford.
    They all told me that the possibility of a telepathic noosphere is very real — and it’s closer to reality than we might think. And not surprisingly, this would change the very fabric of the human condition.

    https://io9.gizmodo.com/how-much-longer-until-humanity-becomes-a-hive-mind-453848055?IR=T
     
    • Thanks Thanks x 2
    • Like Like x 1
