[ cyb / tech / λ / layer ] [ zzz / drg / lit / diy / art ] [ w / rpg / r ] [ q ] [ / ] [ popular / ???? / rules / radio / $$ / news ] [ volafile / uboa / sushi / LainTV / lewd ]

cyb - cyberpunk

“There will come a time when it isn't "They're spying on me through my phone" anymore. Eventually, it will be "My phone is spying on me."”

File: 1446164312996.jpg (74.3 KB, 640x427, machine.jpg)

 No.18458

A brain performs more operations than a current machine can compute.

A brain also does a lot of stuff other than developing abstract thoughts (controlling vital functions, handling sensory data, etc).

Maybe a simplified version of a brain capable of having thoughts could actually be simulated in a current computer.

Maybe if the meaning of "consciousness" is formalized, an even simpler implementation can be made, which would be equivalent to a brain but far more efficient.

Maybe current machines can't simulate a brain in real time, but they actually can simulate it in slow motion (1 second of thought per week of computation).

Maybe you can map the evolution of another non-linear system into a thought (the motion of water in a very specific container, the evolution of the large scale structure of the universe, the phase of photons coming from a wall).

AI is the god we are looking for. It is the final answer.

Share your ideas. Maybe try something. Who says we can't get somewhere? Most of the potential in ourselves and in the things around us goes completely unexplored.

 No.18469

A mystic Indian man once said "be the change you want to see in the world".

As for the discussion piece: what exactly is it in the brain you are trying to recreate? Logic, consciousness, or maybe you think they are dependent on each other? And what creates consciousness? Is it complexity, is it structure, is there something we aren't considering?

Lots of questions need to be answered. The fact is, though, that in many of these situations the current machines WE have could not simulate anything. Thankfully, groups with the processing power are trying, the Human Brain Project for example. If it ends up producing a conscious mind, then it's clear it's just a matter of complexity and structure.


 No.18470

>>18469
for my own thoughts about what consciousness is:

I think consciousness is metacognition, the ability to think about one's own way of thinking. I also think it's a spectrum: some think, some think about the way they think, some can even think about the way they think about their thinking. Going beyond that you would have some kind of übermensch figure, I'd imagine.

If you made a computer with all the universal laws we know today and all the information, all the rules of logic coded in, what would it be missing? Why wouldn't it be conscious? I'd imagine it likely wouldn't be able to solve completely new problems or improve on itself. In order to improve on yourself, you need to be able to recognize problems with the way you think about things or take certain tasks on. Again, this is metacognition. I think this is where many chatbots that supposedly can "learn" fail. All they are doing is taking in new information; there is little change in the way this information is applied.

What do we use to assess our own thoughts, though? Sometimes it's like a kind of check-and-balance deal. We can use certain ways of thinking to analyze our other ways of thinking, for example, logic to put reason to our emotions. We can even apply logic to logic, at which point it tends to fall apart. Is it possible there's something better than logic, though?

And possibly the best question is HOW something gains the ability to self-assess. I have no ideas or guesses for this one, and sadly it's the most important question.


 No.18474

Aren't neural networks an attempt to simulate the brain? I am a total noob so I could be completely wrong.

Bringing consciousness into it, nobody has any idea what defines something as being conscious, see https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


 No.18482

>>18474
Current neural networks are toys compared to a brain. The typical sort of problem they solve is predicting data from a set of training data points. For example, you might have a table of apartment prices in a given city. This table contains the following numbers:

>Area of the apartment
>Location (e.g. latitude, longitude)
>Number of bedrooms
>Total number of floors in the building
>The floor the apartment is on
>Price

Based on this table, you can train your neural network to predict the price based on the first 5 columns (there are standard algorithms for this). Then it can be used to predict the price for an apartment for which you *don't know* the price.
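As a concrete sketch of that kind of training, here is the simplest possible version: a linear least-squares fit with NumPy. Every number below is invented for illustration, and a real price model would use an actual neural network and far more data.

```python
import numpy as np

# Toy training table: [area_m2, latitude, longitude, bedrooms,
# floors_in_building, floor]. All values are made up.
X = np.array([
    [ 50, 40.1, -3.7, 1, 10,  3],
    [ 80, 40.2, -3.6, 2, 12,  7],
    [120, 40.0, -3.8, 3,  8,  2],
    [ 65, 40.3, -3.5, 2, 15, 10],
    [ 95, 40.1, -3.6, 3,  6,  1],
], dtype=float)
y = np.array([150_000, 240_000, 330_000, 200_000, 270_000], dtype=float)

# Fit a linear model by least squares: the degenerate case of a
# neural network with no hidden layer and no nonlinearity.
A = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the price of an apartment whose price we *don't* know.
new_flat = np.array([70, 40.2, -3.7, 2, 9, 4, 1.0])  # trailing 1 = bias
predicted_price = new_flat @ w
```

The point stands either way: this is curve fitting over a table, and nothing in the procedure changes in kind when you swap the linear model for a deep network, which is why extrapolating from it to self-awareness is a leap.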

OKCupid uses this stuff to create better matches for users, cosmologists use it to estimate how far away a galaxy is based on its colors, insurance companies use it to find which clients are likely to be involved in fraud, etc.

It's wishful thinking to assume that this can be naturally extended to self-aware systems just by using better computers.


 No.18484

Here is a related WIRED article. It's essentially about attempting to empirically measure consciousness. http://www.wired.co.uk/news/archive/2015-10/07/how-much-consciousness-octopus-iphone


 No.18487

>>18474
>>18482
A good, half-finished guide to understanding neural networks by writing code for them.
https://karpathy.github.io/neuralnets/
>But I hate JavaScript!
Then rewrite it in your favorite language.


 No.18529

>>18470
For your definition of consciousness to work, you would need a formal definition of "I" and "self."


 No.19985

>>18458
>Share your ideas. Maybe try something.
I think what method of AI to start out with would depend on the purpose of the AI system we want to run. Even if it's general AI, considering what kind of thing we'd want to do with it first at least gives us something more specific to work towards.

Personally, I'm interested in a nice AI buddy who can control a bunch of NPCs or robot toys for the purpose of acting as an AI GM, and particularly for LARPing with small children who would otherwise fight over who has to be the villain. I guess for that I'd want to start out with a system for automated creative writing?

>AI is the god we are looking for. It is the final answer.

Eh, this could be neat too.
I'm guessing for an artificial power of deific status, the first thing I'd expect is a capacity to pass moral judgements. Imparting moral lessons is a stated use of most religions I know of, so it's what I think people would expect of an AI god. Fortunately, machine ethics is already a field:
https://en.wikipedia.org/wiki/Machine_ethics

Aside from that, I'm guessing other priorities of a machine overlord might include general strategy for managing its followers, resource management for handling church funds, information gathering and validation for actually knowing what it's working with, and negotiation/persuasion techniques to attract and direct followers. What to implement first is really a matter of what we want to take out of the hands of the human priesthood sooner.


 No.19986

>>18458
>Maybe if the meaning of "consciousness" is formalized, an even simpler implementation can be made, which would be equivalent to a brain but far more efficient.
This article describes a theory of consciousness that claims it is rather simple:
http://www.sciencedaily.com/releases/2015/06/150623141911.htm
I think it underestimates the influence a translator can have, but still has some sense to it.

It actually reminds me a bit of the idea I had of hooking up a bunch of expert systems and chatbots into something like an automated Internet Oracle that passes questions and answers among its components until one eventually sends a message out of the system, except not made to be as ridiculous as possible on purpose.


 No.19989

I'm so bored of glorified chatbots being held up as AI revolutions at this point. AI is stuck in the 60s, throwing more processing power at the problem as if consciousness will just "happen" when you get a large-enough network.


 No.19991

My plan for life is to go into AI research, but right now I'm still a pleb who doesn't know any theory, so I'll wait till I get that PhD.


 No.20013

General AI is a software problem now. We've had the hardware to do it for years. The reason AI researchers want huge supercomputers is that nobody knows what the hell they're doing so we try to brute force it, i.e. throw soykaf at the wall to see what sticks.

Consider an emulator like Dolphin. You could emulate a Wii by interpreting each instruction and performing the appropriate action on the various bits of memory representing different parts of the chipset, while also simulating each of the special-purpose chips and how they interact with each other. This is not how Dolphin works, because it would run like soykaf. It uses high-level emulation and dynamic recompilation in order to use the native CPU and GPU efficiently.
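For anyone who hasn't written an emulator, here's the naive interpretation approach in miniature: a fetch/decode/dispatch loop over a toy stack machine. The instruction set is invented; a real console CPU is enormously messier.

```python
# A toy stack-machine interpreter: fetch, decode, dispatch, repeat.
# The opcodes are made up purely for illustration.
def interpret(program):
    stack, pc = [], 0
    while True:
        op, arg = program[pc]   # fetch
        pc += 1
        if op == "PUSH":        # decode + dispatch
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "HALT":
            return stack.pop()

# Compute (2 + 3) * 4
program = [
    ("PUSH", 2), ("PUSH", 3), ("ADD", None),
    ("PUSH", 4), ("MUL", None), ("HALT", None),
]
result = interpret(program)  # -> 20
```

A dynamic recompiler avoids this per-instruction dispatch overhead by translating blocks of guest code into native host code once and then jumping straight into them, which is a big part of why Dolphin is usable at all.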

You could probably get general intelligence with a large enough neural network or even more accurate simulation, but it's just not feasible. We need to reverse engineer the human brain in order to build something that behaves similarly but is implemented in a way that works much more efficiently with our hardware.


 No.20017

Once we get enough computing power, we can just simulate the brain wholesale as a black box.

Until then, understanding it is our best bet.
Which realistically won't happen anytime soon.


 No.20026

>>20013
>>20017
Yes, we're very much still in a phase of biological mimicry. I think we will learn a lot about ourselves along the way, which is exciting.

>> 18470

To me this is the most interesting line of thought. Humans haven't even made a dent in the big philosophical questions yet. At the end of the day everything done on a digital computer is deterministic for a given input.

If the digital brain projects succeed, what does that say about human life and free will? I'm sure it would scare the soykaf out of many people, but I would be so pumped because humanity would be peeling back even more covers on how the universe works.

Of course the most grueling path is if the digital brains never work. It wouldn't really prove anything (except our own ineptitude), and it would leave massive questions about if our current models of computing are even powerful enough for this sort of thing.


 No.20027

>>20026
>>18470
Fixed comment link because I suck at imageboard.


 No.20028

>>20017
>>20026
>what does that say about human life and free will?
Absolutely nothing, that's what.

Determinism is quite silly, because it is about as provable as the existence of God, especially once you start thinking about these things on a metaphysical level.


 No.20031

>>20028
>Determinism is quite silly
True, while determinism intuitively seems like an important quality, I think the discovery of undecidable problems illustrates that it doesn't have the consequences we would expect. That is, deterministic processes can result in arbitrary complexity and aren't necessarily predictable.


 No.20032

>>20031
The main problem with determinism is that, at the end of the day, it doesn't mean anything. It's like saying A is A when you don't even know whether A really is A or it's just how it looks to you.

At best it's information that doesn't hold any value. At worst it's a delusion.


 No.20037

>>19989
>as if consciousness will just "happen" when you get a large-enough network.
Most interesting comment in the whole thread.

You are right, it is perfectly clear that a machine that performs operations 10 times faster does not have 10 times more "consciousness" than its predecessor. However, it is arguable that a machine which performs 10 times more TYPES of operations IS, in fact, 10 times more "conscious". This is because the machine would have access to 10 times as much information, and not just be 10 times faster at getting the same information as before.

It's not networks of power that are important, it's networks of ABSTRACTION. Anybody with experience in computer engineering understands that adding more adders does not make the computer any "smarter", only faster. Likewise, any software engineer could tell you that duplicating functions all over the place does not make the program more capable, only more needlessly confusing.

>>20032
I completely disagree. Determinism is extremely valuable. I don't mean "small" or "hard" determinism as in prophecy of quantum and atomic events, but "large" or "soft" determinism as in, if you cup water in your hands it will stay there, or if you then open your hands the water will fall out, or if you want your body to move, it will.

Determinism is the only reason evolution of ANYTHING happens at all. Belief in evolution IMPLIES belief in the deterministic nature of evolutionary pitfalls and rewards. If food didn't always nourish, then life would have had a much rougher time evolving "competently". If 2 + 2 were not always 4, computers would never have been invented or "evolved" to their current state.

It doesn't matter if an A is written in shorthand or cursive or in Times New Roman Sans Serif. It doesn't matter if the A is written in pencil, pen, or by expertly burning huge lines of forest. Any human (or intelligence) who has been taught to recognize the symbol 'A' will immediately recognize it anywhere, and to ask "what if that's not an A?" is quite literally insane.


 No.20041

>>18458
>Maybe a simplified version of a brain capable of having thoughts could actually be simulated in a current computer.
You could argue that this has already been done: http://www.nengo.ca/
Granted, I'm not too big of a fan of how they perform 'neural' computations using the vector space architecture. It's a method that's more closely tied to motor output than abstract thought, but it's something.

To an extent, you could also say that deep neural networks are a simplified version of the brain having thoughts. But they cover only a fraction of the processes that a brain actually performs, in this case pattern classification (and only of trained patterns). You need more architecting if you want to get a system to perform something like pattern recognition of untrained patterns by mapping them onto ones that are known.

That leads us to a more fundamental question, that of representation. How the information about the world is represented will determine how that information may be processed, how it may be transmitted through some internal mapping mechanism for patterns, or something more external, like the words I'm typing.

>A brain makes more operations than a current machine can compute.

The main reason for this is the lack of a communication infrastructure in current machines. Whereas in the brain neurons are connected to on the order of thousands of other neurons, supercomputers are at best on the order of tens, perhaps hundreds. The organization of memory relative to compute is another limit, having to shuffle the memory around more or less sequentially. In the brain, the hardware is massively parallel in its construction. Which, by the way, hints at the problem of representation again. How do components in a massively parallel system such as the brain, which operate under mostly local influences, give rise to structures that are at a 'higher' level of thought? But more than just representation, this is a problem of coordination. How do all the components work together? I'm not entirely sure how this'll be answered yet, but I'm fairly confident it'll be heavy into the realm of graph theory.


 No.20042

>>18470
>the best question is HOW something gains the ability to self assess.
I'll throw my rough guess at this and maybe something will stick for you.

So, once you get past the level of simply processing the external world as an input/output device (kind of like deep neural nets these days), something you will get is an animal that is capable of reasoning about the world, but more importantly, creating an internal model of it. By creating an internal model, a person is able to play around inside their heads about something that's out there in the world, and subsequently make decisions based off of their experimentation.

A lot of animals don't really need this to survive, so they never had a need to develop it fully, but in the case of humans... well, people are incredibly social animals by comparison and survive primarily by forming communities where resources are shared. Pack animals as well: a large component of what they see in their external world is other animals like themselves. Predicting what others will do is important to a successful hunt in this case; signals such as body language and direction of movement will tell you where to and not to go.

Anyway, once you're able to reason about the external world and then reason about entities in the external world that are similar to you, it's not that far of a leap to then reason about yourself, as there are also things that you do that can be seen by your senses. And predicting what will happen if you (yourself) do something is also important. There's actually a period in development, when a child is around two years old, where they begin to recognize that it's them in the mirror. Before that, you could say there was no solid sense of self.

This brings us back to representation: if your internal model is something that you can 'sense', then you've got it made, because then you can have an internal model of your internal models.


 No.20043

>>19991
Best of luck! Here's a tip: learn everything. Also, don't wait until you get a PhD, it'll be too late, start learning bits and pieces of the puzzle now. And be flexible to new information.

>>20013
>You could probably get general intelligence with a large enough neural network or even more accurate simulation, but it's just not feasible.
Scale isn't the problem here. The architecture of the network is: what connects to what. That's the main reason why I think we need to reverse engineer the brain, so that we actually understand what we're putting on the computer.

>>20026
>Of course the most grueling path is if the digital brains never work.
There's nothing physically preventing them from working, we just have to keep pushing in that direction is all.


 No.20045

>>18458
>A brain makes more operations than a current machine can compute.
The "operations" the brain makes are not directly comparable to that of a computer. People like to look at numbers and say "oh the brain has 100 billion neurons while our biggest IC has 20 billion transistors, that means we're 1/5 of the way there!" when in reality they're not really comparable at all.
>>20043
>Scale isn't the problem here. The architecture of the network is, what connects to what.
If you have enough compute power you can just try every possibility and eventually get something that works. That's more or less how evolution did it. We don't, which is why we have to be clever.


 No.20046

>>20045
For evolution, you need pressure for certain things to survive and others to die out. Simple combinatorics isn't going to cut it, and would probably take much more time than if you restarted the universe. That said, we also don't have the resources not to be clever.


 No.20047

>>20037
Exactly, determinism in the sense of f(x) always equals y is extremely valuable. All of our digital tech is a very deliberately crafted abstraction to eliminate uncertainty/noise from the physical world (see Claude Shannon's info theory work).

All of our digital computers are still analog machines under the hood because they physically exist, but we force them to represent and operate on these "perfect" abstractions of information (1 is a 1, 0 is a 0). In 20037's example, the idea of the letter A is the abstraction, but the various physical depictions of the letter are only the analog representations that transmit that idea between people.
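A tiny sketch of why that abstraction is so robust, with made-up signal levels: each digital stage thresholds the noisy analog value back to a clean 0 or 1, so noise below the margin never accumulates.

```python
import random

random.seed(1)  # deterministic for the example

bits = [1, 0, 1, 1, 0, 0, 1, 0] * 8  # a 64-bit message

# "Transmit" each bit as an analog level (0.0 or 1.0) plus a little
# Gaussian noise; the noise figure here is invented.
received = [b + random.gauss(0, 0.05) for b in bits]

# A digital stage doesn't propagate the noise: it thresholds it away.
restored = [1 if v > 0.5 else 0 for v in received]
```

With noise this small relative to the 0.5 threshold, flipping a bit would take a ten-sigma excursion, so the message survives intact; push the noise past the margin and the abstraction breaks down, which is what error-correcting codes exist to handle.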

Another way of framing it is that digital computers are completely driven by cause and effect, where the input into the computer is the "cause". If a brain can be fully simulated on a computer, does that mean that humans operate under this rigid cause and effect? If you subjected a human to the same scenario say 1000 times with perfect start conditions (I know I know), would the human react the same way every time? Or is there some extra intangible that allows the human to deviate?

Like other people in the thread have mentioned, maybe humans are just computers but with a dizzying level of complexity in our wiring. Or maybe there's something else to it. We don't know.


 No.20049

>>20047
>determinism in the sense of f(x) always equals y is extremely valuable
but determinism does not work that way
>is there some extra intangible that allows the human to deviate?
It's called "quantum mechanics" and is (at least according to most modern science) purely random. It doesn't generate consciousness or give us superiority over ordinary matter or any of that nonsense people like to cling to, it's just another input to every system that we can't control.
>computers are completely driven by cause and effect
Entropy is some crazy soykaf. One could argue that this distinction is meaningless. Cause is simply the event that happens when there is less entropy, but simultaneity is relative. Maybe I don't eat because I was hungry, but I'm hungry because I'm about to eat.

Your mistake is that you're giving attributes to deterministic processes that just aren't there. Deterministic does not mean predictable. Computers (in the abstract sense) can generate arbitrary and unpredictable complexity. For an example look into the "busy beaver" function. They are deterministic in the sense that running the same program for the same amount of time gives the same answer, but that does not make them weaker.
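To make "deterministic but not predictable" concrete, here is the known 2-state, 2-symbol busy beaver champion: four fixed rules whose behavior you basically have to run to discover. (For larger machines, the step counts outgrow every computable function.)

```python
# The 2-state, 2-symbol busy beaver champion machine: fully
# deterministic, yet its output can't be read off the rules directly.
RULES = {  # (state, symbol read) -> (symbol to write, move, next state)
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),  # H = halt
}

tape, head, state, steps = {}, 0, "A", 0
while state != "H":
    write, move, state = RULES[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    steps += 1

ones = sum(tape.values())  # this machine halts with 4 ones after 6 steps
```

Running the same program again gives exactly the same 6 steps and 4 ones every time; determinism buys you reproducibility, not foresight.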


 No.20055

>>20046
>For evolution, you need pressure for certain things to survive and others to die out.
Natural selection is still too slow. There's a reason hill climbing algorithms and genetic programming didn't cause robots to take over the world.
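For reference, hill climbing in miniature, on a toy one-peak fitness landscape (everything here is illustrative):

```python
import random

random.seed(0)  # deterministic for the example

def fitness(x):
    # A toy landscape with a single peak at x = 3.
    return -(x - 3) ** 2

x = 0.0
for _ in range(2000):
    candidate = x + random.uniform(-0.1, 0.1)  # mutate slightly
    if fitness(candidate) > fitness(x):        # keep only improvements
        x = candidate
# x has climbed close to the peak at 3
```

On this smooth landscape it homes in on the peak almost immediately. The catch is that a climber that only ever accepts improvements stalls on the first local peak of a rugged landscape, which is one reason these methods didn't scale to open-ended intelligence.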


 No.20056

>>20049
>It's called "quantum mechanics" and is (at least according to most modern science) purely random.
Yeah, but is the RNG deterministic?


 No.20057

>>20049
Largely agree. My writing gets pretty fast and loose. The calling out of predictability vs. reproducibility is subtle but pretty important.

>It's called "quantum mechanics" and is (at least according to most modern science) purely random.

Big point. Always the nasty wrench in the gears when theorizing about "perfect initial conditions".

I've always wondered how important quantum randomness is to the way things are. How much do quantum effects "trickle up" to the macro level where we live? Would sentient life be possible without it? No clue.


 No.20059

>>20057
I heard that there's apparently a structure in the brain that is responsible for random permutations in timing. Don't have a source on this though.


 No.20064

>>20057
>>20059
http://www.bbc.com/news/science-environment-21150047
>"There are definitely three areas that have turned out to be manifestly quantum," Dr Turin told the BBC. "These three things... have dispelled the idea that quantum mechanics had nothing to say about biology."
The article points to photosynthesis, bird navigation(their eyes), and scent detection. Pretty interesting.

The kneejerk reaction I had was to rhetorically ask "Does any one molecule of gas affect the whole?", but it seems that quantum effects may actually be selected for in nature.

>>20059
"Random permutations in timing" reminds me of one of the 10,000 year clocks. It uses a combination of reliable but inaccurate and accurate but unreliable timing methods, those being manual winding (whenever a person is around) and tracking the sun at zenith (whenever the sun says it's noon).

I seem to remember reading something similar about human internal clocks, that they relied on both daylight and their own tracking to keep time, but I don't have a source either. Very intriguing technique, regardless.


 No.20065

>>20064
>Does any one molecule of gas affect the whole?
To answer the rhetorical question: Not necessarily, unless the whole is constructed to, say, take advantage of the reactions caused by this particular molecule. It's a statistical problem. One molecule may mean nothing, but there's typically many more than just one molecule of a certain gas floating around, and catalysts help to increase the rate of reaction, even in trace amounts.

>quantum effects may actually be selected for in nature.

I don't see a reason why it shouldn't be. It's not like nature 'cares' whether or not we understand a phenomenon in order to use it for some evolutionary advantage.


 No.20067

>>20013
The problem with this is that the Wii and the PC Dolphin runs on are very similar. For every call to the chipset or the memory, Dolphin either emulates it or lets the system it's running on handle it.
The problem with this kind of approach for brain emulation is that brains are completely different. There aren't just 1's and 0's; there are lots of interconnected "brain gates" that resemble normal logic gates in no way. In order to be able to emulate brain activity instead of simulating it, you'd first need a processing unit that resembles brains a lot more than the current soykaf. I mean, normal CPUs are out of the question for any sort of emulation, and the projects that are working on making neural CPUs are not very advanced.
The resemblance needed between two CPUs is just not there between brains and computers. Emulation will not work at the current stage of CPU development, and simulating brain hardware is, though far more resource-intensive, far cheaper than developing a completely new sort of computer (for now at least).


 No.20077

>>20056
This is a good question I think we would all like to know the answer to, but in the end it doesn't matter and we can probably never know. A good enough PRNG is indistinguishable from true random.
If people want to say that the brain needs a high-quality RNG to work its magic, that doesn't really hurt the robots, as we already use quantum effects in computers for cryptography (see https://en.wikipedia.org/wiki/Comparison_of_hardware_random_number_generators).
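As a toy illustration of "indistinguishable": the crudest statistical test, bit frequency, can't tell Python's deterministic Mersenne Twister from the OS entropy pool. (A serious comparison would run a whole battery of tests, e.g. the NIST SP 800-22 suite, and a good PRNG passes those too.)

```python
import random
import secrets

N = 10_000

# Deterministic PRNG bits (Mersenne Twister, reproducible from the seed).
random.seed(42)
prng_ones = sum(random.getrandbits(1) for _ in range(N))

# OS-provided randomness (entropy-pool / hardware backed).
true_ones = sum(secrets.randbits(1) for _ in range(N))
```

Both counts land within ordinary sampling noise of N/2, so this test learns nothing about which source was deterministic.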
>>20046
Evolution is essentially minimax with alpha-beta pruning. There is some cleverness there but not all that much. I think we have surpassed evolution in that respect, so if we were able to simulate on the scale and timeline of evolution we could probably create something intelligent quicker.
>>20067
>the wii and the pc dolphin runs on are very similar
>brains are completely different
This is true, however
>you'd first need a processing unit that resembles brains a lot more than the current soykaf
I don't think this need be the case. AI doesn't have to work like us, it just has to act like us. Reverse engineering the brain can give us clues to how we can emulate it, but our implementation can be completely different. Of course if we want to "upload our consciousness" you're probably right, but not for creating an entirely new intelligence. Keep in mind the brain was brought to you by the same process that gave us the appendix and wisdom teeth. It's probably far from optimal.


 No.20078

>>20067
This reminds me of another article that I can't find the link to at the moment. The gist was that neurons in the brain perform more complicated operations on their input signals than originally thought, and the takeaway was that the analogy of neurons to single gates in a chip is probably too simplistic.


 No.20331

File: 1449116380119.jpg (24.64 KB, 266x400, astral.jpg)

>>18458
>Maybe if the meaning of "consciousness" is formalized, an even simpler implementation can be made, which would be equivalent to a brain but far more efficient.
I like Michio Kaku's definition of consciousness, myself:
https://www.youtube.com/watch?v=0GS2rxROcPo

>AI is the god we are looking for. It is the final answer.

Maybe, assuming there's such a thing as a final answer.


 No.20332

>>20331
Sorry, but that guy's definition is complete bullsoykaf. Consciousness is, by its nature, not quantifiable. He's doing the usual thing people do where they take one of the soft problems of consciousness (in this case, being able to react to external input) and equate it to the hard problem of consciousness (i.e. why do we actually experience things?). There is no evidence for a link between responses to stimuli and consciousness, and his quantification of consciousness is in fact nothing more than a quantification of the capability to respond to stimuli. Moreover, there is no experiment I could construct to disprove his theory that responses to stimuli and consciousness are linked, meaning it's scientifically worthless.

This is a well-explored problem. It's been well known for a long time that taking some other property of conscious beings and equating it to consciousness is scientifically incredibly unsound. Moreover, it's been known that there are no scientifically sound ways to study consciousness. One cannot use empiricism to study unmeasurable phenomena. I think there's just something about the problem and wanting to understand our own existence that draws people to it anyway.


 No.20412

>>18458
So if I'm understanding you correctly, as far as we can tell consciousness is irrevocably subjective?


