What is "Mind?"

MH2

Boulder climber
Andy Cairns
Jun 17, 2017 - 09:00am PT
Well!

Hubert Dreyfus says that he is not smart and does not understand computers but makes up for that by being lucky.

He appears to have a wide and deep knowledge of philosophy.

His contention that human intelligence can't be written down as a set of rules for a computer to refer to seems reasonable.

I like the distinction between intrinsic meaning (whatever that may mean) and created meaning, which is perhaps the only kind of meaning there is.

Dreyfus points out that one runs into difficulties when one keeps asking, "Why? Why? Why?" If you keep going down, down, down, looking for a solid basis to build everything from, you may not find it.


Go the other direction.


Another philosopher says:

"My favorite writers of all (on this as well as many other matters) are the Taoists such as Laotse, Liehtse, and Chuangtse. They give neither analogies nor any rational explanations whatsoever! In total defiance of all logic, they soar their merry way upward like birds in free flight."

Raymond Smullyan
The Zen of Life and Death




Hubert Dreyfus says:

"I don’t think my being has any center. I don’t see how you can be open without being empty. I don’t understand how to put together some human potential inner stuff with this openness. I don’t think you find – this is just Kierkegaardian, Nietzschian, Heideggarian stuff, there’s nothing to find inside yourself which is your special talent or destiny or whatever. When I think of how I hated the computer people and how I devoted my life to that for 20 years, what I followed was my sense of outrage."



Why did he hate the computer people?

They will continue to find new ways to do things whether the foundations are in place or not. There are probably things they will not be able to do, but we don't need to worry about those.


yanqui

climber
Balcarce, Argentina
Jun 17, 2017 - 09:35am PT
Did you read the interview, MH2? It's pretty long but I think it's full of little gems. If you can just cut Dreyfus a little slack over the way he seems to think he's won the AI debate and focus on what he has to say about how people find meaning in life, you might find it interesting.

PS (edit to add): I have no idea why he (Dreyfus) hated the AI people so much. Even though Dreyfus often pooh-poohs reflection, it seems to me some of his best moments come when he is, well, reflective. I believe his brother was actually working with the AI people. Later on, after his criticisms, he says in the interview that the AI people tried to block his career, but that came later and doesn't seem to explain why he was so angry and outraged with them in the first place.

My own philosophical training came in a strong rationalist tradition, and I could never quite figure out what the existentialists were talking about. My philosophy teacher (Steve Scott) kind of liked Kierkegaard's stuff and I read a little bit by him, but I can't imagine Scott touching Heidegger or Nietzsche with a 10-foot pole.
MikeL

Social climber
Southern Arizona
Jun 17, 2017 - 10:07am PT
Largo,

I like the phrase, “. . . give the mind something to chew on, like giving a dog a bone . . . .”


I’ve been reading through my notebooks of various books I’ve read and wanted to remember. It’s sort of like looking at old photographs for me. A few of the books concern emotions, because emotions have presented puzzles to cognitive scientists. In the past, they’ve generally ignored emotions or treated them as thoughts (concepts). What they tended to say historically was that people first make evaluations of their situations, and then they generate and experience the emotions associated with those evaluations. I see a snake, I evaluate the snake as dangerous, and then I get the fight-or-flight feeling coursing through my veins.

Some cognitive scientists, however, have proposed embodied cognitive theories, which argue that cognition arises from empirical inputs (sensations) that the body generates from its interactions with the environment. This applies even to the most abstract conceptualizations like trust, legitimacy, etc. The theories have been buttressed with empirical research. These researchers have extended what the James-Lange theory first recognized: when bodily sensations subside, so do emotions. Emotions are embodied. It is the body that first senses a situation and expresses the composite sensation as an emotion. Then the conscious, cognitive evaluations associated with that feeling arise into consciousness.

Emotions (otherwise known as “hot cognition”) have been conceptualized as modularities (fast processing systems) that respond to proprietary inputs and cannot be directly influenced by other (often conscious) processing systems. It’s a way of thinking by feeling (or feeling instead of thinking). Emotions are causal consequences of bodily changes. Analogically, the beep of a fuzzbuster does not describe what it represents. It simply represents the detection of police radar, and it is reliably caused by it and set up for that purpose. Emotions are embodied appraisals, triggered by what have been referred to as calibration files in long-term memory.

As an aside, emotions refer to specific things or situations, whereas moods may refer to things or situations quite generally. Sadness represents a particular loss, whereas depression seems to refer to an ongoing losing battle. Emotions tend to be responses to immediate situational challenges, whereas moods respond to enduring ones.

All emotions seem to function as motives. Motives seem to provide reasons for action—impelling people to act. Emotions are valence markers—commands to change or sustain internal states. People seem to regulate their internal states by regulating their behaviors, and motives / motivations seem to do that work. They specify action goals rather than inner-state goals. The valence of an emotion tells us to change how we are feeling, and motivation tells us to change how we are acting. (See Damasio’s works from the mid-90s here.)

We can imagine then that emotions are a kind of unconscious (and fast) early warning system that detects problems and alerts us to them. Unconsciously, they can prepare us for behavioral responses, they can initiate thinking processes, they can perhaps embody cultural values, and they may motivate moral conduct. Having an emotion is a way of perceiving one’s place in the world. Emotions are perceptions of the body’s preparation for action. They may also signal or represent the conclusions to our arguments. (There are some here on ST whose expressions tend to be particularly emotional.)

In most instances, the body registers perturbations before people have reflected on their situations (very often expressed facially, unbeknownst to them). Emotions can enter awareness before the cues that triggered the emotions can be consciously assessed.

There are three reasons to mention these ideas about emotions here in this thread: (i) emotions are surely a part of mind that we’ve not talked about here; (ii) Honnold’s ability to manage his fear on Freerider (and cognitive science’s interest in it); and (iii) the use of meditation as a means to recalibrate the calibration files in long-term memory that indicate when an emotion should arise. (This last point can also be formulated as: how can one learn to deal with aversions, and similarly with attractions?)

(i) Obvious.

(ii) It’s been claimed that some researchers have measured or observed Honnold’s brain in order to learn how he manages fear so well. Ascetics have supposedly been able to avoid pain and actually pursue discomfort in order to seek self-mastery. Dangerous situations may provide benefits that outweigh risk. Fear could well be a positive emotion through learning and experience. Extreme thrill-seeking behaviors can be explained in this way. Being indifferent to danger may result from an emotional combination of joy and fear as a thrill or exhilaration. (Of course a flow state is probably in evidence as well.)

(iii) We do not seem to be in control of our emotions. They just happen to us; they are irruptive. However, one may be able to recalibrate the calibration files of an emotion through the effects of cultural standards, through learning, and through experience. A calibration file is one that specifies what kind of information can serve as an emotional trigger. If people can listen to or observe their emotions over an extended period of time, they may be able to avoid being swayed by feelings (that they normally cannot see into). Meditation, over an extended period of time, can reprogram or deprogram “hot” emotional responses by recalibrating the calibration files of the most basic emotions in long-term memory.

Of course, all of the above presents interpretations, and I’ve been an invested member of a few of these discussions academically.

Be well.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Jun 17, 2017 - 11:56am PT
Dreyfus is an interesting study. His book, "What Computers Can't Do," spotted the fallacies under which Strong AI still struggles. Dreyfus listed four primary assumptions of AI research.

"The biological assumption is that the brain is analogous to computer hardware and the mind is analogous to computer software. The psychological assumption is that the mind works by performing discrete computations (in the form of algorithmic rules) on discrete representations or symbols.

"The plausibility of the psychological assumption rests on two others: the epistemological and ontological assumptions.

"The epistemological assumption is that all activity (either by animate or inanimate objects) can be formalised (mathematically) in the form of predictive rules or laws. The ontological assumption is that reality consists entirely of a set of mutually independent, atomic (indivisible) facts.

"It's because of the epistemological assumption that workers in the field argue that intelligence is the same as formal rule-following, and it's because of the ontological one that they argue that human knowledge consists entirely of internal representations of reality."

It's interesting to realize that these insights prefigured most of the philosophical debate that followed regarding AI and consciousness.

In my mind, the log jam occurred by way of two false assumptions arising from reckless conflation:

The first was the conflation of semantics (meaning and understanding) and syntax (rules). See Searle's Chinese Room thought experiment as a primer on this topic.

The second was the conflation of mechanisms with consciousness.

In the 1990s, smelling a fly in the ointment (or "a turd in the punch bowl," as it were), Chalmers started questioning the "mind as mechanism" credo, pointing out that computers can't do experience. That is, even the most robust machine has no experience of its own processing and is unaware that it is a machine; the semantic value we experience as humans is entirely lacking in computers et al.

Functionalists argued that the brain is a "syntactic engine" (a mechanism governed by predictive rules or laws). Chalmers said: Show me how. By what rules and laws does the brain mechanically source the experience all of us humans have while considering this question, or by simply being alive and experiencing (fill in the blank)? That is the essence of the Hard Problem.

Basically, the Hard Problem forces the hand of physicalists to demonstrate or at any rate to explain the how and why of their assumptions.

Their responses come in two basic flavors: first is to say there is no Hard Problem, but this basically requires them to deny or explain away the very consciousness they use to formulate their arguments.

The second flavor involves the reckless conflation of syntax and semantics, of epistemological and ontological properties, of objective and subjective, internal and external, of the mechanistic and the experiential - and here is where Dreyfus saw the writing on the wall, and where Searle has stuck people's faces into it for going on 40 years.

The false assumption of this second flavor is that, going back to Dreyfus, so long as you have properly formalised (mathematically) consciousness into predictive rules or laws, AI can build a conscious machine.

And so people quite naturally ask: If the psychological IS the mechanical, how is that achieved? Put differently, what is the relationship between the syntactic and semantic, the epistemological and ontological, the objective and subjective, internal and external, the mechanistic and the experiential? Strong AI folks are on the hook to answer this because they can't possibly build what they can't chart out in causal terms.

The typical answer also has two main flavors. The first is to simply say that when properly formulated, the semantic IS the syntactic, the objective IS the subjective. It is crucial to grasp that this reply is NOT saying that the mechanical creates or sources or undergirds ("biological substrate") the experiential/semantic, but rather that it IS the experiential/semantic. No difference. The same. The machine is inherently conscious. The only logical conclusion here is that the semantic, ontological, subjective, internal, and experiential are fundamental to the syntactic, objective, external, etc. Few making this argument would agree with that inevitable conclusion.

The second flavor is to say that the experiential etc. is mechanical blow back, a phenomenon that "emerges" from the biomechanical source.

The problem here is that this runs totally counter to the reductionism driving virtually all mechanistic thinking at the macro scale. A mechanism, beholden to predictive laws and rules, is never more than its composite parts.

What's more, a reference to parts alone tells us nothing about what consciousness, itself, IS. It doesn't answer nor yet demonstrate WHAT emerges. And if we admit that SOMETHING emerges, then we are left to define and describe the difference between WHAT emerged and its host or source. Emergence implies something more than the parts or building blocks. Reductionism says this isn't and can't be so. Yes, sugar is more than the Krebs cycle, but you get the idea.

That forces us to ultimately look at what mind, itself, actually is, which figuratively speaking is akin to studying music sans reference to the instruments said to produce it. The analogy is inexact because sound waves are external physical phenomena we can get hold of with our sense organs and our measuring devices, and consciousness is not.

To me, this all underscores the fact that when we ask, "What is mind," we are asking both subjective and objective questions, but while it is widely accepted to investigate the objective in purported "observer-independent" ways, probing consciousness itself remains, for many, an inquiry to try and explain away, dodge, and tap dance around.
paul roehl

Boulder climber
california
Jun 17, 2017 - 12:27pm PT
Great post: you mean to tell me those tech/science guys are assuming something. Well, that's a shock!
jgill

Boulder climber
The high prairie of southern Colorado
Jun 17, 2017 - 12:43pm PT
And so it's meditation vs neuroscience, when the most appropriate way forward is to combine the two. But without the discursive element this would be futile.

The "music sans instrument" analogy might not be right, for that could mean writing down notes, harmonies, pauses, etc. Almost algorithmic with a bit of mathematics.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Jun 17, 2017 - 02:01pm PT
And so it's meditation vs neuroscience, when the most appropriate way forward is to combine the two. But without the discursive element this would be futile.
-

In my opinion, probing mind as mind does not have to be meditation, which in practice is usually a highly formalized method of observing or being in mind directly. Simply observing sans any cultural trappings is fine, just as you would observe any other phenomenon, but postponing evaluations till the observational process has enough bandwidth to mean something. For most people, simply being in perception is impossible. They get shanghaied by observing content, usually thoughts and evaluations.

Second, there are more ways of knowing than discursive wrangling, and I'm not talking about imagining or "revealed wisdom," which is simply searching out an external source (God?) or cause for the knowing.

But this is more easily experienced than described because it does not involve knowing in terms of external objects or stuff.
MikeL

Social climber
Southern Arizona
Jun 17, 2017 - 03:00pm PT
Largo: Functionalists argued that the brain is a "syntactic engine" (a mechanism governed by predictive rules or laws).


“The brain” as Swiss Army knife?

Cogent post, John.

. . . there are more ways of knowing than discursive wrangling. . . .

Indeed there surely would seem to be. I reported another above as “hot cognition.” There are also myths / stories. There is also instinct. There is also the so-called magic consciousness of primitives (way back in the day of civilization). And then there would be kensho, satori, or more integrative means of knowing that subsume and integrate all of these ways of knowing. (But that leads us into other areas of conversation where words no longer provide much definitive traction.) Some have also argued that love is another way of knowing. Not all knowing comes through mental abstractions and empirical testing.
MH2

Boulder climber
Andy Cairns
Jun 17, 2017 - 03:31pm PT
Thanks, yanqui. I do like Dreyfus. I did read the long article.




edit:

Though I mention these considerable hurdles:

This is a blog for transformational thinking enthusiasts.

John P. Hanley, Jr. is a management consultant, coach and trainer with 15 years experience.


Alvin Toffler may have seen it coming, along with employment opportunities for climbing gym instructors.
Byran

climber
Half Dome Village
Jun 17, 2017 - 04:09pm PT
A mechanism, beholden to predictive laws and rules, is never more than its composite parts.
"Reductionists" don't make this claim, you're just taking shots at a strawman. The parts of an airplane are not individually capable of flight. The parts of your headlamp are not individually capable of producing light. Reductionists do not doubt these things. From wikipedia: "Reductionism also does not preclude the existence of what might be called emergent phenomena, but it does imply the ability to understand those phenomena completely in terms of the processes from which they are composed."

What's more, a reference to parts alone tells us nothing about what consciousness, itself, IS. It doesn't answer nor yet demonstrate WHAT emerges. And if we admit that SOMETHING emerges, then we are left to define and describe the difference between WHAT emerged and its host or source. Emergence implies something more than the parts or building blocks. Reductionism says this isn't and can't be so.

Yes, you need more than the building blocks. They also need to be "put together", arranged with respect to one another. The parts of an airplane do not fly if they are just thrown together in a scrap-heap. Other processes, reactions, or environmental factors are also necessary. A perfectly assembled lightbulb will not produce light unless electricity is passed through it. But if all factors and variables are accounted for, then emergent properties are consistent with reductionist theory.
WBraun

climber
Jun 17, 2017 - 04:52pm PT
You left out intelligence and a living entity.

Then .... without God the source of everything in the whole manifestation there would be no light period nor any living entity.

This is absolute fact and no material scientist can ever deny unless they're stoopid ......
MH2

Boulder climber
Andy Cairns
Jun 17, 2017 - 06:46pm PT
My ideas come from this guy, Sam Todes – the graduate student I liked so much when I was a graduate student.

Hubert Dreyfus, advising us to take our philosophy where we find it.





Robert and I both take our philosophy from three main sources.




http://en.wikipedia.org/wiki/Utah_Phillips


http://en.wikipedia.org/wiki/Danny_Finkleman


http://en.wikipedia.org/wiki/Richard_Proenneke



Personally, I give honorable mention to Ken Finkleman, brother of Danny.


Back to you, Hubert.


I wanted to go to MIT because I liked to make bombs when I was a kid. I figured I’d learn how to make better bombs at MIT.
MikeL

Social climber
Southern Arizona
Jun 17, 2017 - 07:39pm PT
Bryan,

Consciousness is a far cry from any airplane or light bulb in complexity, don’t you think?

Strawman?
Byran

climber
Half Dome Village
Jun 17, 2017 - 08:38pm PT
Bryan,

Consciousness is a far cry from any airplane or light bulb in complexity, don’t you think?

Strawman?

Are you suggesting that above a certain threshold of complexity, reductionism is no longer tenable? That a sufficiently complex system produces effects with no prior cause?
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Jun 17, 2017 - 08:58pm PT
What's more, a reference to parts alone tells us nothing about what consciousness, itself, IS. It doesn't answer nor yet demonstrate WHAT emerges. And if we admit that SOMETHING emerges, then we are left to define and describe the difference between WHAT emerged and its host or source. Emergence implies something more than the parts or building blocks. Reductionism says this isn't and can't be so.

Yes, you need more than the building blocks. They also need to be "put together", arranged with respect to one another. The parts of an airplane do not fly if they are just thrown together in a scrap-heap. Other processes, reactions, or environmental factors are also necessary. A perfectly assembled lightbulb will not produce light unless electricity is passed through it. But if all factors and variables are accounted for, then emergent properties are consistent with reductionist theory.

------


This is still off base when it comes to consciousness.

With an airplane, you are describing what the assembled parts DO, thanks to the other factors involved, as you mentioned. And when we ask, "What IS flight," a purely physical description suffices.

There is nothing beyond the physical phenomenon.

Applying the analogy to consciousness results in such howlers as: Consciousness is what the brain DOES. This tells us nothing about consciousness itself, while a physical description of the plane in motion tells us all there is to know about flight.

Simply put, flight has no semantic, inner, subjective, sentient, or experiential aspect to account for.

The reason many expect a physical account of brain function to totally "explain" consciousness is that the same logic works for a plane and for flight.

The plane/flight model also fosters false AI arguments to the effect that if you only have the parts of the brain ARRANGED correctly, consciousness will emerge. Even if this were true, it would only cover the causal aspects of consciousness, telling us nothing about a phenomenon (sentience) which is categorically different from a physical external object (brain). And the booby prize: emergence begets the Hard Problem - no escaping that.

Again, we can clearly see how, according to physical rules and predictive laws, a plane can take flight. Applying the relevant rules and laws to the brain goes nowhere in telling us how consciousness purportedly emerges. It just runs into our old friend, conflation.

If nothing else, this underscores some of the challenges of trying to use only physical rules and predictive laws (Dreyfus) to understand consciousness.

But your points are well taken.

Now if you want to go a few rounds with the "complexity argument," I'm in.

For the moment, there is no evidence whatsoever that complexity contributes to sentience. And if you're talking about external forces or objects, reductionism does well. Know, however, that attempts to posit a purely electrochemical causal explanation for consciousness favor global activation and feedback loops, the activity at this level not explainable by the function of parts at a lower level.

What I think you are really driving at is a physical causal model with no breaks in the chain.
jgill

Boulder climber
The high prairie of southern Colorado
Jun 17, 2017 - 09:35pm PT
The cellular automata programs I wrote begin with a few rules on combining adjacent values in the top row to produce offspring in the second row. Then the same thing happens to produce the third row. And on and on. By the time the 200th row rolls around a beautiful pattern begins to emerge. In theory one might attempt to follow the algorithm, step by step, but for all intents and purposes it can't be done. Thus a causative process produces an interesting result, but one cannot intellectually backtrack and "reduce" what has materialized.
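
A minimal sketch of such a program in Python, for anyone who wants to play along (two states, nearest neighbors; Wolfram's Rule 110 stands in here for whatever rules one might write):

def step(row, rule=110):
    # Each cell's offspring depends only on the three adjacent parent
    # cells above it: encode (left, center, right) as a 3-bit index and
    # look up the offspring in the rule's bit pattern.
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

width, generations = 79, 200
row = [0] * width
row[width // 2] = 1                  # a single live cell in the top row
for _ in range(generations):
    print(''.join('#' if c else ' ' for c in row))
    row = step(row)                  # and on and on, row after row

By the later rows the printout shows the kind of pattern I mean, even though no single rule application hints at it.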

If someone else wrote the original rules, not divulging them, I would be tempted to conclude that the results, like consciousness, arise from a physical setting but the "Hard Problem" is beyond me because I can't explicate each stage.

This is every bit as solid an approach to mind and consciousness as introspection into one's mind.
Byran

climber
Half Dome Village
Jun 17, 2017 - 10:54pm PT
For the moment, there is no evidence whatsoever that complexity contributes to sentience. And if you're talking about external forces or objects, reductionism does well. Know, however, that attempts to posit a purely electrochemical causal explanation for consciousness favor global activation and feedback loops, the activity at this level not explainable by the function of parts at a lower level.

Complexity may not create sentience, but it does probably constrain it. Your mind wouldn't be capable of visualizing something in trichromatic color if it weren't for the three types of cone cells in your eyes. The same goes for things like memory formation, emotions, sense of identity, etc. All depend on certain structures in the brain. We know these faculties are liable to be altered if the brain is damaged, so it stands to reason that a less complex brain which lacks these structures would be incapable of producing those mental processes.

As to what "level" sentience operates at, who knows? It could be that there are all kinds of different sentience at all different levels. There's no possible way to find out. Perhaps a pot of water "experiences" boiling, but such an experience surely must be so radically different from any human experience that I'd hesitate to call it "conscious". I suspect that any sort of complex or interesting sentience is constrained by two factors: information exchange and organization. The stars in our galaxy exchange large quantities of information (electromagnetic radiation), however there is no particular organization to the structure so it's hard to imagine how any sort of "galactic mind" could be capable of meaningful thoughts. Brains are, thanks to natural selection, highly organized so that the information exchanged actually means something. Computers are likewise organized and capable of huge volumes of information exchange, which is why AI is another likely candidate if we're looking for meaningful sentience in the universe.

What I think you are really driving at is a physical causal model with no breaks in the chain.

Yes, that's the dream. The "theory of everything". Too bad it's logically impossible. Any causal chain ultimately butts up against the problem of infinite regression. Inevitably some action must have occurred without cause, or else it was self-caused (an infinite loop of causation). Either possibility is absurd and totally antithetical to the theory of cause-and-effect, and no other alternative is even conceivable. However, purporting that the human mind is also an uncaused agent only multiplies the problem and there is no evidence to support such a claim.
healyje

Trad climber
Portland, Oregon
Jun 18, 2017 - 02:37am PT
Dreyfus is an interesting study. His book, "What Computers Can't Do,"...

Crikey, are we back to the computer thing? And back to the seriously deficient brain vs computer nonsense. Again, both are pointless and just way out of your league.

Epstein, with his "The empty brain" article, and for all the facts at his disposal, generally ends up way more wrong than right due to his sticking with and pushing the simplistic premise he is selling. He's also wide of the mark because he doesn't have a particularly deep understanding of computers, software, AI, or the brain for that matter.

He starts out with this:

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

Which pretty much rates a thudding 'duh' on the Richter scale of the obvious. He then takes a traipse down the memory lane of all the 'wrong' metaphors with which humans have attempted to account for intelligence, thereby setting the stage for his tacking the Information Processing (IP) metaphor onto the end of that 'failed' list (his basic premise):

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

And then he wraps up on the wrongness of our metaphors by going off on AI in general and Kurzweil who is a clever, if over-the-top, inventor and speculator with a lot of interesting and odd ideas but who is hardly representative of general thinking in the various fields under this rubric:

Ray Kurzweil’s book How to Create a Mind: The Secret of Human Thought Revealed (2013), exemplifies this perspective, speculating about the ‘algorithms’ of the brain, how the brain ‘processes data’, and even how it superficially resembles integrated circuits in its structure.

He then wanders through a bunch of examples which discredit the IP metaphor as if that was really necessary - there is simply no direct correlation between how computers and brains do what they do. Another 'duh'. Next he does a brief tour / survey of contemporary anti- and non-IP, or 'out-of-the-box' thinking on the subject:

The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

Epstein then goes on to dismiss the notion held by Kurzweil, Hawking and a host of neuroscientists that we will someday be able to 'download' our minds into computers. Here at least his dismissal is on target; it's a delusional proposition that's as ridiculous as a million-person Mars colony (but you kind of have to forgive Hawking for hoping it would be possible some day). It's an idea fundamentally at odds with what we do know about both brains and computers: you can't capture and download an active and evolving biological state machine. The only way to get a Hawking-in-a-box would be to reproduce all the starting conditions for a Hawking, exactly reproduce / model human growth and mis-growth, throw the switch and see if the computer can grow / evolve a Hawking from scratch that thinks just like him and do it in some amount of time faster than it takes to grow an actual Hawking. The problem with that is you could never model all of the environmental inputs and experiences of his life that produced a Hawking. Again, 'duh'.

Finally Epstein does start to drift closer to making a cogent argument for the fundamental problem with modeling brains / mind on a computer of any kind - the inherent complexity of the brain combined with the fact it is active, ever-changing and evolving. But even here he is somewhat wide of the mark due to having incomplete knowledge of numerous relevant fields and the fact he doesn't really 'get' the reasons for the validity in what he's saying even as he gets the gist of it basically right.

He gets it right on the modeling / complexity front: every aspect and level of the brain exhibits extreme complexity, deep hierarchy and a high level of distribution - but he doesn't seem to understand just how complex the brain is. To roughly break it down, those general brain 'levels' look more or less like this:



 Whole Organ:

As is obvious from this thread, it's difficult to make too many grand assertions about the organ as a whole other than we have one and it's doing something. But its complexity, energy budget and shielding / protections should, by themselves, say something significant about the enormous advantages it provided in our evolution. And those three characteristics alone are unusual enough to make the brain unique among all our various organs.

 Regions:

The brain is composed of a number of discrete sub-organs and regions and a number which are far less discrete. We can say a lot about these various sub-organs and regions, but the main thing to understand is they all operate both individually and in concert and coordination with one another in highly complex ways - sometimes hierarchically, sometimes in a distributed fashion and more often both.

 Circuits:

The brain's 'circuitry' or interconnections - its 'connectome' - comprises approximately 100 billion neurons, which are actually outnumbered by glia cells. We've focused our research over time on neuronal circuits, which we know quite a bit about, and less on glia circuits and integrations, so we know far less about them in terms of how they are organized, how they are interconnected with each other, neurons and other brain tissues, and what it is they 'do'. Neurons alone exhibit roughly 100 trillion synaptic connections; the number becomes ridiculous if you throw the glia connections into the mix.

 Cellular:

Neurons in turn come in three basic forms: sensory, motor and interneurons. Glia cells also come in three basic forms: astrocytes, oligodendrocytes and microglia. All these cells are highly 'active' in connections, signalling, chemistry and gene expression. And the average neuron has about 7k synapses. But it doesn't stop there - for instance, we know neurons and neuronal activity control gene expression in glia astrocytes to regulate their development and metabolism. That tidbit by itself shoots the complexity involved with cellular regulation in the brain through the roof.

 Molecular:

Epstein describes this signalling level well enough, saying there are more than 1k synaptic proteins. These proteins, though, are in constantly changing relationships with one another and with the neuron / glia cells making up the synaptic gaps and glia interconnects. The production and mix of these proteins is also highly volatile, changing rapidly over time.



Bottom line is that the brain is complex beyond our ability to ever accurately, let alone faithfully, model it. Just starting at the bottom of the stack, modeling the combination of synaptic protein state, synaptic activity, neuronal electrical state and on-going, active gene expression of even a cubic millimeter of brain tissue is not something that is ever going to happen. And even if you could, we have no idea what the collective 'state' of that cubic millimeter 'means' or what it contributes further up the stack in terms of circuitry and regional states / behaviors, let alone to the whole organ. The result being that all brain and connectome modeling being done or proposed are very weak simulations run on very coarse models / facsimiles compared to the real thing.
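
To put rough numbers on that cubic millimeter, here's a back-of-envelope sketch in Python using the counts cited above (the ~1,200 cm^3 total brain volume is my added ballpark assumption, everything else is from the figures already given):

neurons_total    = 1e11    # ~100 billion neurons (from above)
synapses_total   = 1e14    # ~100 trillion synaptic connections (from above)
brain_volume_mm3 = 1.2e6   # ~1,200 cm^3 in cubic millimeters (added assumption)

neurons_per_mm3  = neurons_total / brain_volume_mm3    # ~83,000
synapses_per_mm3 = synapses_total / brain_volume_mm3   # ~83 million

print(f"~{neurons_per_mm3:,.0f} neurons, ~{synapses_per_mm3:,.0f} synapses per mm^3")

So an average cubic millimeter holds tens of millions of synapses before you even add the per-synapse protein mixes, the glia and the on-going gene expression to the state you'd have to model.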

In fact, almost by definition, all the current brain research involving modeling and simulations on computers are forced to select an extremely small subset of what's going on at any given level of the brain to model. Those models - whether in hardware as 'neuron chips' or in software on supercomputers - are all way crude representations of some small aspect of what's really going on in a brain.

Again, all that should register a 'duh' somewhere along the way.

Where Epstein is also in the neighborhood of a mark is with the fact that the brain is always changing and that 'changing' and those 'changes' are what the brain 'does'. Where he misses the mark in his wholesale dismissal of the IP metaphor (and throws out the baby with the bathwater) is the fact that the brain does indeed 'input', 'process', 'store', and 'retrieve' information on which it produces 'outputs' - it just doesn't do it like a computer does, which is that same thudding 'duh' again.

In sum, humans make metaphors to help us make sense of and relate to the world around us. The fact that we make the best metaphors we can at any given time should not be denigrated or used as a vehicle to claim how ignorant we are for using them. Each of those failed metaphors in some way informed and helped refine the next up to and including the IP metaphor. And that metaphor still has a lot of currency in that it helps us relate something we don't know much about (the brain) to something we know intimately (computers) - i.e. there has been some utility in all those metaphors in terms of framing and organizing our thoughts and research over time and generations.

And circling back around to the brain's three highly unusual and costly characteristics: complexity, energy budget and shielding / protections, they alone should beg the question of "what's the point" of brains (all brains)? If it were for mere body regulation and control, navigating an environment, and basic reproduction / survival, then lesser engines would suffice as can be seen lower in the taxonomy of extant species. In the case of 'higher' creatures, including humans, I would argue those high costs are paid for consciousness and sentience, neither of which exist without a brain operating within a very specific expensive set of parameters.

So, do we have precisely the right metaphor? Do we understand exactly how brains and consciousness are related? F*#k no. But then it's a journey of evolving metaphors and understanding - which, unsurprisingly, is exactly what brains / minds do.
Dingus McGee

Social climber
Where Safety trumps Leaving No Trace
Jun 18, 2017 - 06:18am PT
MikeL,

Some cognitive scientists, however, have proposed embodied cognitive theories, which argue that cognition arises from empirical (sensations) that the body generates from its interactions with the environment. This applies even to the most abstract conceptualizations like trust, legitimacy, etc.. The theories have been buttressed with empirical research.

I fully agree the mind/brain is a flux generator of sensations that arise out of the body's network of sensing units. And the brain action never stops. The most abstract thoughts are rooted in the body. Einstein said he could feel the general relativity effect in his muscles when thinking of it.

If brain flux = zero, we are dead. Awareness arises at this root-level flux [sensations] but it is ever changing, as the brain is always feeding back into the body's nerve network [Damasio]. Awareness of nothing is a higher-up shell that closes off sensations -- to where there is little to no sensing.

And so after saying Consciousness is an Illusion, I now see that was the wrong wording. I will say there are many shells of the awareness experience and these mislead us as to what awareness could be. We have to strip away all these shells that our minds create to see how the sensation of awareness arises [such & such is happening in the present]. Philosophy of mind is a very high-level shell game that presupposes no work done [or necessary] to evaluate how the mind works.

And yes, Largo, awareness is not content but that ever-changing brain flux, which we can make statements about.

And of course we know the barber can always shave himself even after having made the statement "I shave all those that do not shave themselves." The word models we make have little to do with the way things really are.



MikeL

Social climber
Southern Arizona
Jun 18, 2017 - 06:55am PT
Bryan,

No, I wasn’t trying to say anything about thresholds.

I’m saying we’re way far off from fruitfully applying the analogy of the composition of a plane, a lightbulb, or a spoked wheel to consciousness. At least with the items you indicate, we know what the “parts” are.

We don’t know ANY of the parts of consciousness. Do you? Brains, yes; consciousness, no.

You also seem to assume that there are parts to begin with. Forget about how parts would go together.

Reductionism is a concept. Do you experience consciousness as a concept? Mental conceptualization has its place. This is not one of them, so far. You’d get more mileage out of simple observation than out of jumping to modeling approaches, IMO.

I think you’re making your ideas up as you go along. Try telling us what experience without the content is. We’d have something to chew on then. (The rest is imagination.)


Dingus: . . . the brain is always feeding back into the body's nerve network [Damasio].

Good morning.

Yes, it seems to present an ever-recursive feedback loop that tends to defy any final stable state that can be analyzed. That’s why I referred to mirrors facing mirrors, mises en abyme, a kind of infinity loop, etc. in my previous reply to your post. We seem to be sensemaking machines (à la healyje’s post above)—making sense of our sensemaking and of the processes of our sensemaking. It’s a darned rabbit hole that would have made Lewis Carroll smile.

Norbert Wiener (a mathematician and the founder of cybernetics) would find this all amusing and interesting as well, I suppose. Wiener designed tracking devices for WW II anti-aircraft guns (self-regulating systems). The self-reflective nature of consciousness seems to take it to another level, however.

Rather than zeroing in on final equilibria (as self-regulating systems are designed to do), we spin off into new realms of understanding that seem to present no end. Consciousness does not seem to present anything that looks like convergence, but rather divergence. You call it flux. It’s what would seem to make this thread never-ending, with no final outcomes.
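
If it helps to see the contrast Wiener worked with, here is a toy illustration in Python (mine alone, and only an analogy, not a model of consciousness): a negative-feedback loop zeroes in on its setpoint, while a loop that feeds its own output back amplified never settles.

def converging(x, setpoint=1.0, gain=0.5, steps=25):
    for _ in range(steps):
        x += gain * (setpoint - x)   # error-correcting, Wiener-style regulation
    return x

def diverging(x, gain=1.3, steps=25):
    for _ in range(steps):
        x *= gain                    # output fed straight back into input
    return x

print(converging(0.0))   # approaches 1.0: a final equilibrium
print(diverging(0.1))    # roughly 70 and climbing: no final state

Self-regulating systems behave like the first function; consciousness, as I read it, behaves more like the second.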

The very nature of consciousness appears to be its apparent irresolvability. One cannot just pin it down. But that’s what almost everyone is trying to do here.


EDIT: I should add for clarification that the irresolvability of consciousness is what makes it / us transcendent. Human beings seem to be multi-dimensional processes that can fulfill themselves by transcending themselves. This is what will frustrate the efforts of science to say what it is.
