What is "Mind?"

eeyonkee

Trad climber
Golden, CO
Nov 28, 2016 - 03:17pm PT
You've got to be kidding with the last post, Largo.
High Fructose Corn Spirit

Gym climber
Nov 28, 2016 - 03:25pm PT
What they mean to say, Silly Rabbit, is that the brain is an amazing perception generator (wiggle that finger in the corner of your eye!) and its outputs (sentience, qualia, consciousness) have illusory qualities re them.

Not much different from when Einstein famously called time an illusion... when it would have been less confusing and way more clear to say time has an "illusory quality" or "illusory component" when perceived by the brain.

...

"What is mind?"

So much talk about meditation, mindfulness and dreams on this thread; and so little talk about evolutionary psychology when the latter provides so much (more) on the "mental life" and its workings (eg, instincts, thoughts and feelings).

This is by and large a thread that plainly illustrates America's science illiteracy manifested in the ST climbers' camp.


Here's a thought: Instead of posting why not take one or two formal courses in Evolutionary Psychology - even online thru Stanford or Harvard or Chicago - just to see if they don't provide some fresh insight?

PS

Go-b and others... FYI... Facebook Science is not science. Breitbart Science is not science.
eeyonkee

Trad climber
Golden, CO
Nov 28, 2016 - 03:27pm PT
I think Ed should have more babies, some others fewer.

I don't do the Facebook. This is my only social media experience. While it's been fun in a lot of ways, it has made me less optimistic for the future. Not so much this thread per se, but the sum total of the political/religion/science threads. I used to believe that it would be more like Star Trek.
High Fructose Corn Spirit

Gym climber
Nov 28, 2016 - 03:37pm PT
Yeah, me too.

You should check out (1) the latest Harris podcast featuring Stuart Russell, re AI and consciousness, and also (2) Humans (the UK show, available on your favorite peer-to-peer utility).

I think Humans is even better than Westworld for illustrating some practical everyday relations between humans and cyborgs (synths) insofar as AI and cybernetics were ever to take off. Regarding economics (eg, unemployment), politics, morality issues, meaning, purpose and value... those thorny things.

I'd give a million dollars to know how all this civilization unfolds over the next 100 and 1000 years. Sheesh!!

I'm going to hate to leave the Party when my time comes.

...



Technology continuously decreases the need for capital & labor which will concentrate wealth & power. -Nicholas Berggruen
MH2

Boulder climber
Andy Cairns
Nov 28, 2016 - 07:30pm PT
Have to have some fun on this thread or it gets deadly and sinks like a ship's anchor. Poking fun at what we perceive are silly beliefs is part of the bargain.


Part of the fun:

Largo's physics

MikeL's math

Paul's anthropo-cosmology
jgill

Boulder climber
The high prairie of southern Colorado
Nov 28, 2016 - 07:58pm PT
I've been re-reading Owen Glynne Jones's book Rock Climbing in the English Lake District (1900), and Jones, a Physics Master in the London School District by profession, has this to say of a moment prior to a friend suggesting an ascent of Kern Knotts Crack: "I was lying on the billiard table just then thinking of the different kinds of nothing."

Jones (1867-1899) would have fitted right in on this thread.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 28, 2016 - 08:17pm PT
Fruity sez: What they mean to say, Silly Rabbit, is that the brain is an amazing perception generator (wiggle that finger in the corner of your eye!) and its outputs (sentience, qualia, consciousness) have illusory qualities re them.
---


No cigar on this one, Fruity. You simply haven't reasoned this through, nor studied enough functionalism (Dennett et al) to even know where those people stand on the subject.

The majority of functionalists deny the existence of qualia. Dennett does not, though he labels qualia, and 1st person experience, an illusion - something many more nuanced thinkers have recognized and clearly illustrated is a nonsense statement.

Dennett fouled his own argument in this regard by admitting that there is nothing more obvious than the apparent reality of our direct experience, the "inner light show" going on inside our heads. But as a functionalist, he is hidebound to the old behaviorist model (now junked) of trying to understand and define humans simply by dint of their observable behavior, viewed as a third-person phenomenon. Ergo, to this camp, only 3rd person phenomena meet the criteria of being real.

This leaves Dennett in the impossible position of accepting the experiential verity of 1st person phenomenon, but denying the "reality" of same because it is not a 3rd person phenomenon.

If you were to ask a physicalist what criteria would have to be met for 1st person phenomena to be "real," their only answer would be that subjective experience would have to be a 3rd person observable phenomenon.

At the very least this smacks of a tautology as described by the ancient Greeks - a statement that is true merely by saying the same thing twice. Or a statement that is true solely because of the terms and criteria involved (only the 3rd person is real).

But this is just the surface layer of the insuperable problems with staunch functionalism. The impossibilities arise when you look at the output belief in real-world terms, as in Hard AI. That's when the bottom falls out.

It would be interesting to see if someone like eeyonkee can reason out why this is so. I doubt it, but who knows.

And to answer your silly question about why poking your eye produces the illusory signal of light, sans photons - this is not a question about sentience at all. It is a question about content (an illusory light flash), not the active subjective experiencing of that content.

You quite naturally conflate the two because when you start with a machine-registration model of consciousness, content and awareness of content are one and the same, at least according to your belief (again, in mind studies this is typically called functionalism).

The difference between brain-generated content per external phenomena, and what that content is when objectively measured or evaluated, is an especially interesting study.

Surprisingly, many so-called science types are prone to conflate internal perceptions of so-called external objects with things that are "out there," believing the two are fundamentally the same. That is, the moon that we perceive and whose qualities we measure actually exists "out there," in basically the same form as we perceive it.

But again, the clincher here is in seeing why Hard AI and the machine output model - at least the one that goes along with Dennett's beliefs - are incompatible.
MH2

Boulder climber
Andy Cairns
Nov 28, 2016 - 08:35pm PT
But this is just the surface layer of the insuperable problems with staunch functionalism. The impossibilities arise when you look at the output belief in real-world terms, as in Hard AI. That's when the bottom falls out.


You are confused. That is the only conclusion.


You are also afraid of bogeymen. There is no AI, Hard or other. There are a variety of approaches to machine learning. It is too soon to say where and how far they may go.

MikeL

Social climber
Southern Arizona
Nov 29, 2016 - 07:52am PT
I can’t say enough about seeing other things than what we’re used to (for many reasons). Seeing perspectively, from a trained point of view, is limiting. This means there is more to reality than one sees or even can see. It seems the way that view can become more inclusive is to simply relax. Don’t grasp.

The less you look for, the more that shows up.

MH2: It is too soon to say where and how far they may go.

That, about everything, eh? It seems it’s too soon to say about anything. The understanding that comes out of that would suggest that one try to avoid taking any notion too seriously or concretely.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 29, 2016 - 10:21am PT
But this is just the surface layer of the insuperable problems with staunch functionalism. The impossibilities arise when you look at the output belief in real-world terms, as in Hard AI. That's when the bottom falls out.


You are confused. That is the only conclusion.


You are also afraid of bogeymen. There is no AI, Hard or other. There are a variety of approaches to machine learning. It is too soon to say where and how far they may go.

--------


MH2, you have to go into the corner again for reverting to your compulsion to do what psychologists call "inverting." That is, dodging the questions (how does the bottom fall out of Dennett's Folly when you start asking hard and specific questions) by making nonsense assertions: You are confused. You are afraid of bogeymen. There is no AI, hard or otherwise.

Of course there are millions of pages of commentary and thousands of books on AI, though the terms and parameters vary school to school. So let's take a quick look at what MH2 says doesn't exist. He's pulling a kind of Dennett Folly himself, but instead of saying "we only think we have experience," MH2 says we only think there is a subject called "AI," or that what AI REALLY is, is the study of machine learning. Of course there are many from the Artificial Brain camp who would take issue with MH2 on this account. But first let's look at a brief overview, starting with what is generally called AI-complete.

"In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI."

In this regard, AI machines are posited as nothing more than processing agents, data-crunching machines, or super duper Turing rigs. Sentience is not a prominent factor, or even a factor at all, in this work.

However, when you traverse sideways into things like the Blue Brain Project, conceived by one of the greatest hustlers and con men in the world - Henry Markram - you get quite another set of goals, parameters and promises. Specifically, that as early as 2030 the Blue Brain Project, with over a billion dollars of funding (much of it from the Swiss government), will have produced a thinking, feeling, talking, loving and fully sentient machine.

This work and these claims have been taken up and diversified by many camps and schools, most of which are versions of the artificial brain movement.

"Artificial brains are man-made machines that are just as intelligent, creative, and self-aware as humans. No such machine has yet been built, but it is only a matter of time. Given current trends in neuroscience, computing, and nanotechnology, we estimate that artificial general intelligence will emerge sometime in the 21st century, maybe even by the year 2050.

"We consider human consciousness to be the most pressing mystery, and yet most within our reach. By reverse engineering the human brain we will come to understand it. By reconstructing and enhancing the brain we will be empowered to push forward our understanding of the universe and to evolve life to the next level."

The AI field is very much divided as to the possibility of sentient machines, with many insisting that sentience is not and should not be a goal of AI, which should remain focused on machine learning. But the assumption that intelligent machines will also be sentient is a given to the majority of people raising alarmist warnings about the pending "singularity," or that time in the very near future (they insist) when we build machines that are more intelligent than we are.

"Some of today's top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that are likely to arise as a result of machines with motives. Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us—or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper. In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”

My point in this regard is not concerned with the ongoing debate within AI over sentience - either as a worthy goal or a non-starter - but rather with the deep-seated belief, held in their heart of hearts by most of the community, that sentient machines are at least theoretically possible - believing as they do in the functionalist or machine output/machine registration model, which at bottom is Dennett's position. As was recently stated in a fine article in Psychology Today,
"Strong A.I., by definition, should possess the full range of human cognitive abilities. This includes self-awareness, sentience, and consciousness, as these are all features of human cognition."

My point in all of this is not to refute the various camps of AI, but rather to investigate what their basic assumptions are, starting with Dennett's basic thesis that sentience is not a first person phenomenon at all, but rather a third person function, and that "we only think we have experience," which in reality is just machine registration that has reached a critical level of complexity.

Using this as a starting point allows us to dive into the subject and pretty quickly, by way of simply asking questions à la the Socratic method, see the bottom fall out of these basic assumptions about sentience.
paul roehl

Boulder climber
california
Nov 29, 2016 - 10:49am PT
My point in all of this is not to refute the various camps of AI, but rather to investigate what their basic assumptions are, starting with Dennett's basic thesis that sentience is not a first person phenomenon at all, but rather a third person function, and that "we only think we have experience," which in reality is just machine registration that has reached a critical level of complexity.

The idea that we only think we have experience, that experience is an illusion, takes us back to a descent into solipsism and raises the question: who or what is partaking in the illusion? The hard problem remains when we simply ask: what is it like to be anything? What is the taste of chocolate? What is any "experience?"
Ed Hartouni

Trad climber
Livermore, CA
Nov 29, 2016 - 01:13pm PT
The idea that we only think we have experience, that experience is an illusion, takes us back to a descent into solipsism and raises the question: who or what is partaking in the illusion? The hard problem remains when we simply ask: what is it like to be anything? What is the taste of chocolate? What is any "experience?"

why do you have to ask?
paul roehl

Boulder climber
california
Nov 29, 2016 - 01:25pm PT
why do you have to ask?

...because it's there.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 29, 2016 - 01:44pm PT
The idea that we only think we have experience, that experience is an illusion, takes us back to a descent into solipsism and raises the question: who or what is partaking in the illusion? The hard problem remains when we simply ask: what is it like to be anything? What is the taste of chocolate? What is any "experience?"

why do you have to ask?


Doesn't this arise from the same impulse to wonder about that boson over there?

I consider it fundamental to all inquiring minds to ask ontological questions: What the hell IS that? Followed by functional questions: How does that work? How does this apparent object or phenomenon interact with the rest of reality?
MH2

Boulder climber
Andy Cairns
Nov 29, 2016 - 03:18pm PT
I consider it fundamental to all inquiring minds to ask ontological questions: What the hell IS that? Followed by functional questions: How does that work? How does this apparent object or phenomenon interact with the rest of reality?


I inquire: why do you concern yourself with what you call Hard AI?

If you consider it to be impossible, why do you bother to discuss it?

In what way does it interact with your reality?
Ed Hartouni

Trad climber
Livermore, CA
Nov 29, 2016 - 07:29pm PT
Doesn't this arise from the same impulse to wonder about that boson over there?

perhaps, but I know a lot about bosons, and physical theories... if you are asking me about experiences it isn't at all clear what physical question you are asking, if you are asking one at all.

But the point was, you are asking a question, and that question, if it is about my "first person experience yada yada yada," is one I cannot answer, let alone in a way that you would understand.

However, we could agree that your third person account of your experience is similar to my third person account of the experience. Likely, we wouldn't discuss any of the ontological issues (what is experience?) but descriptions of "content."

Your witness compared to my witness.

A scientific description of a boson provides all the information for anyone to go and "see" the boson, the very same boson, and describe its attributes, which everyone "sees" as the same. The prescription for evoking the boson works every time for everyone.

When a boson behaves differently than expected, we ascribe that to some physical cause, and we study that and eventually understand it, expanding our notions of that particular boson and its interactions with the physical universe.

That is an objective universe. It's not about "witnessing".

The subjective universe is mine alone, you have yours.

When we get together and talk about it, we might agree on the similarities of those two subjective universes, but suddenly this starts to sound "objective" - that is, something common to the two of us that is not ours alone, but some property of us two...

...so we can go and talk to someone else and maybe find commonality, bootstrapping the agreement, and finding those agreements describing a more "objective" thing than was there before.

What parts are left in disagreement could very well be mine alone...


perhaps the "talking" has a lot to do with it.


I have to admit to a fear of getting the apostrophes wrong, what with all the grammar shaming that is coming from some thread participants... and punctuation was never, ever my strong suit.

MikeL

Social climber
Southern Arizona
Nov 29, 2016 - 07:44pm PT
DMT: There are many things, events, situations where gaining sufficient clarity short of 100% is absolutely vital. There's a tiger in the tall grass logic. Run, NOW!

. . . always with the evolution metaphor that begs the question. Everything gets explained by survival of the fittest. It’s how and why anything happens. *Because* this or that helps improve species survival, it self-validates by putting effect in front of cause. Ugh.

Try that approach on issues of poverty, world peace, climate change, how to create jobs in a declining economy, health care, etc. Put the effect before the cause. Prove a line of action that you believe is unavoidable. There is never “sufficient clarity” that stipulates vital assessments and decisions. It only seems that way, in your mind.

If you know what “degrees of freedom” are, then you can start looking at what must be impossible to rein in the probabilities for you to say anything. In brief: way too many things can happen / show up / exist. Approximations (instinct, emotion, narrative, and now metric science) are the loosest approximations. “It’s what works!”

There you go. You have explanation on your side, but you can’t say what you’re explaining.

I say there is nothing that needs explaining. Really. It just doesn’t matter.

Ed: if you are asking me about experiences it isn't at all clear what physical question you are asking, if you are asking one at all.

Oh, come on, Ed. You can help a guy out. It’s not his field. You know what he’s saying, don’t you?

(The rest of what you wrote was good for me.)
MH2

Boulder climber
Andy Cairns
Nov 29, 2016 - 07:53pm PT
I say there is nothing that needs explaining.


We agree. Please explain nothing.
Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Nov 29, 2016 - 08:46pm PT
That is an objective universe. It's not about "witnessing".

Name a single external object that has never been witnessed, including witnessing the results of instrumental output for what we cannot witness directly, or implied through the witnessing and measuring of other witnessed phenomena.

One of the snafus of postulating a supposedly mind-independent world "out there" is the fact that each designated thing is known only through witnessing. An unwitnessed thing is an unknown thing, or is implied or surmised through the witnessing of a related, and witnessed, phenomenon.

Through the belief in so-called mind-independent objects, or a stand-alone objective world, some come to consider mind driven by the same belief, and set up a straw man argument that mind, if "real," should be object independent. Ergo duality.

My sense of it is that the objective and subjective realms are inextricably bound, and it is impossible to render one in strictly the other's terms or form. Hard AI or artificial brain technology seeks to somehow translate the 1st person into 3rd person form, then output it back into the original dual form. This goal rests on the belief that, fundamentally, the 1st person IS a 3rd person phenomenon, that awareness is just machine registration with a side order of complexity. When you delve into AI with the hard questions, you start seeing the difficulties.

And MH2, I've been digging into AI for several reasons. One, a proper study of mind requires it. Two, it's fascinating. And three, I'm doing a long-form writing project involving some hard AI. Here's the setup:

By 2026, most people are listed, hot-linked to the internet.

Implants digitize thoughts and affect, routing them through the worldwide web. All knowledge is instantly accessible, every person a thought away. Speaking becomes optional.

Advances in quantum medicine and artificial intelligence promise immortality within a decade - the Phoenix Point, when an individual's memories and habits can be downloaded into a succession of young clones.

People cling to life. All risk is avoided. Full-immersion virtual reality apps largely replace the dangers of direct experience. Virtual cocaine and acrobatic sex - all you want for $200 a month.

But as "safe and cyber" becomes habit, suicide rates soar - till the FCC authorizes Scenarios, dangerous, sometimes fatal adventure aps enacted by a small, outlaw cast of Actuals who by choice lived unlisted and off line.

The average life span of an Actual is 3 years.




Ed Hartouni

Trad climber
Livermore, CA
Nov 29, 2016 - 08:59pm PT
ah yes, but witnessing is not the same thing as observing or measuring, certainly not used in the same way.

Witnessing brings in a person who sees something happen, and similarly, the testimony of someone who saw something happen... in both cases, the veracity of the witness depends on their reliability, their trustworthiness.

In that sense, "witness" has a lot to do with our experience, and the description of that experience, and our oath that it is truthful.

That's not how a measurement or an observation, at least in science, is made. The veracity of the measurement has to do with the ability to repeat it independently, in an "objective" manner.

Witnessing is subjective... one witnesses miracles, and provides testimony of such, we don't measure miracles, nor do we observe them.
