What is "Mind?"


Discussion Topic

This thread has been locked
Messages 11881 - 11900 of total 22307 in this topic << First  |  < Previous  |  Show All  |  Next >  |  Last >>
paul roehl

Boulder climber
california
Jan 15, 2017 - 04:47pm PT
I don't see how consciousness can become obsolete when all information becomes irrelevant without it. Harris constantly declares that intelligence is processed information but fails to define what he means by "process." It seems to me that processed information doesn't become intelligence until it is realized, and realization in the human sense, as in "I realize it's hot," is something that will be difficult/impossible to put into a machine. A thermometer will never "realize" a temperature even though it contains accurate information with regard to temperature.

Simply put, information is not knowledge; it is not knowing. Information is great, but if there is nobody there to know it, what do you really have?
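The thermometer point can be made concrete with a toy sketch (the class and all names are invented for illustration): a program can store and report accurate temperature information without anything anywhere that "realizes" it.

```python
class Thermometer:
    """Holds accurate temperature information, yet nothing 'knows' it.

    A hypothetical illustration: the object stores a number and maps it
    to a label by rule, but there is no subject for whom the reading
    is hot or cold.
    """

    def __init__(self, celsius: float):
        self.celsius = celsius  # accurate information about temperature

    def report(self) -> str:
        # Pure rule-following: symbol in, symbol out. No realization occurs.
        return "hot" if self.celsius > 30 else "not hot"

t = Thermometer(35.0)
print(t.report())  # prints "hot" -- but nothing realized that it is hot
```

The sketch only dramatizes the distinction being argued: the information is all there, and the question is whether anything more is needed for knowing.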
jgill

Boulder climber
The high prairie of southern Colorado
Jan 15, 2017 - 05:52pm PT
A thermometer will never "realize" a temperature even though it contains accurate information with regard to temperature


As James Bond would say, Never say never.

I don't see how consciousness can become obsolete . . .

But the notion or concept of consciousness - a vaguely defined word in itself - might be replaced by something we cannot imagine at this time.
WBraun

climber
Jan 15, 2017 - 06:17pm PT
The mind is NOT gross physical material.

The mind is subtle material.

The soul (the living entity itself) is either controlling its mind or being controlled by its mind according to the consciousness the living entity has developed.

The gross materialists are in bondage by their own minds because their consciousness is fixed on the material only platform .......

The gross materialists thus remain clueless using materialism as their only theory.

MH2

Boulder climber
Andy Cairns
Jan 15, 2017 - 07:42pm PT
Simply put information is not knowledge, it is not knowing.


Is this information, or knowing?

Could you have knowing without information?

Could you have information without knowing?

How do you tell the difference between information and knowing?
WBraun

climber
Jan 15, 2017 - 08:14pm PT
Yes

When the supersoul (the real source of all intelligence) instructs the soul, then there is intelligence.

The gross materialists just plain guess ....
MH2

Boulder climber
Andy Cairns
Jan 15, 2017 - 08:17pm PT
The difference between information and knowing is intelligence.

Could you have Artificial Knowing?

How would you tell the difference between artificial and real?
WBraun

climber
Jan 15, 2017 - 08:23pm PT
That's easy and if you don't know the answer to that then you need to do some very serious work ......
MH2

Boulder climber
Andy Cairns
Jan 15, 2017 - 09:36pm PT
At least MikeL explained to me what work is, not long ago.



Largo

Sport climber
The Big Wide Open Face
Topic Author's Reply - Jan 16, 2017 - 04:49pm PT
A canard, Ed, is an unfounded theory or story.

This tells me you never looked at the video, which answers most of your basic questions. A simple look and you'd understand Searle's observation (and many others) that in terms of sentience, computer simulation is NOT duplication - and you'd know why it's not and never can be.

I do give you credit for a well-thought-out presentation on simulation according to your field and your normal subject matter: inanimate external objects and phenomena. I have absolute faith that you are spot on in your observations - in that field.

IMO, the mistake you are making is believing that the rules in that field are exactly the same in the field of mind, requiring no adjustments or changes in strategy merely because the subject matter is different (How, after all, is a modality seeking observer-independent data supposed to provide insight per an observer?). To me, that is a little like saying we can play football according to the rulebook for cricket, and then you and others insisting that football IS cricket.

But let me try and explain.

I have avoided going scholastic in terms of the philosophy of mind. Doing so involves a lot of lingo and proofs and so on that require at least some study simply to understand. Just as one cannot expect to understand gauge fields and differential cohomology through reading a few blog posts, the philosophy of mind is a highly technical scholastic pursuit that's been worked over by 1,000s of smart folks, and even then there are huge disagreements, so the discussions are not always duck soup. But it's come time to get technical, lest the confusions and conflations you and others are presenting remain a hurdle that we keep stumbling over. So here goes.

First, get jiggy with a few common terms and propositions. Start with Searle's contention that per mind, we cannot reduce anything to anything else, or else - voila, it's something else. As one Oxford neuroscientist said recently, trying to reductively solve sentience is not solving the hard problem of consciousness. It's merely miniaturizing it.

While not nearly the case with external physical objects, with sentience, as anyone can empirically discover for themselves, dancing neurons are not consciousness ITSELF. Even if you cling to the biological substrate many associate with mind, that substrate will not disclose subjective consciousness. Third person inquiries never betray 1st person reality, though they tell us a hell of a lot about objective functioning. And with inanimate objects, there is usually, if not always, nothing more to know or acknowledge than functionality.

However, applying this strategy to mind is the best way to default out of the discussion, and the subject of study, altogether. As we will see.

Another way to look at this is: consciousness is NOT intrinsic to physics. This leads many to assert that mind is irreducible, that by isolating out the parts, you entirely miss one of the essential facts of consciousness: Unity.

One neuroscientist puts it this way: If you were watching a football game but the image was tight on just one of the 22 players in the game, you would never by virtue of the images on the screen know what football was about, nor could you determine the nature of the game at large.

This example is slightly off the mark because you cannot use metaphorical examples with much precision with mind, because consciousness is not "like" anything else in nature. We can easily see why.

So as we move forward into the body of Ed's arguments, be on the watch for several persistent and defeating habits of conflation: One, everything in nature is ITSELF entirely physical and only physical, and two, objective and subjective are identical.

Or put differently, to adopt a materialist POV, whatever "causes" the behavior of molecules is identical with the behavior of molecules - sort of like saying that an economy is identical with paper money.
For any number of reasons (most of the arguments are cooked up by mathematicians), we cannot logically conflate syntactical processing (or syntax) with semantics (or conscious knowing/intelligence). This was well established in the 1920s, when Wittgenstein and others did their initial work spelling out the difference between epistemic and ontological realities - and a lot more.

So what happens when we conflate 3rd and 1st person realities, or posit the objective and subjective as identical? We come to believe that a mind simulation is selfsame as an actual mind, that people are not building computers and data processors, they are, as Kurzweil claims, "building minds." Knowing otherwise takes some real study to wrestle down and understand. Here is a brief review of the issues as clearly as I can present them in shortish order:

The "Principle of Identity" is a basic, empirically based proof that anyone can verify for themselves. Like many proofs in systematic logic, the implications are multi-layered, so in regards to mind, it is instructive to review the principle from several angles. First, a pedestrian example.

Take two people. One, call him Bruce, is tall, dark, athletic, smart, and Chinese. The other, call her GiGi, is small, fair, sedentary, slow, and Italian. In the simplest terms, because Bruce has salient features that GiGi does not, and vice versa, we can definitively say that Bruce and GiGi are not identical, that they are clearly TWO DIFFERENT PEOPLE.

Logic would say it this way: If X = Y, then whatever is true of X must be true of Y, and vice versa. If there are properties that are true of X but NOT true of Y, then we can definitively conclude that X and Y are not equal (identical). They are two different phenomena, NOT one. Or at any rate they are two different phenomena existing within one system.
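In standard notation this is Leibniz's law (the indiscernibility of identicals) and its contrapositive, which is the form the argument above actually uses:

```latex
% Leibniz's law (indiscernibility of identicals):
x = y \;\rightarrow\; \forall P\,\bigl(P(x) \leftrightarrow P(y)\bigr)

% Contrapositive: one property that holds of x but not of y
% suffices to show the two are distinct.
\exists P\,\bigl(P(x) \wedge \neg P(y)\bigr) \;\rightarrow\; x \neq y
```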

It is widely held that the same principle applies to mind and matter. And yet even this gets needlessly argued, on metaphysical, not empirical grounds. The empirical evidence, the points that we can all agree upon, are quite clear. Look at a few obvious differences:

1. Mind is private to the individual who has it (we cannot observe mind from a 3rd person perspective)

2. Thoughts are about things. Ergo thoughts have meaning.

3. The mind (or the host) has subjective, 1st person experience. There is the direct, experiential reality of being you and experiencing the thoughts, feelings, sensations, memories, and so forth that constitute the CONTENT of consciousness.

4. And most importantly, we are AWARE of experiencing said content.

Matter, on the other hand is:

1. Public. Anyone can observe matter.

2. Matter is meaningless. Unless we assign meaning to material things, they are inherently meaningless.

3. Matter has no qualitative properties. The color red corresponds to a certain wavelength, but a purely physical description is entirely devoid of the subjective way red looks to a sentient subject.

4. And most importantly, awareness is not intrinsic to matter.

Ed's position - so far as I can tell - could roughly be called "Identity materialism," that brain states are identical with subjective states. The "multiple realization" thought experiment answers Identity Materialism like this:

Because a materialist is by definition a determinist, he believes that a given brain state can only produce one mental state, and conversely, a mental state such as hunger or thirst can only be "caused" by one specific brain state. Likewise, one brain state could not produce the subjective experiences of both thirst and hunger. There is a determined, one-to-one correspondence between one specific brain state and one specific mental state. In staunch terms, the brain state and mental state are identical, or even more radically, the mental state is illusory. What's REALLY going on is the brain state. We only BELIEVE we have a mental state that is ... so on and so forth...

But imagine another sentient being who is biologically much different than us carbon based humans, but is still organic and still consumes, by physical necessity, food and drink of some kind. When that need for sustenance is unmet, it is axiomatic that the Martian or Venusian will experience thirst, and yet its corresponding biology or brain state would not at all correspond to the human's brain state.

Ergo, at least in theory, biological states and mental states are not identical.
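The multiple-realization argument can be sketched in code (all class names and internal variables here are invented for illustration, not anyone's actual model): two systems with entirely different internal physical states can both realize the same mental-state label, so the mental state cannot be identical with any one physical state.

```python
class HumanBrain:
    """Hypothetical carbon-based realizer of thirst."""

    def __init__(self):
        self.c_fiber_activity = 0.9  # invented stand-in for a human brain state

    def mental_state(self) -> str:
        return "thirst" if self.c_fiber_activity > 0.5 else "sated"

class MartianBrain:
    """Hypothetical realizer with an entirely different substrate."""

    def __init__(self):
        self.silicon_flux = [1, 0, 1]  # no overlap with the human's physical state

    def mental_state(self) -> str:
        return "thirst" if sum(self.silicon_flux) >= 2 else "sated"

# Same mental state, no shared physical state: a strict one-to-one
# identity between one brain state and one mental state cannot hold.
assert HumanBrain().mental_state() == MartianBrain().mental_state() == "thirst"
```

The sketch is only the logical skeleton of the thought experiment: identity fails as soon as one mental-state type has two physically dissimilar realizers.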

Another one of Ed's basic assumptions that, IMO, is holding him back - and one known in the philosophy of mind for ages - is just this: With external physical objects, once the molecules get dancing in a certain way, a given object - say a table - HAS to be solid, according to the laws of physics. However we can't in this same way say that once the neurons in the brain start dancing in a certain way that the entity HAS to be conscious.

Quick review of a term: Supervenience - a word that is used to denote a relation of dependence between objects or phenomena.

The way that consciousness supervenes on brain processes is different than the way that solidity supervenes on the table. This is an involved study, and fascinating; but enough said for this discussion. With that run-up, let's look at Ed's actual words:

-

Says Ed: Now we understand that the simulation "is not the physical system" but successful theories, upon which the simulations are built, are predictive, which means they can perform the same as the "physical system."
-

Here it seems that Ed is angling towards conflating the physical brain ("system") with consciousness, thus violating the Principle of Identity. For Ed, apparently, there is no difference between dancing neurons and conscious awareness. They are identical.

Next, Ed seems to be conflating observable physical "performance" (behavior) with being, in some sense, conscious.

If this is in fact what Ed is attempting, then he is positing a behaviorist-based simulation in order to make predictions on what the machine (or simulated conscious person) will do. I suspect Ed believes that if the simulation can physically "perform" like a sentient being - then what? We can start to see the problems here rather easily.

First, there is nothing whatsoever to suggest that if a robot could be programmed to act like a human that it would be conscious. Consciousness cannot be defined entirely by the performance of the physical body (staunch "functionalism"). Mechanical, robotic behavior is only the "same" as conscious behavior in terms of it aping physical movement.

For example, there's no empirical evidence whatsoever to suggest that the most ideal Turing machine would ever know what it's doing, has a choice in the matter, or is consciously aware that it is simulating a conscious person, or is simulating anything.

In a conscious human the physical actions usually issue - to varying degrees - from a knowing and aware subject, a sentient being who has an inner experience corresponding to the physical behavior. There are no such associations going on with the machine. The machine is blank inside. There is only processing.
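The "blank inside, only processing" point is essentially Searle's Chinese Room, and it can be caricatured with a toy lookup table (the rulebook and its entries are invented for illustration): the program "converses" by pure symbol manipulation, with no awareness of what any symbol means.

```python
# A caricature of syntax without semantics: the "room" matches input
# symbols to output symbols by rule, understanding nothing.
RULEBOOK = {
    "how are you?": "fine, thanks",
    "is it hot?": "yes, very",
}

def room(prompt: str) -> str:
    # Pure syntactic lookup; nothing here knows what the symbols mean.
    return RULEBOOK.get(prompt.lower(), "please rephrase")

print(room("Is it hot?"))  # "yes, very" -- behavior with no knowing behind it
```

From the outside the replies may pass for understanding; the dispute in this thread is precisely whether anything more than this kind of lookup is happening in any purely syntactic system, however large.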

----


Says Ed: Such a simulation of "mind," were it to be successful, would produce behavior we associate with "mindfulness." Before totally objecting to this, ponder the meaning of the clause: "were it to be successful".

Now whether or not we can "download" the state of a specific mind into the simulation is another issue, but a side issue I believe.
-


It sounds like Ed is assuming that if a syntactical processor or machine could simulate conscious BEHAVIOR, it perforce would be "mindful," that it would be consciously aware that it was simulating said behavior, etc. We must ask: how so, and why?

Again, does Ed believe that the machine would have conscious understanding of its actions simply by virtue of brute processing power and stimulus-response behavior? And on what grounds is behavior the benchmark of consciousness, intelligence, and awareness?

Surely we can see Ed angling toward a phenomenon we can observe externally - physical behavior - and perhaps hanging any "proof" of consciousness on that. And that's despite the fact that consciousness itself is not observable. So Ed tries a derivative method of divining consciousness by way of physical actions.

---


Says Ed: The major objection that I get from the philosophers, and from Largo, is that there is something subjective called "experience" that cannot be so reproduced.

---------


First, we must ask Ed if he is aware of having subjective experience. Does Ed hear sounds, smell the roses, taste the coffee, think the ideas and feel the emotions and sensations and experience all the other stuff we associate with subjectivity?

The nonsensical idea that "we only think we have experience" is one of the biggest howlers in the philosophy of mind, one that entirely misses the point.

The point is not and never was about content, about the verity of WHAT we experience, rather whether or not we are consciously aware of ANYTHING - including the illusions we might or might not have or believe in.

As Searle and many others have tried to make clear, we cannot be mistaken about being aware. Awareness is not some aspect of content (thought, feeling, belief, etc.) that we can be wrong about. So that much we can agree upon. We ARE aware.

The confusion here, bolstered by Dennett's Folly ("we only believe we have experience") is a quintessential category error issuing from conflating CONTENT with AWARENESS. As I have mentioned elsewhere, IMO, the basic flaw to the way Chalmers framed his "Hard Problem of Consciousness" is that he anchored the whole shebang on content, the qualitative, "what it's like" facet of consciously experiencing a subjective state or quale (thought, sensation, etc.). This has led to all kinds of confusion as people from various camps attempt to reify internal states or facets of content (qualia) into external things or objective phenomena of some ilk in which they can either standardize or in some way quantify that state (or relate it as identical to neural substrates) much as they do the movement of an inanimate object.

The crux of the matter is not the content, rather that we are consciously AWARE of that content in the first instance, and aware in a way that is above and beyond the machine registration seen, say, in a driverless car or a Mars probe. Most arguments about the "Hard Problem," including Dennett's Folly, are really arguments about the verity of conscious content, whereby to the materialist, 3rd person content is "real," and 1st person content is "imagined." The confusion, and indeed the argument, dissolves once the focus shifts from content to awareness itself.

No one, neither the idealist who believes mind "creates" reality, nor the materialist who believes material "creates" mind, has a leg to stand on in asserting that "we only think we are aware."

For example, my girlfriend lives in Switzerland, and imagine if, when visiting CERN, I went into command central and said to the guy with the bad haircut and fifty pens in his pocket that "you only imagine you are seeing (fill in the blank) inside the LHC with the 27-kilometer ring of superconducting magnets with a number of accelerating structures to boost the energy of the particles along the way." He would say, "Jeepers. Don't you know that we are able to test our findings and prove them that way, that what we measure is really there."

Now if we went to Ed or Fruitcake or Dingus or whoever and asked them to prove that they had subjective experience, and to describe what that experience was, we would be up against one of the true challenges of the entire mind adventure. But notice that in both cases we are looking at conscious content, the difference being that one category of content is objective and external (quarks, bosons, etc), and the other is subjective and internal (experience).

Now imagine that we stood side by side with both the scientist with that haircut and all those pens and Dingus himself, and said, "you guys only believe you are aware of climbing Sentinel and running experiments on "forward particles." Neither Dingus nor the scientist would take the question seriously.

So while we might question Dingus' experience of climbing Sentinel and the results of the scientists' experiment, no sane and sober person would question that both Dingus and the scientist are aware. It really goes without saying and is a given in being alive.
---

Going on, Ed says:

However, our access to that subjective state is only reported by our accounts of it, our behavior, so it is entirely possible that that behavior could be simulated.


This will take a bit to unpack, because Ed has attempted to yank the whole caboodle into a 3rd person POV, which can only tell one side of the story. Why? Because Ed cannot pull a measurement unless he is observing an external phenomenon, and absent that, it apparently is game over for Ed - which is a shame, because if Ed ever looked directly at mind itself, instead of what he associates with it, he might make a difference.

Anyway, while it is true that consciousness itself can only be reported (from the first person seat of consciousness) to a 3rd person, a healthy human can "access" their awareness whenever they are consciously awake. We do it continuously, all day long.

What no one can do is export their mind or transmute it into an external object so Ed can pull a measurement. So Ed, feeling jobbed by the 1st person reality of consciousness itself, goes to what he CAN observe: behavior.

Psychology did the same thing during the heyday of behaviorism, a learning theory that focused strictly on objectively observable behaviors and completely ignored any and all associative mental activities. Behavior theorists tried to define learning as nothing more than the acquisition of new behavior based on environmental conditions. Of course the theory was vastly deficient, blew up and was replaced by cognitive science. But old habits die hard.

But it is worth looking at the reasoning as to why Ed has chosen to look at the mind adventure in the way that he has. I believe that to understand just where Ed is coming from on this is to know why his approach can never pan out and can only confuse and misdirect researchers going forward, in a sense sending them to the desert to find water that simply ain't there.

What does Ed do in his day job as a scientist? He looks at the BEHAVIOR of external physical objects or forces, and in some manner or another relates them back to other, usually smaller, external objects/particles, seeking physical causes in order to derive theories, run experiments and make predictions.

Note that while awareness mediates every step of this process in the epistemic sense, mind, consciousness, awareness and all the other words linked to sentience are not in play. Ed's findings are, he believes, ontologically objective and observer-independent.

Ed sees a behavior or function, observes it, measures it, and relates or reduces the behavior to antecedent mechanical processes of other observable external objects, including quarks and bosons etc. So there is usually if not always a causal chain, however vague, between the observed behavioral phenomenon - itself physical - and the physical substrate.

So what is Ed to do with mind, which is not itself observable, and is, as Ed states, only accessible to a 3rd person perspective by purported accounts of it and, secondarily, by the observable physical behavior of the purported conscious being?

The problem here, the obvious limitation, is that the subject in question is mind, consciousness, sentience, awareness, and all the other muddled terms we use referring to the 1st person reality we actually live. And all the salient features of 1st person reality - that it is private to the individual who has it, and that we cannot observe mind from a 3rd person perspective; that we have thoughts about things and phenomena and those thoughts have meaning; that we have subjective experience (thoughts, feelings, sensations, memories, and so forth) that constitutes the CONTENT of consciousness; and most importantly, that we are AWARE of having said experience - all of this presents an aggregate phenomenon that is entirely absent in Ed's normal investigations of external material objects and phenomena.

So what are Ed's options at this point? Since Ed has shown no willingness to expand the parameters of his mode of investigation to include mind itself, he is left to try and change the very ontological nature of WHAT he is investigating. This is accomplished by a rather ham-fisted bait and switch tactic, whereby sentience is swapped out for behavior (behavior is "identical" with consciousness), an observable function. It follows that if we can trace the antecedent, reductive origins or causes of that behavior back to brain function, we can, by Ed's logic, start developing theories about what mind IS, having never actually investigated mind itself, rather the behavior of a subject purported to HAVE a mind.

What's more, we can also start making predictions about that behavior entirely on the basis of brain function, whereby that behavior can be posited as entirely mechanical and determined and - voila, we've wrangled the whole question of mind down and tossed mind itself out of the equation in the bargain.

Ultimately, once a person has waded through all these basic arguments, Ed's position simply does not square with the empirical data, though Ed might argue that epistemically, only a 3rd person POV can deliver real empirical data. Another issue, I believe.

But most people, with enough thought and study, normally give up on the myth that matter and mind are identical. But they don't give up on the metaphysical belief that mind is entirely mechanical and determined. This leads to the fallback belief that mind is dependent on (supervenes on) the brain without being identical to it. And this leads to the next layer of investigation: while the materialist may begrudgingly admit that mind and consciousness are real, and are not identical to the brain, he holds that mind nevertheless exerts no causative effect on behavior, or anything else. Mind, then, is simply a neurological artifact that is along for the ride aboard a totally mechanical and determined machine.

Not remotely so, though this plays out in ways that are not so obvious to someone fused to a 3rd person POV. Oddly enough, it is through investigating the experiments of Benjamin Libet that the functional role consciousness plays in behavior is made clear. Free-will deniers like Jerry Coyne have cited Libet's experiments as scientific evidence that free will is an illusion, and that "voluntary" decisions are really generated by electrochemical processes in the brain, without our consent or knowledge. Our sense of free will is thus only a post-hoc belief imposed by our brain, which is really "making the decisions." As I will show in my next post, this is not true, as Libet himself pointed out.
Ed Hartouni

Trad climber
Livermore, CA
Jan 16, 2017 - 05:34pm PT
'Before totally objecting to this, ponder the meaning of the clause: "were it to be successful"', here as a physical theory, that is, predictive and testable...

try again.

As far as living in my mind, I've done it for 63 years now and am totally at home with it, subjectively, not that you will ever know (or could know).
MH2

Boulder climber
Andy Cairns
Jan 16, 2017 - 07:00pm PT
You have a monkey on your back, JL. You try to put it on others'.

does Ed believe that the machine would have conscious understanding of its actions simply by virtue of brute processing power and stimulus-response behavior?


Do you believe that a human would have conscious understanding of its actions simply by virtue of brute processing power and stimulus-response behavior?

What is the missing ingredient?
jgill

Boulder climber
The high prairie of southern Colorado
Jan 16, 2017 - 08:23pm PT
First, there is nothing whatsoever to suggest that if a robot could be programmed to act like a human that it would be conscious (JL)


And nothing whatsoever to suggest that if a robot could be programmed to act like a human that it would not be conscious (whatever consciousness is).

If quantum computers are realized and AI entities begin to appear to act on free will, learning what they seem to want to learn, tens of thousands of pages of philosophical discourse will become irrelevant. The very concept of consciousness may undergo revision. Hold-out philosophers may then argue that human consciousness is innately superior to cyber consciousness, and the production of another thousand reams of inconsequential philosophy will commence.

For example, there's no empirical evidence whatsoever to suggest that the most ideal Turing machine would ever know what it's doing . . .

Since there is no ideal Turing machine extant for observation, clearly there can be no empirical evidence. You speak only of empirical evidence or lack thereof arising from observing current technology. This is the sleight of mental hand that philosophers have mastered to paper over their lack of substance. Sad.
Ed Hartouni

Trad climber
Livermore, CA
Jan 16, 2017 - 08:34pm PT
I think Largo has missed my point, again...


jstan

climber
Jan 16, 2017 - 08:56pm PT
Two questions:
1. Why do we allow this argument, structured as it is, to occupy us when many of the participants are actually asking substantive questions?

2. This site, other sites, and human history pretty conclusively show that, despite agreed-upon total mastery of rational thought, humans quite regularly run off the road. We seem to be assuming artificial attempts at intelligence will not suffer from the same flaw. If it does suffer from this flaw, the new creation will arguably be artificial but unintelligent.

Be very careful what you try to emulate.
MikeL

Social climber
Southern Arizona
Jan 16, 2017 - 09:19pm PT
Jstan: . . . many of the participants are actually asking substantive questions?

(I have to get caught up in this thread. Looks like good conversation. Family has been visiting, and it's been trying and exhausting. I need counseling.)

Jstan, there is not a more substantive question than that which regards what you are.
jgill

Boulder climber
The high prairie of southern Colorado
Jan 16, 2017 - 10:00pm PT
^^^^ It's what you do. We are the sum of our actions.
paul roehl

Boulder climber
california
Jan 16, 2017 - 10:20pm PT
And nothing whatsoever to suggest that if a robot could be programmed to act like a human that it would not be conscious (whatever consciousness is).

Ask yourself this: can a machine become alive if it has an extravagant complexity, and if a machine simply "appears" in its complexity to be alive should we accept it as a living thing? Can life be produced out of inanimate, inorganic material: circuitry and metal? Can we make metal and circuitry so complex that it will produce a living entity? Can life be produced/constructed outside the realm of biology? Our only experience with consciousness requires the accompaniment of a living thing. I know of no examples of inanimate objects that are conscious.

It may very well be that the predicate of consciousness is, in fact, life.

If intelligence and consciousness can exist without inhabiting a living entity why haven't they done so as part of an evolutionary process?
Ed Hartouni

Trad climber
Livermore, CA
Jan 16, 2017 - 10:39pm PT
what is life?
certainly we have abandoned ideas of "life force" in biology and seek physical explanations.

similarly suggested, above in the thread, is that "mind" is what the brain does, since not only do we associate "mind" with something living, but that something living has to have a sufficiently complex nervous system to exhibit those properties we attribute to "mind."

paul roehl

Boulder climber
california
Jan 16, 2017 - 10:56pm PT
Yes, what is life?

Well for one thing it's a product of organic processes.

To create and declare a machine conscious is to declare that machine alive.

To declare an inorganic object alive seems a step into madness.

Again, there is a basic problem here in understanding the difference between information and realization.

Intelligence is more than processing information; it is realizing that information as experience.
Ed Hartouni

Trad climber
Livermore, CA
Jan 16, 2017 - 11:50pm PT
To declare an inorganic object alive seems a step into madness.

but what difference does it make if there is carbon involved (thus organic)? the chemistry is the chemistry, which we understand for organic and inorganic molecules.

are you saying that carbon is special in some way, it is the 'element of life'?

living things obey all the chemistry of non-living things.

(Werner will have his usual response).