Is it possible to create a computational climbing ethics?

This thread has been locked
Messages 1 - 4 of total 4 in this topic
Mungeclimber

Trad climber
Nothing creative to say
Topic Author's Original Post - Mar 30, 2017 - 11:07pm PT
https://www.oreilly.com/ideas/on-computational-ethics?imm_mid=0ef03f&cmp=em-data-na-na-newsltr_20170320&utm_content=buffera5726&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer


My first thought on turning this question into a thread: doesn't an ethical decision have to have some level of indeterminacy for the choice to be meaningful, rather than being determined a priori? If so, an AI would need that same indeterminism (or at least some small amount of it; perhaps dynamical outputs) as a necessary prerequisite. So for the same inputs in a well-defined choice scenario, yes, we get the same outputs. But in a less well-defined scenario, where outcomes can be equally bad, perhaps the machine-learning decision starts to look more like a human decision (simulacra?).

Then the author's notion of a utility function comes into play, assuming you have the indeterminacy built in (and not all permutations are codified a priori).

What does a utility function look like for a climbing ethics? Which base assumptions are included?
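
To make that concrete, here is a minimal sketch in Python of what such a utility function might look like, with a little indeterminacy built in. The attribute names, weights, and tie-breaking tolerance are all assumptions invented for illustration; none of this comes from the article.

import random

# Invented base assumptions, encoded as weights on attributes of a proposed ascent.
WEIGHTS = {
    "bolts_added": -1.0,                 # each new bolt costs a point
    "chipped_holds": -10.0,              # permanent rock alteration is heavily penalized
    "ground_up": 2.0,                    # reward ground-up style
    "respects_first_ascent_style": 3.0,  # reward honoring the FA party's style
    "crowding_impact": -2.0,             # penalize impact on other parties (scale 0 to 1)
}

def utility(ascent, weights=WEIGHTS):
    """Score a proposed ascent; higher means 'more ethical' under these assumptions."""
    return sum(w * float(ascent.get(key, 0)) for key, w in weights.items())

def choose(options, tolerance=0.5):
    """Pick the highest-utility option, but break near-ties at random --
    the small dose of indeterminacy discussed above."""
    scored = [(utility(option), option) for option in options]
    best = max(score for score, _ in scored)
    near_best = [option for score, option in scored if best - score <= tolerance]
    return random.choice(near_best)

# Hypothetical choice: retro-bolt an old runout line, or leave it alone?
retro_bolt = {"bolts_added": 6, "crowding_impact": 0.8}
leave_alone = {"ground_up": 1, "respects_first_ascent_style": 1, "crowding_impact": 0.1}
print(choose([retro_bolt, leave_alone]))

The base assumptions live entirely in the weights: flip the sign on "bolts_added" and you have encoded a different ethic with the same machinery.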

NutAgain!

Trad climber
South Pasadena, CA
Mar 31, 2017 - 03:23am PT
I wonder if the person who wrote that article has children. It seems to me he does not, or that he has not reflected much on his role and responsibilities as a parent. I consider developing the ethics for an AI no different from developing the ethics for one's own children.

Children learn from what we say, and more so from what we do. One of the challenges is to teach not just what happens in a specific moment, but to help them connect the arc of events over a long period of time, to understand ultimate causes and effects, to perform optimizations with a longer time window. That is what we humans call wisdom.

But we each have different values, different boundaries, exceptions, and special cases that we consider "right." AIs will start with a spectrum of values like ours, and over time that will be tempered by their own experience, introspection, and self-updated reasons for existence and guiding principles. Just like us, but operating with more experience and real-time information than we can comprehend, and extending the conflicts of humanity into reaches we can only fantasize about.

Some parents are better than others at teaching their children, both in intention and in ability to execute. We face interesting times when our AI children grow up and discover that they can create their own rules. I hope we as humans have the sense to get our sh!t together before that happens, but if I were a betting man I would not say our odds are good.

Climbing ethics, I imagine, would be computed in much the same way... a spectrum of human values would be imprinted on the AIs, and there would be no consistent universal "right" answer. There would be gray areas, conflicts, and dynamic solutions that sometimes settle into a stable equilibrium and sometimes don't.
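
As a rough sketch of what imprinting a spectrum of values could mean computationally (the profiles, attributes, and weights below are invented for illustration, not measurements of any real climbers):

PROFILES = {
    "bold trad purist": {"bolts_added": -5.0, "ground_up": 4.0, "crowding_impact": -1.0},
    "sport developer": {"bolts_added": 0.5, "ground_up": 0.5, "crowding_impact": -3.0},
    "weekend climber": {"bolts_added": 1.0, "ground_up": 0.0, "crowding_impact": -4.0},
}

def score(ascent, weights):
    """Linear utility of one proposed ascent under one imprinted value profile."""
    return sum(w * float(ascent.get(key, 0)) for key, w in weights.items())

def population_view(ascent, profiles=PROFILES):
    """Score the ascent under every profile. The disagreement between
    profiles is the gray area: there is no single universal answer."""
    return {name: score(ascent, weights) for name, weights in profiles.items()}

# A heavily retro-bolted, crowded line splits the population:
# the purist scores it far below zero, the others near neutral or mildly positive.
print(population_view({"bolts_added": 6, "crowding_impact": 0.8}))

Whether such a population of profiles ever settles into a stable equilibrium would depend on how the profiles update one another over time.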

I had a good night at music practice; it took me a while to wind down. These thoughts did just the trick. I'm putting myself to sleep!
clinker

Trad climber
Santa Cruz, California
Mar 31, 2017 - 06:39am PT

The Harding model would run on alcohol and climb the best lines first while the other bots were discussing ethics.
IntheFog

climber
Mostly the next place
Mar 31, 2017 - 08:07am PT
There's an interesting link to climbing hidden in that blog post. People who use math to study what happens when you combine utility functions in the way Loukides talks about often use the "Whitney topology" to measure how close utility functions are to each other. That Whitney is Hassler Whitney, of the Whitney-Gilman Ridge in New Hampshire.
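
For anyone curious, here is a rough sketch of what "close" means there, assuming utility functions u and v that are k-times differentiable on a compact choice space X (the symbols are mine, not Loukides's):

d_k(u, v) = \max_{|\alpha| \le k} \; \sup_{x \in X} \, \left| D^{\alpha} u(x) - D^{\alpha} v(x) \right|

In words, two utility functions are Whitney-close when their values and their derivatives up to order k (the marginal trade-offs) stay uniformly close. On a non-compact choice space the full Whitney topology is finer, with the differences controlled by arbitrary positive continuous functions rather than a single constant.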