old_nick Posted December 6, 2011

This is the first of a few philosophy forum threads I have in mind for transhumanist issues. This one is, the clever may have noted, on AGI. AGI is artificial general intelligence: artificial intelligence with at least a human level of intelligence and emotional depth, if not more so. I thought it would be fun to discuss here, and I'd like to lead with a few questions for everybody.

1) First, the word "artificial": should it be used at all? Especially given that, after several iterations, such intelligence would be reproducing of its own accord without typically requiring human design or interaction. It seems we're lumping everything into either natural or artificial, and that strikes me as a false dichotomy. I prefer the phrase machine intelligence, which is already in somewhat common use. Any thoughts?

2) For those who believe in souls and other similar superstitions: do you think a machine intelligence would have a soul? Is there some vital distinction between our wetware computer and their hardware computer? Could they pray, get saved, need to get saved, perform magic, et cetera? I do not believe in such things myself and make no effort to hide that. But I am genuinely curious about how those who do believe in such things would approach it.

3) Marriage. Should "adult" machine intelligences be allowed to wed other machine intelligences? How about humans? If so, why? If not, why not? I personally have no issue with sentient, sapient operators capable of informed consent wedding one another, be they human and/or machine intelligence. As they are (in our hypothetical questions) capable of intellectual and emotional depth, I see no reason to limit their personal freedom over arbitrary distinctions.

These are my questions for now. After a bit I want to do some posts on mind uploading, cloning, and general modifications as well. Anyone interested in those can have at it and message me.
Decline Posted December 6, 2011

1) Makes sense to me. I will start using that term, tyvm.

2) I don't believe in souls either, so...

3) If these intelligent machines were advanced enough to be considered "members of society" with citizenship, rights, and responsibilities, then yeah, I don't see why marriage should be out of the question.

The mind uploading is a very interesting concept to me. Would it function like a clone (a separate but exact copy), or would there be a movement of consciousness from the meat brain to the hardware involved?
grateful Posted December 6, 2011

you lost me with artificial intelligence having "emotional depth"; could you elaborate?
old_nick Posted December 6, 2011 (Author)

you lost me with artificial intelligence having "emotional depth"; could you elaborate?

We're discussing personhood, which would arguably include sentience. Otherwise you would be left with an emotionless intelligence (which would have benefits as well as deficits). So I am discussing machine intelligence with an emotional depth at least similar to humans, in that our basic emotional range occurs: happiness, sadness, anger, wonder, et cetera. Many cognitive models see sentience as a required trait for intelligence at our level (or at least a byproduct of it), especially in terms of learning and motivation. So several of the current AGI attempts include at least aspects of emotional depth among their project goals. Offhand, some OpenCog contributors are doing some interesting things with risk/reward as emotional motivation, actually weighting memory based upon it.
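The reward-weighted memory idea above can be illustrated with a toy sketch. To be clear, this is not actual OpenCog code; the class and method names here are hypothetical, invented for illustration. The idea is just that each stored memory carries an importance weight, reward feedback nudges that weight up or down, and recall favors the most strongly weighted items.

```python
# Toy sketch of reward-weighted memory (hypothetical, not OpenCog's API):
# each stored item carries an importance weight; reward feedback shifts
# the weight, and recall returns the most strongly weighted items first.

class RewardWeightedMemory:
    def __init__(self, learning_rate=0.5):
        self.items = {}              # memory key -> importance weight
        self.learning_rate = learning_rate

    def store(self, key, weight=1.0):
        """Record a memory with an initial importance weight."""
        self.items[key] = weight

    def reinforce(self, key, reward):
        """Move a memory's weight toward the observed reward signal.
        Strong rewards (pain, pleasure) strengthen the memory;
        neutral outcomes let it fade toward insignificance."""
        old = self.items.get(key, 0.0)
        self.items[key] = old + self.learning_rate * (reward - old)

    def recall(self, n=3):
        """Return the n most important memories, strongest first."""
        ranked = sorted(self.items, key=self.items.get, reverse=True)
        return ranked[:n]

memory = RewardWeightedMemory()
memory.store("hot stove burns", weight=1.0)
memory.store("grass is green", weight=1.0)
memory.reinforce("hot stove burns", reward=5.0)  # painful lesson, strongly reinforced
memory.reinforce("grass is green", reward=0.0)   # neutral observation, fades
print(memory.recall(n=2))  # 'hot stove burns' now outranks 'grass is green'
```

The emotional framing maps onto the reward term: a frightening or pleasurable outcome is simply a large reward magnitude, so emotionally charged events end up dominating what the system remembers.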
old_nick Posted December 6, 2011 (Author)

The mind uploading is a very interesting concept to me. Would it function like a clone (a separate but exact copy) or would there be a movement of consciousness from the meat brain to the hardware involved?

It's still quite hypothetical now, though a technical possibility. Let's say, for the sake of argument, there were fully mature mind uploading at the peak of technical possibility. That would allow for exact copying as well as transfer of consciousness. So in such a case you could have multiple copies of yourself, a copy of yourself in meatspace with a copy that lives in an online world, or transfer of yourself to other bodies in meatspace or to and from online worlds.

What interests me are the possibilities. For one, it would give us an actual afterlife: a real, possible afterlife where one could go when one's body passed, one in which one's living relatives could visit. It even opens up the possibility of reincarnation, where the dead in their afterlife are brought back to new bodies. Another fun possibility would be a hive mind built of copies of yourself. Imagine 40 copies of you sharing consciousness, each living in a different city. Though that sort of technology is a great long way off, assuming the species lives long enough to attain it.

Shorter term, the first generation would likely ride the coattails of a developed machine intelligence, using whatever system it runs on to provide an emulator of sorts for the uploaded mind. In which case, it will most probably be a copy.
BrDevon Posted December 6, 2011

1. How about nonorganic intelligence? The distinction being that the entity manifesting the intelligence is not relying upon living cells to do so.

2. Although the question reflects a bias, I will go with the gist of the question and say I do not believe nonorganic entities have souls. Others may disagree.

3. At this point, I don't have enough experience with the nonorganically intelligent to form a proper opinion. Personally, I think if they are capable of processes that are on the level of coherent thought, they should be able to determine with what/(insert a pronoun for a nonorganic whom here) they live out their use cycle. Besides, so many computers are currently practicing unsafe interfacing, it would be nice to see a few step up and stop computing "in sin." If they wish to exchange their data with the first computer that comes along, they should at least be required to protect themselves from viruses and other bugs. Can you imagine a health code for nonorganic entities?
mark 45 Posted December 6, 2011

i'm seeing commander data and the borg. no, i'm not trying to be simplistic or sarcastic, just putting it into terms i can wrap my head around. i want to follow this thread a little further before i add anything else.
Bro. Hex Posted December 7, 2011 (edited)

3) Marriage. Should "adult" machine intelligence be allowed to wed other machine intelligence? How about humans? If so why? If not why?

What self-respecting machine would even consider marrying a human?

Edited December 7, 2011 by Hexalpa
To`na Wanagi Posted December 7, 2011

I know some humans who are already "married" to their machines. They prefer it to the real thing.
old_nick Posted December 7, 2011 (Author)

1. How about nonorganic intelligence? The distinction being that the entity manifesting the intelligence is not relying upon living cells to do so.

This actually brings up the question of "what is life?" Contrary to popular conception, there is no singular definition of what life is (in layperson discussions the question is typically approached as when life begins). So while they may not be biological, cellular life, they could still to a large degree be considered life. I do like the sound of inorganic intelligence, though. A great descriptor for electronic lifeforms!

2. Although the question reflects a bias, I will go with the gist of the question, and say I do not believe nonorganic entities have souls. Others may disagree.

Can I ask why you feel that would exempt them from souls, and, in your mind, would that make them worse or better off, not having all that worry over destination and afterlife that accompanies soul beliefs?

3. At this point, I don't have enough experience with the nonorganically intelligent to form a proper opinion. Personally, I think if they are capable of processes that are on the level of coherent thought, they should be able to determine with what/(insert a pronoun for a nonorganic whom here) they live out their use cycle. Besides, so many computers are currently practicing unsafe interfacing, it would be nice to see a few step up and stop computing "in sin." If they wish to exchange their data with the first computer that comes along, they should at least be required to protect themselves from viruses and other bugs. Can you imagine a health code for nonorganic entities?

I imagine their pickup lines would be incredible. "Do you prefer I perform my sockets interfacing through the big endian or little endian format?"
Or, "baby, you spool my posix threads, my stack is thoroughly allocated!"

In seriousness, I do think it a rather important question, as the rights and liberties we allow others reflect upon us and our own rights and liberties. Personally, I'd rather enjoy seeing human/machine intelligence marriages. I think it would signify a great leap in how we see personhood and, as a result, how we see people.

i'm seeing commander data and the borg.

I'm aware that that is a Star Trek reference, but that is really the extent of my knowledge of the series.

What self-respecting machine would even consider marrying a human?

I've often asked myself what kind of self-respecting woman would want me. And I've got two of them. Sentience does amusing things.

I know some humans who are already "married" to their machines. They prefer it to the real thing.

What real thing? That really touches on why I began this thread. Do you feel a machine intelligence would be less deserving than a human intelligence? If so, why?

And an amusing link for those interested: a conference is being held in the coming year addressing the legal issues surrounding robotics. http://boingboing.net/2011/12/07/we-robot-conference-legal-and.html
Bro. Hex Posted December 8, 2011

This actually brings up the question of "what is life". Contrary to popular conception, there is no singular definition for what life is... Do you feel a machine intelligence would be less deserving than a human intelligence?...

I think that you have posted some extremely interesting questions, old-nick. I rather enjoy your atypical POV on many aspects of human existence. With regard to your question about machine intelligence being more or less "deserving" than human intelligence, I have to say that I would first need to be convinced that machine intelligence was a reality, or at the very least a real possibility. At this point in time, I am not so convinced, but I will admit to the "remote possibility".
Qryos Posted December 8, 2011

~ Thank you old-nick { I was beginning to think you didn't have a sense of humor }

Machines developing emotion... Yes, programming can mimic reactions to situations, but from what I've personally observed of life forms, that may be difficult to translate into reality.

Insects, for instance, are born generally quite functional, reacting to programming and adapting to the parameters their programming allows. Very efficient, obviously.

Humans... born unfinished, seemingly 'peeved', anger and pain apparently first emotions... first emotions may be significant. Frustration soon follows. Lots of that.

A programmed sentience, would it need the repetition a human does to learn in order to comprehend overcoming frustration and failure? We learn as much from our mistakes as our accomplishments { or are supposed to, such as taking defeat gracefully, being grateful for assistance given, etc. }

If a computerized sentience could develop all of a human's anger, pain, embarrassment, thanks and such within, what... OK, new sentience, give it 3, 4 minutes... Why the heck would it want to be around dopey critters like us?

... Seriously, the speed at which these machines process now, by then we'd be dust beneath their wings.
old_nick Posted December 8, 2011 (Author)

I think that you have posted some extremely interesting questions, old-nick. I rather enjoy your atypical POV on many aspects of human existence. With regard to your question about machine intelligence being more or less "deserving" than human intelligence, I have to say that I would first need to be convinced that machine intelligence was a reality, or at the very least, a real possibility. At this point in time, I am not so convinced, but I will admit to the "remote possibility".

Currently, sentient and sapient AI are not a reality. We haven't created an AI comparable to human intelligence and interaction. There are some AIs which do have aspects on the level of humans, or even better than humans, but no full, comprehensive general intelligence. As for the possibility of one, every indication is that consciousness is merely a side effect of the processes our brain has evolved, such as pattern recognition, action determinism, et cetera. As a result, there is no reason why a fully capable machine intelligence would be impossible. A great project on it that I highly recommend is OpenCog. I'll post a link at the end of this post for any interested.

~ Thank you old-nick { I was beginning to think you didn't have a sense of humor }

Don't let me fool you. I don't have one.

Machines developing emotion... Yes, programming can mimic reactions to situations, but from what I've personally observed of life forms, that may be difficult to translate into reality.

Where do you draw a distinction between a biological machine emulating emotion (us) and a mechanical machine emulating emotion? If the machine is feeling and responding to emotional stimuli, how is it merely mimicking emotion? How would it be any different from our own processes of emotional response to stimuli?

Insects, for instance, are born generally quite functional, reacting to programming and adapting to the parameters their programming allows.
Very efficient, obviously.

Partially efficient, and then only within the niche to which they have evolved. Evolution is a rather inefficient process, as it has no overall concern for efficiency, just functionality. Which can, at times, be a very fine line. But it is definitely there. Just look at the host of atavisms we have.

Humans... born unfinished, seemingly 'peeved', anger and pain apparently first emotions... first emotions may be significant. Frustration soon follows. Lots of that.

Evolution is an ever-continuing process. There is no "finish". We have humans now. Humans will go on to become transhuman and some day posthuman. It is one of the reasons I support transhumanism as I do. I want to see us become intelligent designers, where each individual may decide what is finished for him, her, or it, and have access to the technology to make it a reality. The perfect me is not the perfect you. Strength through diversity.

A programmed sentience, would it need the repetition a human does to learn in order to comprehend overcoming frustration and failure?

The answer is a resounding "it depends". There are several proposed models for achieving a hard machine intelligence, though most of them do indeed require training, at least for the initial generations of the intelligence. There would come a point where they could copy desired aspects of themselves when reproducing.

We learn as much from our mistakes as our accomplishments { or are supposed to, such as taking defeat gracefully, being grateful for assistance given, etc. }

Yes, but we do so in order to minimize our mistakes.

If a computerized sentience could develop all of a human's anger, pain, embarrassment, thanks and such within, what... OK, new sentience, give it 3, 4 minutes... Why the heck would it want to be around dopey critters like us?

For the same reasons I hope we'd want to be around it: to grow, learn, evolve, better ourselves, take joy and pleasure in life, to be less alone in our particular portion of the universe.
To be peers and co-conspirators in making this world and all worlds better.

... Seriously, the speed at which these machines process now, by then we'd be dust beneath their wings.

Unless we augment ourselves, making use of what we'd learn from our machine friends and how their minds work. I highly recommend looking into Theodore Berger's work on neural prostheses.

http://opencog.org/ for the OpenCog AGI project, and http://www.neural-prosthesis.com/ for a more layperson-friendly read on Berger's work and other related researchers' work.
mererdog Posted December 8, 2011 (edited)

If the machine is feeling and responding to emotional stimuli, how is it merely mimicking emotion? How would it be any different than our own processes of emotional response to stimuli?

The difference between "feeling" and "responding to stimuli" is consciousness. If a machine is incapable of consciousness, it can only mimic the outward appearances of emotion. Proving (or disproving) something as inherently subjective as consciousness is a tricky proposition, at best...

Edited December 8, 2011 by mererdog
old_nick Posted December 8, 2011 (Author)

The difference between "feeling" and "responding to stimuli" is consciousness. If a machine is incapable of consciousness it can only mimic the outward appearances of emotion. Proving (or disproving) something as inherently subjective as consciousness is a tricky proposition, at best...

The evidence as it stands indicates that consciousness (or, more appropriately, the illusion of consciousness) is merely a byproduct of other actions: a simplification of the data we are processing and the mechanisms by which we process it. Perceived consciousness is relatively easy to alter; in some cases even rather specific alterations are achievable. The more we learn, the more we see that things such as consciousness and free will are at best illusions cast by those wetware computers in our skulls. Your emotions are merely responses to stimuli. They can even be forcefully induced. I've made mention that my significant others and I choose to induce a continuation of that feeling people get when first falling in love. Naturally, that feeling wanes and the love itself changes over time. We've found we have that as well as that primary feeling of the first fall due to our own self-experimentation. We're altering our wetware computers to induce an electrochemical state. That is all emotion is: electrochemical response to a given stimulus. We're controlling that stimulus, experiencing it on our own terms.
mererdog Posted December 8, 2011

The evidence as it stands indicates that consciousness (or more appropriately the illusion of consciousness) is merely a byproduct of other actions.

No, it doesn't.

The more we learn, the more we see things such as consciousness and free will are at best illusions cast by those wetware computers in our skulls.

Just so you know, that statement is really just an admission of a bias on your part.

I've made mention that my significant others and I choose to induce a continuation of that feeling people get when first falling in love. Naturally, that feeling wanes and the love itself changes over time. We've found we have that as well as that primary feeling of the first fall due to our own self-experimentation.

And without consciousness, you would be unable to "choose" and incapable of "experiencing". I like pie.
old_nick Posted December 8, 2011 (Author)

No, it doesn't.

Yes. Yes it does.

Bernard Baars. "The conscious access hypothesis: Origins and recent evidence". Trends in Cognitive Sciences 6: 47–52.
Ezequiel Morsella and John A. Bargh (2007). "Supracortical consciousness: Insights from temporal dynamics, processing-content, and olfaction". Behavioral and Brain Sciences 30: 100.
Shaun Nichols and Todd Grantham (2000). "Adaptive Complexity and Phenomenal Consciousness". Philosophy of Science 67: 648–670.
John Eccles (1992). "Evolution of consciousness". Proc. Natl. Acad. Sci. U.S.A. 89: 7320–7324.
Bernard Baars (1993). A Cognitive Theory of Consciousness. Cambridge University Press.
J. Kevin O'Regan and Alva Noë (2001). "Acting out our sensory experience". Behavioral and Brain Sciences 24: 1011–1021.
Francis Crick and Christof Koch (1998). "Consciousness and Neuroscience". Cerebral Cortex 8: 97–107.
Francis Crick and Christof Koch (2003). Nature Neuroscience 6: 119–126.
Paul Thagard (2005). Mind: Introduction to Cognitive Science.
Christof Koch (2004). The Quest for Consciousness: A Neurobiological Approach.
Rodolfo Llinás (2002). I of the Vortex: From Neurons to Self. MIT Press.

Just so you know, that statement is really just an admission of a bias on your part.

Yes, I concede to being biased towards what the best available evidence indicates.

And without consciousness, you would be unable to "choose" and incapable of "Experiencing".

Not true on both parts. If you wish to see examples of the first, I suggest you read on choice blindness, inattentional blindness, and generally anything you can find on cognitive faults. For the second, "experience" is merely pattern-matching aptitude over accessed memory. Consciousness is not required.
Unless you're telling us the neural indexing algorithms used by Google have become self-aware?

Further to the point, the link you provided was to a philosopher, not someone involved in neuroscience and the actual study of the biology behind consciousness.
Bro. Hex Posted December 8, 2011 (edited)

Where do you draw a distinction between a biological machine emulating emotion (us*) (*a homo sapiens subject), and a mechanical machine emulating emotion?

An interesting question, indeed. Personally, I see very little significant difference between that machine and that sociopath.

Edited December 8, 2011 by Hexalpa
old_nick Posted December 8, 2011 (Author)

An interesting question, indeed. Personally I see very little significant difference between that machine and that sociopath.

Emulation is to match or surpass a given accomplishment. A sociopath doesn't emulate emotions. The word you're looking for is simulate, which is an imitation of an action or quality. There is a distinction there, and an important one.
mererdog Posted December 8, 2011 (edited)

Yes. Yes it does.

Nope.

Not true on both parts. If you wish to see examples of the first, I suggest you read on choice blindness, inattentional blindness, and generally anything you can find on cognitive faults.

The simple fact remains that you cannot choose if you are not conscious.

For the second, "experience" is merely pattern matching aptitude over accessed memory.

Equivocation. When you speak of you experiencing something, you are speaking of the interaction of your consciousness with your environment. A person who is truly unconscious experiences nothing, not even dreams or the passage of time...

Further to the point, the link you provided was to a philosopher, not someone involved in neuroscience and the actual study of the biology behind consciousness.

If you wish to suggest that a philosopher has nothing to add on this subject, I will be curious as to why you posted all of this in the philosophy section of the forum. Otherwise, I'll have to take that as a fairly lame attempt at well-poisoning.

Edited December 8, 2011 by mererdog