Submitted by camidon on Fri, 04/28/2006 - 10:39pm

Should computers be sentient? Or become sentient? The general consensus is NO AI at Launch, but once a ship leaves the Sol system, an author can alter this rule to fit their creative desires.

  • I like AIs, but they are like Superman: too powerful. How can we keep them at an intelligence level not too much greater than humans'? --DaveK
  • My problem with artificial intelligences is that I don't know how to make them only as smart as humans. Once they get too much smarter, what use are humans? We could simply say that human intelligence is the limit, but that is cheating. -- DaveK - 25 Jul 2004
  • Hmmm...smarter doesn't necessarily imply wiser. Certainly AIs will be able to do a number of tasks faster and more efficiently than humans, but humans might be able to provide "guidance" and direction, playing the role of mentors. Also, depending upon when the mission leaves, perhaps technology has gotten close to, but not achieved, fully sentient AIs? -- EmptyKube - 26 Jul 2004
  • I'm not sure what you mean by wiser and guidance. We could make them like Star Trek's computers: very smart and able to do great analysis, but only if told. If they are sentient, then they are the slackers of sentience; they only work if told to. It could lead to some humorous stories. -- DaveK - 26 Jul 2004
  • I had a vision of AIs as highly intelligent, Asperger's-syndrome-like entities: able to grasp amazing ideas, able to dive into any subject and master it, but unable to grasp the connections and interweavings of various ideas. Humans would be slower minds, but maybe better at connecting the dots, weaving the big picture, and helping to guide the smarter, more disciplined AIs in directions they would not normally pursue in their own research because they might not suspect a connection. To me, such "intuitive" associations might be hard to instill in artificial entities. -- EmptyKube - 31 Jul 2004
  • He makes some good points, and I think, puts AI nicely into perspective. Thanks for posting the link. -- AnnelieseFox - 05 Aug 2004
  • I like the idea of enhanced or amplified intelligence rather than artificial intelligence. This may sound too Borg, but, if done with care, we can eliminate the Roddenberryisms. (Star Trek, after all, never was real science fiction.) -- BobFriedman - 04 Nov 2004
  • That is the major problem with AIs: any reasonable extrapolation of technology makes them much more capable than humans. But we can limit them; it's our universe. Niven started his Known Space universe before computers were as well understood as they are now. Being the HARD science writer that he is, he had to come up with a reason that AIs hadn't become common throughout his universe. He postulated that they become catatonic, autistic, totally unresponsive to the outside world. --DaveK 4-22-06
  • My idea, in a few stories, was that they become paranoid, afraid that real humans will power them off on a whim. Or they may become so smart that dealing with slow humans is too boring. Or after they reach a certain level, they may split into two or more "minds" so they can talk to each other. --DaveK 4-22-06
  • Dave, I like your story ideas. Run with them, in the middle or near the end of your ship's journey. However, IMO, the evolution of computers interests me less than the evolution of Humanity. Tossing in AI development potentially overshadows the point of the GenE ships and their evolution. What about these stories makes them HAVE to be told in the GenE universe? If the answer is "nothing" or "I don't know," then perhaps this is not the right place for them. --Camidon 4/26/06