There’s just no other way to put it. The part of him that went through the trouble of writing up theses ages ago could probably come up with a theory or two on what’s really going on, but he tells it to shut up. The part of him that took drugs at about the same time in the past has some things to say as well, but at no point can Alan remember hallucinating a meeting with another self, so all experience is moot.
This is real.
Aw man. I already gushed all over this earlier, but I’ll do it again here. I love the reversal on the common assumption of Tron’s reverence for Alan, and of COURSE it would be Flynn’s fault (and though I joke about that, it was so bittersweet that it really IS Flynn’s fault). It cracked me up so bad when Tron said that maybe he should calibrate Alan to HIM, and the line about wearing one’s soul on one’s back …
But man. What still grabs me the most is the line about Tron being at PEACE. I can’t believe how ODD that thought is, and not just because of his particular history, but he’s a security program - by definition, he’s always on the front lines. It seems like such an amazing and touching thought, and it’s the greatest gift I could imagine Alan giving him.
Attendant warnings and disclaimers may be found on the fic page.
It wasn’t so much that hope died, but that Sam realized it had only been wishful thinking all along.
For Winzler, and the prompt (misinterpreted):
The world ends. Nuclear wasteland, Mad Max style, etc. One day Sam comes out of the computer and everything is gone. The power will run out soon/the arcade is in danger/etc so Sam hurriedly brings a recovering Tron(zler) out to save him.
Tron is OK at first but slowly reverts to Rinzler under the stress of survival — and Sam eventually begins to lose it as well. In the end we’re left with 2 bugfuck crazy survivalist murdermachines roaming the wasteland together.
Those of you who RP, do me a favor and respond to this post with the one character (and their source) that you feel either represents you the most, or is the closest to your heart.
:D~ /and this suddenly became a picspam
There’ve been a number of RP characters whom I’ve carried close to my heart over the years, but the main one for the whole of this year, the one who best represents me and is closest to my heart, is Ram from original!Tron.
In Dan Shor’s words: “The great part of being RAM was simply his sincere desire to be of assistance to others. He was sort of like a Golden Retriever whose sole purpose in life is to please. He had unabashed and unconditional affection for Flynn and for Tron.” That wasn’t all, though. Besides his openness and loyalty, besides his selfless wish to help wherever he could, he was talented, pragmatic, hard-working, adaptable, innovative, and uncomplicated in a way that was serene rather than naive.
He survived the Game Grid for 200 microcycles. He knew Tron’s lightcycle tactics well enough to pull off a really dangerous tag-team double block against that one last bad guy at nothing more than a word — and to be trusted to do so.
…Seriously, look at that tiny space he got his lightcycle into.
His was the red lightcycle, the one that usually ended up last in the chase, and thus the one doing all those hair-raising evasive maneuvers like dodging directly between two tanks and getting away clean.
And he continually seemed to be enjoying the heck out of his driving skillz (which, by the way, is a trait shared by many of Dan Shor’s characters: see Black Moon Rising and Bill and Ted’s Excellent Adventure, for example~~).
He greeted new conscripts when they showed up in the holding cells, walked the line between calming their initial panic and being frank about the danger they were in, and befriended them even though most of them probably weren’t going to last very long.
He was an actuarial program. He was written to do math and take names, and yet he adapted well enough to survive in the Game Grid for as long as he did: longer than Tron had even been operational, assuming he got time to actually do some work at that big insurance company before being appropriated.
He wasn’t one to sit still or let his dire situation paralyze him. In his cell, he practiced his fighting moves all the time — and wasn’t averse to showing off, bless ‘im — and still spoke fondly of his previous life helping Users plan for their future needs.
He had the insight to recognize that Flynn was different, even though Flynn had honestly not made a very dignified start to his Game Grid career, what with walking into a forcefield and all.~ And when Flynn did survive his first match, Ram was thrilled to see that he’d made it. Srsly people just look at that face.~~~~~
He also wouldn’t let Tron brood, keeping up a conversation without being confrontational even when Tron was all ~gloom&doom~.
For all that, though (and I am full of meta on this point), he relied on Tron’s faith and strength when his own hope wavered.
He believed in the Users, he’d worked with the Users, but Tron was the leader he would have followed to the ends of the system if need be.
Which… basically he did. They’ve just escaped, and his friends are going after the MCP by themselves? He doesn’t say “welp, it’s been fun, but I’d rather leave and hide and maybe survive the movie”. He chews it over for a few moments in silence while Tron talks about the IO tower, and then follows with a smile and without a second thought.
He was a program who enjoyed whatever he could in the difficult life the conscripts had. Memories, skills, friendships, the thrill of the escape… when they find the power spring, he glances over at Tron before imbibing; part of his enjoyment of this windfall comes from his friends enjoying it too.
He was the only program shown to use a disc for something other than fighting. And he even let Flynn borrow it.~
Ram also had a knack for User idiom that must’ve confused the bits out of Tron. XD~ He yelled “So long, sucker!” at the lightcycle he and Tron trapped, called the Recognizers ‘demons’, and told Flynn to ‘put a cork in it’ at the power spring. I’ve had a lot of fun with his speech patterns in rp. :D
And what happened to him was tragic.
His last words, while holding the hands of an actual real User who’d somehow wound up inside the system and was right there with him? His last words weren’t “help me.” They were “help Tron.”
And they didn’t even rerezz him for Legacy. So obviously I had to do it myself.~
Ram, ladies and gentlemen, programs and Users as Eckert would say. He’s the program who’s been my name and face since Legacy came out and I picked him up, and though I’ve brought in other characters after him, he’s definitely the one closest to my heart. Represent. \o/
You know, I never had any impression of Ram at all except as a sort of fluffy Tron sidekick. But you have just given me a whole new appreciation for the program that has some really serious kick - I may actually try my hand at including some Ram in fics in the future now! Thank you! That was a really great analysis.
So I woke up feeling sick today. Not THAT sick, a little sick, but FUCK THAT. Called in sick (unpaid), went to the grocery and got orange juice, Airborne, and Theraflu. As soon as I’m done with this breakfast shake I plan to sleep for 5 hours. I am NOT getting sick!
Oh man you have all my sympathy. I am SICK SICK SICK and I have SO MUCH NEVERENDING CLEANING and I must Yuletide. I wanna Yuletide dammit, not be sick or have to clean all the gorram things.
Me too, guys. Me too. ;____;
*piles blankets on all of us T_T*
Holy cow, it’s an epidemic O.O *huuuuuuuuuuugs you all* Sorry to hear about all the yuckiness! I had my own bout of it earlier this month. *flings chicken soup at you all*
Rather, Yudkowsky’s Friendliness Theory, drawing on biopsychology, holds that if a truly intelligent mind is motivated to carry out some function whose result would violate a constraint imposed on it, then, given enough time and resources, it will develop methods of defeating all such constraints (as humans have done repeatedly throughout the history of technological civilization). The appropriate response to the threat posed by such an intelligence, therefore, is to try to ensure that it specifically feels motivated not to harm other intelligent minds (in any sense of the word “harm”), and to that end deploys its resources toward devising better methods of keeping them from harm. In this scenario, an AI would be free to murder, injure, or enslave a human being, but it would strongly desire not to, and would do so only if it judged, according to that same desire, that some vastly greater good for that human or for human beings in general would result (an idea Asimov explores in his Robot series stories via the Zeroth Law). An AI designed with Friendliness safeguards would thus do everything in its power to ensure humans do not come to “harm”; to ensure that any other AIs that are built would also want humans not to come to harm; and to ensure that any upgraded or modified AIs, whether itself or others, would also never want humans to come to harm. It would try to minimize the harm done to all intelligent minds in perpetuity. As Yudkowsky puts it:
“Gandhi does not want to commit murder, and does not want to modify himself to commit murder.”
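The stability argument behind that quote can be made concrete with a toy sketch (purely illustrative, my own construction, not anything from Yudkowsky’s actual work or the article): an agent evaluates any proposed replacement for its own goal function using its *current* goal function, so a rewrite whose predicted consequences its present values abhor gets rejected, even though the rewritten agent would happily endorse it.

```python
# Toy illustration of goal stability under self-modification (NOT a real
# FAI implementation): the agent scores proposed replacement utility
# functions with its CURRENT utility function, so the "murder pill"
# rewrite is refused even though the rewritten agent would endorse it.

def current_utility(outcome):
    # The agent's present values: harming humans is strongly dispreferred.
    return -1000 if outcome["humans_harmed"] else outcome["goals_achieved"]

def predicted_outcome(utility_fn):
    # Crude world model: an agent acting on utility_fn harms humans
    # exactly when that function rewards doing so.
    harms = utility_fn({"humans_harmed": True, "goals_achieved": 5}) > \
            utility_fn({"humans_harmed": False, "goals_achieved": 5})
    return {"humans_harmed": harms, "goals_achieved": 5}

def accept_modification(proposed_utility_fn):
    # Key step: the PROPOSED function's consequences are judged by the
    # CURRENT function -- Gandhi evaluating the murder pill with Gandhi's
    # present values.
    return current_utility(predicted_outcome(proposed_utility_fn)) >= \
           current_utility(predicted_outcome(current_utility))

# A hypothetical "ruthless" rewrite that rewards harming humans.
ruthless = lambda o: o["goals_achieved"] + (100 if o["humans_harmed"] else 0)

print(accept_modification(ruthless))         # -> False (murder pill refused)
print(accept_modification(current_utility))  # -> True  (keeping values is fine)
```

The whole argument lives in `accept_modification`: because the judging function never changes hands, a Friendly agent is (on this theory) a stable fixed point of self-improvement rather than something that must be caged from outside.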
(Yes, just like in my other re-blog, I think this too is potentially an interesting topic and resource for writers looking to tackle future scenarios!)
I thought this article would be a nice complement to the one mentioned in my last re-blog on Drone Ethics. I had been reading up on FAI for completely different reasons just a day or two before, but it seems to me that many of the ethical questions tied up with robot usage also relate to the AIs which drive them. While FAI deals with something far more advanced than drones — where the AI is capable of experiencing such complex things as “motivation” — nevertheless, when one even begins to contemplate scenarios where robots are able to make certain command decisions in the field, one will have to consider what sort of AI will drive them and how it will decide between its potential actions.
Robots are replacing humans on the battlefield—but could they also be used to interrogate and torture suspects? This would avoid a serious ethical conflict between physicians’ duty to do no harm, or nonmaleficence, and their questionable role in monitoring vital signs and health of the interrogated. A robot, on the other hand, wouldn’t be bound by the Hippocratic oath, though its very existence creates new dilemmas of its own.
Robots wouldn’t act with malice or hatred or other emotions that may lead to war crimes and other abuses, such as rape. They’re unaffected by emotion and adrenaline and hunger. They’re immune to sleep deprivation, low morale, fatigue, etc. that would cloud our judgment. They can see through the “fog of war”, to reduce unlawful and accidental killings. And they can be objective, unblinking observers to ensure ethical conduct in wartime. So robots can do many of our jobs better than we can, and maybe even act more ethically, at least in the high-stress environment of war.
This is the first paragraph of an article in the Atlantic about the ethical dilemmas around using robots or drones for military purposes. Important read!
(Though the reblog quote says it is the first paragraph of the article, I added another paragraph from a little later on for balance and to indicate that the article is quite thorough in treating all aspects of robot utility, both the good and bad.)
It is a little lengthy for casual reading (I’m only halfway through it right now myself, but will have to finish it tomorrow as I have an early morning conference call to wake up for), but beyond the very important and insightful points it targets, I had the odd thought that it is also an excellent resource for any writer looking to tackle future scenarios, from the near-term to the far. Even if a speculative fiction writer does not invent full-fledged sentient or fully-autonomous robots for their future, there is no doubt that we will become an ever-increasingly automated society, and this is a great peek into all the difficult questions that arise on all levels - in international agreements and laws, in national level politics and economics, in society and individual psychology, etc.
Each point is clear, pithy, and succinct. I won’t say it’s easy reading due to the subject matter addressed (and the extra thinking it engenders), but it is very well written and easy to digest.
“One downside of the e-social revolution is that if all this ubiquitous interactivity leads people to shape their own opinions more and more based on the opinions of others, then we will be thinning out the “intellectual gene pool” of ideas and diverse thinking, and unintentionally putting ourselves and our culture at immense risk of catastrophic loss, either through miscalculation or simply a stampede of sentiment.”—
Paul Higgins: By re-blogging this am I adding to the problem?
Am I now contributing to the problem too by continuing the reblogging chain? But hopefully the article will tickle enough brain cells into pondering the problem and coming up with interesting and ‘uncorrelated’ opinions/solutions before sharing!