Author's Commentary: "The Lights of Guldenberg"

Alien environments should be a deep well for the fiction writer. There are so many great sci-fi stories that challenge the imagination with the possibilities of a trillion different planets orbiting a trillion different stars, making for a theoretically limitless canvas to build a setting on. That's probably a big enough challenge in its own right. I decided to pile another challenge on top of it this week by making my two characters technological beings who project their consciousness to the moon they end up exploring. A doubly difficult challenge, then.


Truth be told, I think the setting is the easier of the two challenges. Plot intersects with setting in very useful ways by placing constraints on the characters, usually physical constraints (think dying of thirst or being eaten by sandworms on Arrakis, or dealing with massive tidal waves on that planet in Interstellar). And there are definitely plot constraints like that in this week's story.


I wasn't expecting such a challenge with the AI characters. This is a problem that I'm not sure has been fully fleshed out in sci-fi yet. How does the writer get the reader to care about an AI the way they would a human character, specifically with respect to the "death" or "danger" elements of plot, when there's theoretically no danger or death because the character isn't really alive to begin with? I'm thinking of robot character "deaths" in sci-fi, but those are usually physical units that get destroyed: think Commander Data getting sucked out into space. And to the extent that there's real pathos there, the character is usually highly anthropomorphized, and the sadness over the artificial being's loss is at least partly carried by the emotional reaction of the human characters it leaves behind.

But if we really think about what future beings will be like if AI becomes sentient, losing a body might mean about as much to them as losing a favorite shirt means to us. The problem for the writer, in that case, is how do you get the reader to care? What are the stakes for such a being? After all, this seems a much more logical future form for AI than R2-D2, who can get blown up (and repaired, within limits). A Siri or Alexa runs on a smartphone, a smart speaker, and a desktop all at once, and in the future will likely power a walking, talking body (or robo-dog) too. Is anthropomorphizing the only way (No. 5 is alive!)? Rowe needs input.


Ultimately, the reader has to care. And human buttons need to get pushed. Otherwise the story will have no oomph.


I definitely tried to make the setting cool, and this week's story was my solution to the above problem, at least in this one case. Kinda, sorta anthropomorphized a bit, but not entirely. Fun for me to grapple with, and hopefully for the reader as well.


Rowe
