Singularity Sky didn't make that much of an impression on me, except for the fact that I could finally say that I'd read a Stross book and I still wasn't sure that I understood what all of the fuss was about. The Atrocity Archives made a much bigger impression, but the writing was a bit more "first bookish," which is especially odd considering that it was his third or fourth novel, according to Wikipedia.
Every once in a while I read a book that shows me the future of humanity, and Accelerando was one of those novels. Holding this book in one hand and The Foundation Trilogy in the other (I've got an omnibus), I can say that the future will probably not contain a twelve-thousand-year human empire that remains at a generally static technological level within the realm of our understanding. The future of the human race is much more likely to look like what Stross postulates than what Asimov does.
Technology, especially computers, will continue to advance, and this was the first book that I've seen that illustrates the future of computer technology in a way that I think shows glimpses of what the future will be like.
(Some small spoilers follow from this point).
Granted, there are a few things that left me shaking my head. For example, when Manfred loses his glasses and can't remember his own name, I felt jolted by disbelief. He must have something running on his wetware, especially in the unintegrated state in which he existed at the time.
My mind absolutely refused to believe that he could even have a conscious internal monologue without having something to reference himself with. That's basic operational information, and it should have been drilled into him before he went fully wired. Does he keep his glasses on during sex, and if not, how does he remember what he's supposed to do? Was there ever a period when, as a child, he had to exist without his technical assistance?
Of course, in the same situation, the glasses carry on and try to complete his business without him, something that I also found unbelievable. Why should his computer equipment be able to make response decisions without him before the advent of Turing-compliant AIs?
Next, when switching between the real world and the virtual mind spaces, the mind spaces are amazingly benign. The programming environments are insanely complex, but the people running as software in them seem to treat them as just as permanent as the real world. I think it would have been interesting to see a fatal exception occur at some point, perhaps in the government of the Ring Imperium. If ever there was a government that couldn't stop for a reboot, it would have been that one.
His current book is Halting State, though. Maybe he goes into those issues more deeply in that novel.
Also, as far as I can tell, at some point the majority of everything is running in RAM, and it doesn't seem like people are saving nearly often enough.
In the midst of these massively complicated virtual environments, there were surprisingly few invisible software dangers. Once someone exists entirely as a virtualization, couldn't they be infected in the same way as a computer program today? There is a mention of religion as an infectious meme, but after the uploaded age, spam and spyware seem to stop. I think that our experience with the computers we use today has shown that there's nothing resembling a perfect computer security system.
Think about how a self-aware spam might act. For a completely virtual person, an infection might make you actually desire to buy the product at an emotional or root-user level, or hand over all of your account details and then authenticate the transaction. Or you could kill one of the unique ghosts with a well-placed exploit.
It would have been interesting to me if more virtual snooping had happened through scanning the virtualized minds of the characters. Aineko shouldn't have had to model people to understand them: it should have been able to evaluate their mental state by reviewing the back end of the system. It should have been able to read minds.
Furthermore, people seem to just accept that the people around them are who they say they are. Amber is always Amber, Mannie is always Mannie, and Sirhan is always Sirhan. When one character talks to another, especially in the simulated environments, they don't seem to worry whether the person they're talking to is really the person they want to be talking to. Identity is taken at face value, even for the new manifestations of the dead.
This seems oddly trusting, considering that just about anyone can build a body from scratch to look like just about anyone they want. They can't possibly have unique DNA encoded identifiers at that point, because that's just copyable and transferable information.
Why should Amber have to worry about the debts of the Ring Imperium? The only person who held the keys to that entity was dead. Hell, even today we can claim that charges on a credit card were fraudulent. I can't even imagine the problems with identity authorization in a completely virtualized society where multiple copies or mimicked copies can exist.
Heck, existing as a virtual simulation seems to just invite self-revisionism. Is it still my charge if the part of me that ordered it no longer exists? I might not even remember it if I've purged my memory along with the money-spending bits.
True, the characters in the book are the super-ultra-conservatives in regard to self-revisionism (and I can't say that I wouldn't be one of them, either), but it still seems odd that they can't hack their own genome and personality by little Manni Jr.'s time, especially as they go through so many bodies.
One last thing about the virtual spaces: why force everyone to use the same context? With that much processing power, surely they could all exist comfortably in whatever environments make them comfortable and still communicate or interact. Why wear chafing pants? Why not make the context user-specific? While Amber wears the pointless and constrictive royal garb, why can't I see things as my own little private garden of paradise, where I can wear robes of silk or nothing at all at my own choosing?
Finally, there is the concept of the Matrioshka Brain. I don't like the Matrioshka Brain. It seems stupid to reject physical reality, although I understand the drive of the non-human intelligences to expand their computational power until it utilizes every molecule in a solar system. The ultimate housing development, as it were.
I think that I'm caught up in the conservative notion that planets should last forever. I like the cool concept of the empty space and the gigantic livable spheres floating around.
So, while I understand the Matrioshka Brain concept, I don't want to see my solar system destroyed. There's all that other matter over there around Proxima Centauri. I think they should go eat that first.
Of course, in my universe, the manipulation of large quantities of matter over long distances isn't nearly as much of a problem as it is in Stross' universe. After all, the addition of multiple kinds of FTL solves so many complicated problems in terms of resource allocation. It isn't worth breaking down the planet Mercury for matter when you can simply import more than you need in a more usable form, whatever that form is.
I mentioned up at the top of this post that Stross convinced me that the singularity isn't going to happen, and I should explain that comment.
I don't think that there is an exponential rate of technological progress. It's a hard thing to judge, and I do think that we're on a consistently upward trend, but I see no evidence that it's better than linear, except as scaled by population size, especially now that we have a firm grasp of basic scientific principles.
I also don't think that we're coming up with that many revolutionary concepts. Instead, the vast majority of technological innovation seems to stem from our basic research into the world around us as determined by the forces that we already understand. I do suspect that there are entire levels of understanding that we'll eventually advance to that will allow us to do things that right now we can only dream about.
To me, that's the actual potential for singularity: a change in the rate of paradigm revisionism in regard to how we view the universe.
Otherwise, I think that humanity will remain basically the same, with increasingly complicated technology surrounding it.