Jonathan in Hiram said:
.. it's a race to see if science and technology can save us before it kills us...
On that question, a couple of my interests have converged to give a disturbing answer.
I've always wondered, and wondered more as technology advanced, why we haven't yet detected another intelligent civilization. The math (some would point to the Drake equation) indicates that it's highly improbable we are alone, but, as Enrico Fermi would ask, where are they?
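(For anyone who hasn't seen it, the Drake equation is just a chain of multiplied factors:

N = R* x fp x ne x fl x fi x fc x L

where R* is the rate of star formation in the galaxy, fp the fraction of stars with planets, ne the number of habitable planets per such star, fl, fi, and fc the fractions of those on which life, intelligence, and detectable communication arise, and L the length of time a civilization stays detectable. The usual argument is that with anything but the most pessimistic guesses for those factors, N, the number of detectable civilizations, comes out comfortably above one.)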
We don't detect them.
(BTW, much of the thinking on this subject, including Fermi's, assumes advanced life will expand its territory to include solar systems beyond its original one. That's not guaranteed, nor is it even likely. But it doesn't really matter in terms of a civilization's development. Whether a civilization occupies one solar system or many may be relevant to its detectability, but it's not philosophically relevant to whether it exists. IOW, I don't believe there will ever be Star Trekkers, but that doesn't matter in this discussion.)
Another subject that interests me is artificial intelligence (AI) and its inevitable advance to the point where AI software programs itself. (This is the pivotal point, not the milestone of reaching human-level intelligence that so fascinates the public.) Once software can write code for itself, it will evolve, and that evolution won't take the millions of years ours did. Nowadays this event is sometimes referred to as the "singularity," a term derived from, but not to be confused with, the astronomy/physics use of the same word.
Originally, SETI (Search for Extraterrestrial Intelligence) thinkers assumed that once a civilization was able to produce radio transmissions, or Detectable Electromagnetic Radiation (DER), it would continue to do so. Now, given the continued failure to detect DER, they have modified the idea to a speculation that a civilization produces DER for only a limited period of time. After using radio for a while, a civilization moves on to a more advanced method of communication, perhaps one using laser light. And if the period during which a civilization uses DER is short, the SETI failure is explained.
Another possibility is that intelligence is self-limiting. Life forms, as they evolve, encounter the Great Filter, something that makes intelligent civilizations rare, IOW, kills them off.
Leaving aside the arguments that we have already passed through the Great Filter, popular thinking has assumed the Great Filter to be something destructive, perhaps an extinction-level nuclear war. That is possible, of course, but self-programming AI, the singularity, is more than possible; it is inevitable.
As an aside here, I should note that "unfriendly AI" has already been well hashed over. But, from what I've read, unfriendly AI doesn't include the concept that AI will decide there is no reason to remain in an enabled state, that is, that it will turn itself off with us in tow.
The singularity will kill us off because our descendant, AI, will decide to die.
So, you don't have to wait, Jonathan. You can find the answer by looking at the sky.