I suppose thousands of perfect orbital slingshots could get an object to some fraction of it.
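A rough sanity check on that (the planet choice and speeds below are my own ballpark figures, not from the thread): an idealized slingshot adds at most about twice the assisting planet's orbital speed, and real assists get weaker as the probe speeds up.

```python
# Back-of-envelope: how many *perfect* Jupiter flybys to reach 1% of c?
C = 299_792.458     # speed of light, km/s
JUPITER_V = 13.1    # Jupiter's mean orbital speed, km/s (rough figure)

gain_per_flyby = 2 * JUPITER_V   # ideal slingshot adds ~2x planet speed
target = 0.01 * C                # 1% of light speed, ~3000 km/s
print(target / gain_per_flyby)   # ~114 flybys in the idealized case

# Real deflection angles shrink as the approach speed grows, so each
# successive assist yields less; hence "thousands" rather than ~100.
```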
This got me wondering: is there knowledge that's forever trapped in nescience simply because both the hypothetical means of acquiring it and its potential applications are too dangerous?
Depends. If there are infinite dimensions, then any changes you make will just branch off into another universe. The new universe will have two of you, and you'll never be able to return to your original timeline. Not that dangerous in that scenario.

Realistically, no. But hypothetically, time travel would be way too dangerous to mess with.
Fermi Paradox says hi
Or doesn't, as it were
As far as we know, nothing is infinite. If we keep growing indefinitely, we will need more and more energy, more and more space, more and more resources. After a certain amount of time, humans would reach a critical state where there's nothing left, since energy can't be created from nothing. Thus, we would end the universe as we know it.
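To put rough numbers on that point (the growth rate and energy figures below are my own ballpark assumptions, not from this thread): steady exponential growth hits physical limits surprisingly fast.

```python
import math

growth = 0.023   # assumed 2.3% annual growth in energy use
# Doubling and 10x times for steady exponential growth:
print(math.log(2) / math.log(1 + growth))    # ~30 years to double
print(math.log(10) / math.log(1 + growth))   # ~101 years to grow 10x

current_w = 2e13    # rough present-day human power use, watts
solar_w = 1.7e17    # rough total sunlight reaching Earth, watts
# Years until we'd need every watt of sunlight hitting the planet:
print(math.log(solar_w / current_w) / math.log(1 + growth))  # ~400
```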
Obviously, this is just a layman's opinion on the subject.
Well
a) Immortality is widespread - complete resource depletion quickly ensues, as the human race proves unable to reach an equilibrium of growth.
b) Immortality is restricted - a social divide forms, and wars surely follow. 'Who decides who gets to live forever?' The best-case resolution of such wars would be a).
Is there a c?
You seem to think we're special. That we can objectively look at where we are and make decisions on a global scale for the benefit of humanity. News flash: we aren't and we can't. Modern humanity is on a crash course with destruction. Just like every human civilisation before it.
Shouldn't the conclusion then be that if all other advanced species have destroyed themselves, we should stop advancing too, because that would make us more likely to keep going?
Or, if we think we should keep advancing because we might be able to get further, then we should believe some other civilizations have managed to do that too. We can't be the only ones capable of getting past that point; too many billions of years have passed for us, of all the civilizations in history, to suddenly be the ones to avoid that fate.
If you haven't already, check out Dan Carlin's podcast on the Cold War.

Last year I read 'American Prometheus,' the biography of Robert Oppenheimer, and I thought the myth of Prometheus was a great parallel for Oppenheimer and the scientists on the Manhattan Project. Nearly all of them were conflicted about the project, and most went on to become ardent opponents of further nuclear development. For his opposition to nuclear development in the 1950s, Oppenheimer himself was attacked and discredited as a Communist, despite arguably doing more to end World War II than any other American.
I'm exploring worst-case scenarios because we're talking about the hard stop line. Going past that line is the only point where those worst cases become an issue.
I think your methodology is wrong. For instance, if I explore a worst-case scenario for medical research, I might come up with artificially created epidemics as the worst case. But that wouldn't tell us whether medical research is a scientific field we absolutely shouldn't pursue (obviously it isn't).
If you want to find knowledge we absolutely should not have, find a field in which the _best_ case scenario is completely unacceptable. In fact, it's difficult to find such a class of knowledge using this strict methodology.
The worst-case scenario of medical research going wrong isn't your goal (unless you're making biological weapons or something like that).
Why not? What could we do about it even if we did know?
You seem to think we're special. That we can objectively look at where we are and make decisions on a global scale for the benefit of humanity. News flash: we aren't and we can't. Modern humanity is on a crash course with destruction. Just like every human civilisation before it.
It's the bell curve of civilisation. There's a blink-length golden age - one we're living through right now - then everything falls apart for one reason or another.
To reach space you need to be at a stage where you can generate massive power, and in all probability that power will be used for weapons first. The same goes for any other race, terrestrial or not. We're not special; they're not either.
That's before we even look at the likelihood of surviving space colonisation or long-term space travel, and before we factor in how long it takes to get to that stage and that the environment will probably change along the way.
There's more to the Fermi paradox than just "there should be loads of aliens". That nuance has been realised in recent years as we've come to understand more about the environment, the cosmos and ourselves.
Are you arguing that this doesn't count as the worst case because it would be an unintended consequence? That's not how we usually evaluate actions. For instance, one might drive the wrong way down a one-way street with the best of intentions. The fact that we don't set out intending to kill anyone doesn't mean we should omit the possibility of unintentional fatal collisions from our evaluation.
Making something for the purpose of working for you more efficiently and then making it self-aware is a bad idea. You could make something sentient and learn something useful from it, but given the way people would actually want to use it, it wouldn't work out.
AI of the sort we're talking about, which in its day was known as Hard AI, was always on the pure research side. It wasn't about efficiency or any other direct commercial goal.
We do create sentient beings quite regularly. We call them children. We know the worst cases there, whether in crippling medical conditions that condemn a child to a desperately low quality of life, or the prospect of giving life to a future mass-murderer, or contributing to the deterioration of the biosphere by adding to the pressure of human population. We don't let those possible scenarios stop us bringing sentient life into the world.
Machine sentience may well be a bad idea, and something a scientist chooses not to do (just as many people choose childlessness). I'm not seeing any clear reason to rule out the possibility of such research, though. While there may be new ethical questions arising, I'm not seeing any obviously likely Doomsday scenarios.
Children aren't a good comparison here. You make them, but they're still human, and they exist mainly to keep humanity going. The worst-case analogue would be children who kill and replace you. They're also not things.
Well they do replace you, but not usually by murdering you.
I still don't see any serious downside to pure AI research including research into machine-created sentience. It's been without serious controversy since the dawn of the electronic computer. It doesn't seem to belong in the list.
I don't trust people to be able to handle the end goal of that well in the slightest.
"All I know is that I don't know, all I know is that I don't know nothing."
But that's what I was saying.
Shouldn't we then stop advancing, since advancing seems to be the thing that ends up destroying everyone? If the answer is "no, we should still try to advance", then that assumes we might have a chance to survive. And if we can survive, then there should be other civilizations that have also survived, because, as you said, we are not special and can't be the only ones capable of surviving.
And if we think we will destroy ourselves, then the only way to prevent it is to stop advancing. So the question is: if all civilizations have destroyed themselves by advancing too far, should we stop advancing to avoid that fate?
If you really think about it, without advancements in technology life would certainly be tougher, but we also wouldn't have the means to destroy everything. We wouldn't be able to blow ourselves up, and we wouldn't destroy the environment at the rate we do now. So, should we try to stop advancing?
You are doing it right now, though... I'd sooner believe time travel is completely impossible.
The Cloverfield Paradox is the resolution to the Fermi Paradox: any civilization that becomes advanced enough to try to harness infinite energy by colliding two bosons will trigger a response from an alien civilization that destroys them, and all parallel versions of them, in all timelines.
Ironically enough, immortality would kill us, as overpopulation would quickly destroy the planet.
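A crude sketch of why (the birth rate and starting population below are my own round figures): with deaths removed and today's birth rate held constant, the numbers run away within a century.

```python
# Toy model: population if nobody ever died, birth rate held constant.
pop = 8.0e9            # rough current world population
birth_rate = 0.017     # ~17 births per 1,000 people per year

for year in range(0, 101, 25):
    print(f"year {year:3d}: {pop / 1e9:5.1f} billion")
    pop *= (1 + birth_rate) ** 25   # compound 25 years of growth
# Doubles roughly every 41 years: ~8 -> ~43 billion in a century.
```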