
Is there knowledge that humanity shouldn't know?

besada

Banned
I suppose thousands of perfect orbital slingshots could get an object to some fraction of it.

Well, every speed is some fraction of c. :) Our biggest fraction of c currently is held by the Helios probe, which reached 0.023% of c during a slingshot around the Sun. So a very small fraction.
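A quick back-of-the-envelope check of that figure, assuming Helios 2's commonly quoted peak heliocentric speed of roughly 70 km/s (that exact value is an assumption, not something stated in the post); a minimal Python sketch:

# Sanity check of the "0.023% of c" figure, assuming a peak speed of
# roughly 70 km/s for Helios 2 (assumed value).
C_KM_S = 299_792.458       # speed of light in km/s
helios_peak_km_s = 70.0    # assumed Helios 2 peak heliocentric speed

fraction = helios_peak_km_s / C_KM_S
print(f"{fraction:.6f} of c, i.e. about {fraction * 100:.3f}% of c")
# -> 0.000233 of c, i.e. about 0.023% of c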
 

Triteon

Member
I think the ability to create thermonuclear explosive devices is knowledge that we would have been better off without.

Over a long enough timeline humanity will use these weapons again; my only hope is that it's an independent actor rather than a nation-state firing them off.


But then I think of all the other applications that came from nuclear experimentation: power, medical.

Time travel also seems like a net negative.

Some kind of medical breakthrough in immortality at this time also seems bad, mainly because society is not in any way ready for it.
 

llien

Member
This got me to wondering if there is knowledge that's forever trapped in nescience simply because the hypothetical means of acquiring it and the potential application of it are simply too negative?

There is, and I think it is obvious. But its impact is more about avoiding moral issues, like morons applying stereotypes.

It's not of a "we are too scared to discover how to create a black hole" type, at least not yet.
 

McLovin

Member
Realistically no. But hypothetically time travel would be way too dangerous to mess with.
Depends. If there are infinite dimensions, then any changes you make will just branch off into another universe. The new universe will have two of you, and you'll never be able to return to your original timeline. Not that dangerous in that scenario.
 
In theory, all knowledge should be ours. As an intelligent species, our ultimate goal should be to know and understand everything. Just because it's not obtainable doesn't mean you shouldn't try to reach it.

The real question isn't "Is there knowledge that humanity shouldn't know?"

It's at what cost?

It's arguable that without the Nazi scientists who came over after the war, we may not have reached the stars. Likewise, many medical advancements might not have come without the death and suffering caused by the Nazis and their experiments.

Humanity as a whole benefited from the work of Nazis. The question is whether it was worth the suffering and death of thousands of people.

It's not something we've really come to terms with, and I don't think we ever will. Let me rephrase my earlier statement: the human race as a whole benefited, but our humanity was lessened by it.
 

The Mule

Member
As far as I remember, nothing is infinite. As long as we keep growing indefinitely, we would need more and more energy, more and more space, more and more resources. After a certain amount of time humans would eventually reach a critical state where there wouldn't be anything left, since I don't recall the possibility of energy being created from nothing. Thus, we would end the universe as we know it.

Obviously, this is just a layman's opinion on the subject.

The universe is going to end one way or another, regardless of anything we do. Heat death, big rip, big crunch, take your pick.

For any of those endings, why does it matter how far our species spreads across the universe?

Well

a) Immortality is widespread - complete resource depletion quickly ensues, as the human race is unable to reach an equilibrium of growth (a rough sketch of this runaway growth is below).

b) Immortality is restricted - a social divide would form, and wars would surely follow. 'Who decides who gets to live forever?' The best-case resolution of such wars would be a).

Is there a c?

c) A post-scarcity world like the Star Trek or Culture universes, i.e. every physical need is taken care of by machines of loving grace.
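As a rough illustration of option (a) above: remove the death rate and even a modest birth rate compounds, so any fixed stock of resources eventually runs out. This is only a toy sketch in Python; the starting population, growth rate, and resource figures are made-up illustrative numbers, not claims.

# Toy model of option (a): immortality removes the death rate, so the
# population compounds at the birth rate alone. All numbers are invented
# purely for illustration.
population = 8e9          # assumed starting population
birth_rate = 0.01         # 1% births per year, zero deaths
per_person_use = 1.0      # resource units consumed per person per year
total_resources = 2e12    # assumed finite stock of resource units

year = 0
while total_resources > 0:
    total_resources -= population * per_person_use
    population *= 1 + birth_rate
    year += 1

print(f"Resources exhausted after roughly {year} years")
# With these invented numbers the stock lasts only about 126 years, and
# no finite stock survives indefinitely compounding growth.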
 

SomTervo

Member
Shouldn't the conclusion from that then be that if all other advanced species have destroyed themselves, we should also stop advancing, because that would make it more likely that we keep on going?

Or if we think we should try to advance because we might be able to go further, then we should believe some other civilizations have been able to do that too. I mean, we can't be the only ones who will be able to go past that. Too many billions of years have passed for us, of all the civilizations in history, to suddenly be the only ones able to avoid that fate.
You seem to think we're special. That we can objectively look at where we are and make decisions on a global scale for the benefit of humanity. News flash: we aren't and we can't. Modern humanity is on a crash course with destruction. Just like every human civilisation before it.

It's the bell curve of civilisation. There's a blink-length golden age - one we're living through right now - then everything falls apart for one reason or another.

To reach space you need to be at a stage where you can generate massive power. In all probability that power will be used for weapons first. The same will go for any other race, terrestrial or not. We're not special, and neither are they.

That's before we even look at the likelihood of surviving space colonisation or long-term space travel, and before we even factor in how long it takes to get to that stage and that the environment will probably change.

There's more to the Fermi paradox than just "there should be loads of aliens". That nuance has been realised in recent years as we understand more about the environment, the cosmos and ourselves.
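The quantitative version of that Fermi-paradox nuance is often framed with the Drake equation, where the "blink-length golden age" idea corresponds to a small civilization lifetime L. Here's a minimal sketch; every parameter value below is an illustrative guess, not a measurement:

# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All inputs are illustrative guesses, not measurements.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Expected number of currently detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

guesses = dict(r_star=1.5,  # new stars per year
               f_p=0.9,     # fraction of stars with planets
               n_e=0.5,     # habitable planets per such system
               f_l=0.3,     # fraction where life arises
               f_i=0.1,     # fraction that develop intelligence
               f_c=0.1)     # fraction that become detectable

# A short, "blink-length" golden age vs. long-lived civilizations:
for L in (200, 10_000, 1_000_000):
    print(f"L = {L:>9} years -> N = {drake(lifetime_years=L, **guesses):.2f}")
# With L = 200 the expected number of concurrent civilizations drops
# below one, which is one proposed resolution of the paradox.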
 

G.ZZZ

Member
Most people are dumb and they shouldn't know anything.

As for me, enlighten my 2-dimensional information brane existence as much as possible.
 
I'm no wizard or space doctor or anything, but I believe there is knowledge or information that goes beyond the human mind's limited capacity to comprehend.
 

A Fish Aficionado

I am going to make it through this year if it kills me
Last year I read 'American Prometheus,' the biography of Robert Oppenheimer, and I really thought the myth of Prometheus was such a great parable for Oppenheimer and the scientists on the Manhattan Project. Nearly all of them were conflicted about the project, and most went on to become ardent opponents of future nuclear development. For his opposition to nuclear development in the 1950s, Oppenheimer himself was attacked and discredited as being a Communist, despite literally doing more to end World War II than any other American.
If you haven't already, check out Dan Carlin's podcast on the Cold War.
http://www.dancarlin.com/hardcore-history-59-the-destroyer-of-worlds/

Oppenheimer is just such an awesome and complicated human being.

He got me inspired to go back to school.
 
I'm exploring worst-case scenarios because we're talking about the hard stop line. Going past that line is the only point where those worst cases become an issue.

I think your methodology is wrong. For instance if I explore a worst-case scenario for medical research, I might come up with artificially created epidemics as the worst case. But that wouldn't tell us whether medical research is a scientific field we absolutely shouldn't pursue (obviously it isn't).

If you want to find knowledge we absolutely should not have, find a field in which the _best_ case scenario is completely unacceptable. In fact, it's difficult to find such a class of knowledge using this strict methodology.
 

zeemumu

Member
I think your methodology is wrong. For instance if I explore a worst-case scenario for medical research, I might come up with artificially created epidemics as the worst case. But that wouldn't tell us whether medical research is a scientific field we absolutely shouldn't pursue (obviously it isn't).

If you want to find knowledge we absolutely should not have, find a field in which the _best_ case scenario is completely unacceptable. In fact, it's difficult to find such a class of knowledge using this strict methodology.

The worst-case scenario of medical research going wrong isn't your goal (unless you were making biological weapons or something like that).
 
The worst-case scenario of medical research going wrong isn't your goal (unless you were making biological weapons or something like that).

Are you arguing against this being the worst case because it would be a possible unintended consequence? That's not how we usually evaluate actions. For instance, one might drive the wrong way down a one-way street with the best of intentions. The fact that we don't set out intending to kill anyone doesn't mean we should omit the possibility of unintentional fatal collisions from our evaluation.
 

Gotchaye

Member
Obviously we might still blow ourselves up, but it seems hard to say this about anything related to physical science unless you want to go all the way back to, like, agriculture. Generally, quality of life has improved significantly as we learn more, and even technologies that are pretty destructive are often reliant on the same science that also makes everyone much better off (or which holds out the hope of achieving this in the future). I suppose you could get really specific and say that "how to build a fusion bomb" is something we don't need to know, but ultimately this is just not a very hard problem given all of the (important and good) basic science leading up to it. But you probably could make a case that we'd be better off never having come up with agriculture in the first place.

I think it's a lot more plausible that some of the social sciences do more harm than good. Understanding how people act and think is mostly useful for manipulating them. Sometimes this can be good -- psychology is important for mental health care and economics is important for running an economy -- but there are fields and sub-fields where it's a lot harder to tell a story about how this science is long-term good for us.
 

LordOfChaos

Member
Why not? What could we do about it even if we did know?

I feel that some percentage of the population would be pushed towards not giving a shit about the world if they found out.

Simulated or not, so long as it had consistent rules the simulation should be carried forward, kinda thing.
 

Airola

Member
You seem to think we're special. That we can objectively look at where we are and make decisions on a global scale for the benefit of humanity. News flash: we aren't and we can't. Modern humanity is on a crash course with destruction. Just like every human civilisation before it.

It's the bell curve of civilisation. There's a blink-length golden age - one we're living through right now - then everything falls apart for one reason or another.

To reach space you need to be at a stage where you can generate massive power. In all probability that power will be used for weapons first. The same will go for any other race, terrestrial or not. We're not special, and neither are they.

That's before we even look at the likelihood of surviving space colonisation or long-term space travel, and before we even factor in how long it takes to get to that stage and that the environment will probably change.

There's more to the Fermi paradox than just "there should be loads of aliens". That nuance has been realised in recent years as we understand more about the environment, the cosmos and ourselves.

But that's what I was saying.
Shouldn't we then stop advancing, because that seems to be the thing that ends up destroying everyone? If the answer is "no, we should still try to advance" then it assumes we might have a chance to survive. And if we can, then there should be other civilizations that have also survived because, as you said, we are not special and not the only ones that could survive.

And if we think we will destroy ourselves, then the only way to prevent it is to stop advancing. So the question is, if all civilizations have destroyed themselves because they have advanced too much, should we try to stop advancing to prevent that?


If you really think about it, without advancements in technology living would certainly be tougher, but we also wouldn't have the means to destroy everything. We wouldn't be able to blow ourselves up, and we wouldn't destroy the environment to the extent we do now. So, should we try to stop advancing?
 

zeemumu

Member
Are you arguing against this being the worst case because it would be a possible unintended consequence? That's not how we usually evaluate actions. For instance, one might drive the wrong way down a one-way street with the best of intentions. The fact that we don't set out intending to kill anyone doesn't mean we should omit the possibility of unintentional fatal collisions from our evaluation.

For my example with the sentience thing, you'd be driving into oncoming traffic with the idea that it'd work out fine. The line would be not driving into oncoming traffic, not refusing to drive at all.

Making something for the purpose of having it work for you more efficiently and then making it self-aware is a bad idea. You can make something sentient and learn something useful from it, but the way people like to go with it, it wouldn't work out.

Or we can scrap this whole one and go for the fictional examples of not bringing back dinosaurs for a theme park or not capturing aliens to use as biological weapons.
 
Making something for the purpose of having it work for you more efficiently and then making it self-aware is a bad idea. You can make something sentient and learn something useful from it, but the way people like to go with it, it wouldn't work out.

AI of the sort we're talking about, which in its day was known as Hard AI, was always on the pure research side. It wasn't about efficiency or any other direct commercial goal.

We do create sentient beings quite regularly. We call them children. We know the worst cases there, whether in crippling medical conditions that condemn a child to a desperately low quality of life, or the prospect of giving life to a future mass-murderer, or contributing to the deterioration of the biosphere by adding to the pressure of human population. We don't let those possible scenarios stop us bringing sentient life into the world.

Machine sentience may well be a bad idea, and something a scientist chooses not to do (just as many people choose childlessness). I'm not seeing any clear reason to rule out the possibility of such research, though. While there may be new ethical questions arising, I'm not seeing any obviously likely Doomsday scenarios.
 

zeemumu

Member
AI of the sort we're talking about, which in its day was known as Hard AI, was always on the pure research side. It wasn't about efficiency or any other direct commercial goal.

We do create sentient beings quite regularly. We call them children. We know the worst cases there, whether in crippling medical conditions that condemn a child to a desperately low quality of life, or the prospect of giving life to a future mass-murderer, or contributing to the deterioration of the biosphere by adding to the pressure of human population. We don't let those possible scenarios stop us bringing sentient life into the world.

Machine sentience may well be a bad idea, and something a scientist chooses not to do (just as many people choose childlessness). I'm not seeing any clear reason to rule out the possibility of such research, though. While there may be new ethical questions arising, I'm not seeing any obviously likely Doomsday scenarios.

Children aren't a good comparison for this. You make them but they're still human. They're pretty much there just to keep humans going, and the comparison for worst cases for this would be that they kill and replace you. They're also not things.
 
Children aren't a good comparison for this. You make them but they're still human. They're pretty much there just to keep humans going, and the comparison for worst cases for this would be that they kill and replace you. They're also not things.

Well they do replace you, but not usually by murdering you.

I still don't see any serious downside to pure AI research, including research into machine-created sentience. It's been without serious controversy since the dawn of the electronic computer. It doesn't seem to belong in the list.
 

zeemumu

Member
Well they do replace you, but not usually by murdering you.

I still don't see any serious downside to pure AI research, including research into machine-created sentience. It's been without serious controversy since the dawn of the electronic computer. It doesn't seem to belong in the list.

I don't trust people to be able to handle the end goal of that well in the slightest.
 

GiantBeagle

Neo Member
I've often wondered what impact it would have if we had definitive proof of life after death (or, lack thereof).

Would religion continue to exist if we found out there was no afterlife? If not, would that bring more peace to the world, or would society struggle with the hopelessness of having no afterlife to hope for?

Or would people just ignore the science and continue to believe even though the absence of an afterlife had been definitively proven?

Then there are my personal thoughts. Would it disturb me to know that when I die, that's it? Or would it inspire me to make more of the time I have?
 

ajb1888

Banned
Oh yes.



Yes indeed ;)
 
i wouldn't want anyone to know how to do these:
  • mind control (mind-reading technology is okay)
  • time travel (to the past)
  • permanently sabotage our ability to go to space
  • gain access to practically unlimited amounts of energy*

or anything that can destroy/cripple life on earth. eg.
  • doing something that alters the energy output of the sun
  • triggering a self-sufficient process that makes the atmosphere unbreathable or dries up oceans (or any other vital resource) or causes other geological disasters without requiring energy input
  • weaponize viruses with 100% mortality rate (with or without targeting). being able to monopolize the vaccine/cure/antidote
  • creating black holes and crap like that
  • anything which allows for long-range disruption (that shuts off electronics or something) without detection

note: this doesn't imply that milder versions of these are acceptable


* a lot of the more "practical" doomsday stuff requires an unreasonable source of energy. as long as you can keep that out of reach, whatever crap happens is probably limited in scope or at least answerable. even an AI takeover would have limited effectiveness without access to unreasonable amounts of energy.

maybe nuclear fission was a mistake. i don't know if nuclear payloads could destroy everything irreversibly (i guess the radiation is the bigger concern), but if similar technology allows the energy to be harnessed instead of just released (via explosion) then it could be stockpiled without detection and unleashed all at once, thus overwhelming our ability to react
 
I don't trust people to be able to handle the end goal of that well in the slightest.

This goes to a wider issue. Some of us will see insuperable problems and unconscionable risks where others see nothing serious. In the end we'd better hope that we're wise enough as a civilization to see all of our problems coming and have the ability to avoid or mitigate them.

We're kinda halfway there with global warming.
 
But that's what I was saying.
Shouldn't we then stop advancing, because that seems to be the thing that ends up destroying everyone? If the answer is "no, we should still try to advance" then it assumes we might have a chance to survive. And if we can, then there should be other civilizations that have also survived because, as you said, we are not special and not the only ones that could survive.

And if we think we will destroy ourselves, then the only way to prevent it is to stop advancing. So the question is, if all civilizations have destroyed themselves because they have advanced too much, should we try to stop advancing to prevent that?


If you really think about it, without advancements in technology living would certainly be tougher, but we also wouldn't have the means to destroy everything. We wouldn't be able to blow ourselves up, and we wouldn't destroy the environment to the extent we do now. So, should we try to stop advancing?

If we don't advance we won't be able to stop an asteroid or, less desirably, survive the destruction it causes by having a portion of the population on another planet. Other extinction events are available.
 
Fermi Paradox says hi

Or doesn't, as it were


The Cloverfield Paradox is the resolution to the Fermi Paradox. Any civilization that becomes advanced enough to try to harness infinite energy by colliding two bosons together will trigger a response from an alien civilization that will destroy them and all parallel versions of them in all timelines.
 

KevinKeene

Banned
The knowledge that death is the absolute end. I'm not religious and atheists will laugh about it, but knowing that death ends it all is a lot different from 'death is probably the end, but we can't know for sure, so maaaaybe there's something beyond'. I know that I'd lose any motivation if I knew about death's absolute end. It'd remove any meaning from our existence.

Another thing we probably shouldn't know is 'everything', aka reaching the singularity. Just recently there was that anime 'Seikai suru Kado', where a higher-dimensional being comes to Earth and keeps giving out incredible technological advances. Infinite energy, no more need to sleep, etc. At first this sounds awesome, but then you start to question: what now? With every possible problem solved, what leaves us a reason to keep going, to stay motivated? The thirst for knowledge is what drives mankind. I think it's one of the most important cases of 'it's all about the journey, not about reaching the goal'.
So, yeah, if there's one thing mankind shouldn't know, it's so much that we lose our drive.

Lastly, to end this post on a lighter note: we shouldn't know what girls look like naked. ;o Girls wearing classy, elegant clothing are so much more attractive than fully nude women. Imo ofc!
 
The Cloverfield Paradox is the resolution to the Fermi Paradox. Any civilization that becomes advanced enough to try to harness infinite energy by colliding two bosons together will trigger a response from an alien civilization that will destroy them and all parallel versions of them in all timelines.

Spoiler alert?
 

grumpyGamer

Member
I am kind of divided. On one side, I believe knowledge should be free and we must know everything and always try to learn more.
On the other hand, I believe we would kill ourselves with it. People don't have the right mindset and maturity to handle the knowledge; they would be too tempted to kill and eliminate everything that bothers them.

So yes to all the knowledge once people are more evolved and mature, because these days we are still debating with some people over whether the Earth is round.
 

KevinKeene

Banned
Immortality would kill us, ironically enough, as overpopulation would quickly destroy the planet.

I think immortality would kill mankind because only a rich few would have access, which would result in unprecedented civil wars all over the world. People wouldn't accept their deaths, as in movies like 'In Time' with Justin Timberlake. I know I wouldn't.
 