
Math-age: Japanese mathematician claims proof of abc conjecture


Deleted member 80556

Unconfirmed Member
While I won't study pure math in university, and I know this has no practical use in my life, this is still pretty amazing. I wonder what it says about Fermat's Last Theorem. That one is very, very interesting.
 

Socreges

Banned
Numbers have become more real. A few more discoveries like this and you'll be able to hold a 2 in your hand.

Someone will make a thread on GAF about it, and you will post in that thread.

Triangles as we know them will never be the same. Keep this next part under your hat though.... word on the street is this may affect a couple rectangles as well. If I were you I'd put all my money into circles. You just never know.
Respect for these posts. Each one made me laugh.
 
Hah, if there's an easy way to test primes, hacking just got so much easier.

The thing about a rigorous analytical proof of something like this is that it's just a confirmation. Things like the Riemann hypothesis or Fermat's Last Theorem were always assumed true and tested against astronomical numbers of cases. It was already "true" (just like the above and many other mathematical conjectures); this just proves it analytically.
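As a rough illustration of that kind of numerical spot-checking, here is a minimal Python sketch, assuming the usual statement of the abc conjecture: for coprime positive integers with a + b = c, the conjecture says that for every eps > 0 only finitely many triples satisfy c > rad(abc)^(1+eps), where rad is the product of the distinct primes dividing abc. The helper name and the search bound are just for illustration; the loop simply lists the small "hits" where c already beats rad(abc).

from math import gcd

def rad(n):
    """Radical of n: the product of the distinct primes dividing n."""
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        r *= n
    return r

# Enumerate coprime triples a + b = c and print the "hits" where c
# exceeds rad(a*b*c); the abc conjecture says such hits thin out fast.
for c in range(3, 300):
    for a in range(1, c // 2 + 1):
        b = c - a
        if gcd(a, b) != 1:
            continue
        r = rad(a * b * c)
        if c > r:
            print(f"{a} + {b} = {c}   rad(abc) = {r}")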
 

The Technomancer

card-carrying scientician
As someone who is in precalculus, I can confirm your suspicions that being able to count oneself among the "mathematical elite" is indeed awesome.

Understanding math is awesome on both sides of the spectrum. From an engineer's perspective, I love knowing how to use the Laplace transform to solve differential equations. From the mathematician's standpoint, I'm absolutely fascinated with what I know about number theory.
 
I'd like to share a quote by John von Neumann, one of the greatest mathematicians of the 20th century:

"A large part of mathematics which becomes useful developed with absolutely no desire to be useful, and in a situation where nobody could possibly know in what area it would become useful; and there were no general indications that it ever would be so. By and large it is uniformly true in mathematics that there is a time lapse between a mathematical discovery and the moment when it is useful; and that this lapse of time can be anything from 30 to 100 years, in some cases even more; and that the whole system seems to function without any direction, without any reference to usefulness, and without any desire to do things which are useful."

Mathematics is quite often developed well before any practical use is known. The field of number theory, which this problem comes from, was thought to be completely useless in the real world for a long time. That is, until people realized it had major implications in cryptography, which has done everything from helping break Nazi codes in World War II to powering encrypted internet communications today. Stating that a new advancement in mathematics is "theoretical wankery with no practical application" only really shows how clueless about the subject you are.

It's very true that I know nothing about theoretical mathematics, and I wasn't attempting to say that what he did wasn't worth doing. It's just that, from reading the story, it sounded like something so complex that its application, or even its understanding by others, seems limited in the extreme.
 
[animated gif]

is that sean connery?
 

twobear

sputum-flecked apoplexy
this sounds like theoretical wankery with no practical application but maybe I'm wrong.

and yours is a terribly useless one. If you have some arcane knowledge about this that the rest of us don't then please, share.

Well, there are two points that I think are relevant. The first is that it's often impossible to say whether something has practical application until somebody makes practical application of it. Riemann's work on non-Euclidean geometry seemed like 'theoretical wankery' until Einstein demonstrated some 60 years later that the geometry of the universe is non-Euclidean.

The second is that, even if this particular proof is never vital to some practical application, it's possible that various conjectures proven along the way lie in branches of mathematics with practical applications and get used later.

Finally what does it matter if it's 'theoretical wankery'? Mathematics has value in and of itself and doesn't need to be put to practical use to be worthwhile.
 

XiaNaphryz

LATIN, MATRIPEDICABUS, DO YOU SPEAK IT
I know that this is a copy/paste, but I think you should have modified the text to say something more like a^n + b^n = c^n since we don't have superscript.

I had to run to a meeting, so it was a bit of a rush job. I'll clean it up now!
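For reference, here are the formula mentioned above and the conjecture from the thread title written out properly; a minimal LaTeX sketch of the standard statements, nothing beyond them.

% Fermat's Last Theorem: no positive integers a, b, c satisfy
\[
  a^n + b^n = c^n \qquad \text{for any integer } n > 2.
\]
% The abc conjecture: for coprime positive integers with a + b = c and
% every eps > 0, only finitely many triples satisfy
\[
  c > \operatorname{rad}(abc)^{1+\varepsilon},
\]
% where rad(m) denotes the product of the distinct primes dividing m.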
 

thetrin

Hail, peons, for I have come as ambassador from the great and bountiful Blueberry Butt Explosion
this sounds like theoretical wankery with no practical application but maybe I'm wrong.

Yes, let's not bother with intellectual discovery unless it directly affects our lives. That seems like a Great direction for society.
 

pigeon

Banned
The more we learn about factorization, the more likely we are to be able to solve problems involving factorization. Integer factorization happens to be an NP problem: a proposed answer is quick to verify, but nobody knows how to find one on a computer in a reasonable amount of time. Finding a polynomial-time (read: quick) solution to a hard problem like this would make it more plausible, and probably easier, to find quick solutions to other NP problems as well. It is believed by many that artificial intelligence could rely on solutions to NP problems.

In other words THIS GUY IS INVENTING TALKING ROBOTS
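A minimal sketch of the "easy to check, hard to find" asymmetry behind that NP framing, using made-up example numbers and illustration-only helper names: verifying a claimed factorisation is a single multiplication, while finding the factors by brute-force trial division takes time that grows roughly exponentially with the number of digits.

from math import isqrt
import time

def find_factor(n):
    """Slow part: hunt for a nontrivial divisor by trial division."""
    if n % 2 == 0:
        return 2
    for d in range(3, isqrt(n) + 1, 2):
        if n % d == 0:
            return d
    return None  # n is prime

def check_factorisation(n, p, q):
    """Fast part: verifying a claimed answer is one multiplication."""
    return 1 < p < n and p * q == n

n = 1_299_709 * 15_485_863   # a semiprime: the 100,000th prime times the 1,000,000th

t0 = time.perf_counter()
p = find_factor(n)
q = n // p
print(f"found {p} x {q} in {time.perf_counter() - t0:.3f} s")

t0 = time.perf_counter()
ok = check_factorisation(n, p, q)
print(f"checked in {time.perf_counter() - t0:.6f} s:", ok)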
 

Axalon

Member
It is believed by many that artificial intelligence could rely on solutions to NP problems.

I would half disagree here, and say only a perfect AI that's always right would need to rely on solutions to NP problems. If you're going for human-like AI, heuristic and probabilistic algorithms are good enough, and probably better, since those algorithms are both faster and occasionally make mistakes (as humans do).
 

Horseticuffs

Full werewolf off the buckle
I don't fucking get it, but way to go Japanese fellow! Thanks for advancing humanity.

See shit like this is important because a dude way smarter than us may have solved a problem which may, in the future, help other dudes way smarter than us make life better for our ancestors.

That's really the best way a layman can grasp it, I guess.
 

XiaNaphryz

LATIN, MATRIPEDICABUS, DO YOU SPEAK IT
Commentary taken from http://www.math.columbia.edu/~woit/wordpress/?p=5104

Jordan Ellenberg at Quomodocumque reports here on a potential breakthrough in number theory, a claimed proof of the abc conjecture by Shin Mochizuki.

...

Jordan is an expert on this kind of thing, and he has some of the best mathematicians in the world (Terry Tao, Brian Conrad and Noam Elkies) commenting, so his blog is the place to get the best possible idea of what is going on here. After consulting a couple of experts, it looks like this is a very interesting and possibly earth-shattering moment for this field of mathematics.

Blog in question being referred to in the above is here: http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/
 

Noshino

Member
Princeton by 16, PhD by 22. Makes me feel like an asshole for sleeping all throughout high school.

I would half disagree here, and say only a perfect AI that's always right would need to rely on solutions to NP problems. If you're going for human-like AI, heuristic and probabilistic algorithms are good enough, and probably better, since those algorithms are both faster and occasionally make mistakes (as humans do).

wouldn't an error-prone (closer to human-like) AI defeat the purpose?....

I don't fucking get it, but way to go Japanese fellow! Thanks for advancing humanity.

See shit like this is important because a dude way smarter than us may have solved a problem which may, in the future, help other dudes way smarter than us make life better for our ancestors.

That's really the best way a layman can grasp it, I guess.

i think you meant descendants?
 

firehawk12

Subete no aware
The thing about a rigorous analytical proof of something like this is that it's just a confirmation. Things like the Riemann hypothesis or Fermat's Last Theorem were always assumed true and tested against astronomical numbers of cases. It was already "true" (just like the above and many other mathematical conjectures); this just proves it analytically.
Hrm, then I suppose it becomes a question of whether or not computer scientists can turn this proof into some kind of working methodology/algorithm?

Wait... how?
I'm going to admit my ignorance here, because I haven't seriously thought about math in nearly a decade, but iirc a lot of encryption (at least 5-10 years ago) is based on the idea that it's not trivial to test whether or not large numbers are prime. So if there's a way to easily test a number, then people need to figure out new math to make things secure again.
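For what it's worth, testing whether a large number is prime is already quick in practice; probabilistic tests like Miller-Rabin handle numbers hundreds of digits long in a blink, and what the encryption schemes discussed below actually lean on is the difficulty of factoring. A minimal hand-rolled sketch of such a test (the function name is just for illustration):

import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: False means definitely composite, True means
    prime with error probability below 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:          # write n - 1 as d * 2**r with d odd
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)       # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # found a witness: n is composite
    return True

# Numbers hundreds of digits long are handled in a fraction of a second:
print(is_probable_prime(2**521 - 1))   # True  (a known Mersenne prime)
print(is_probable_prime(2**521 + 1))   # False (it is divisible by 3)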
 

XiaNaphryz

LATIN, MATRIPEDICABUS, DO YOU SPEAK IT
i think you meant descendants?

Not unless he's hoping time travel gets invented! I mean, what if our ancestors only survived due to someone in the future traveling back to them and teaching them basic sciences like agriculture?
 

Caelus

Member
Not unless he's hoping time travel gets invented! I mean, what if our ancestors only survived due to someone in the future traveling back to them and teaching them basic sciences like agriculture?

What if...

what if we evolved into the greys and made ufos and time travel and went back in time and taught ourselves?
 

Axalon

Member
wouldn't an error-prone (closer to human-like) AI defeat the purpose?....

Again, depends on objective. If you want an AI to make decisions for you and do tasks that require correctness, yes, that'd be bad and defeat the purpose. If you want AI to use as a representation of another person or living creature in a video game or some other interactive medium (holodecks!) and behave realistically, not so much. Hence why I said "half disagree".
 

GamerSoul

Member
Nearly all encryption techniques are built on the fact that factoring products of large prime numbers is difficult. Like, given enough digits, it would take longer than the universe has currently existed, whereas multiplying the primes to generate said number would take about a minute or so. If we can factor large numbers quickly, well, all that stuff you thought was private is, well, no longer private. You'd be able to crack codes in milliseconds. The fact that there's no quick way to factor large numbers (of which the products of two large primes are exceptionally difficult) is what keeps current encryption algorithms working.

See http://en.wikipedia.org/wiki/Integer_factorization

I had a grad-student teacher who majored in cryptology, and he basically echoed this sentiment in my discrete math class. It's crazy how his brief talk on the subject somehow became relevant again.
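A back-of-the-envelope sketch of the "longer than the universe has existed" claim above, under the assumption of a wildly optimistic machine doing 10^12 trial divisions per second against a 2048-bit modulus. Real attacks use far better algorithms than trial division (e.g. the general number field sieve), but those are still nowhere near polynomial time.

from math import isqrt

n = 1 << 2048                          # stand-in for a 2048-bit RSA modulus
candidates = isqrt(n)                  # trial divisors up to sqrt(n), about 2**1024
rate = 10**12                          # assumed trial divisions per second
seconds = candidates // rate
years = seconds // (60 * 60 * 24 * 365)
age_of_universe = 13_800_000_000       # years, roughly

print(f"candidate divisors: about 10^{len(str(candidates)) - 1}")
print(f"time needed: about {float(years):.1e} years")
print(f"which is about {float(years // age_of_universe):.1e} times the age of the universe")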
 

Lonely1

Unconfirmed Member
In many breakthroughs of this kind, the journey is more important than the destination. The reason 'why' this is true, and the techniques developed to reach it, are likely to be the biggest advance; they can really clear the path for new results and a deeper understanding of the underlying mathematical structures.
 

firehawk12

Subete no aware
Nearly all encryption techniques are built on the fact that factoring products of large prime numbers is difficult. Like, given enough digits, it would take longer than the universe has currently existed, whereas multiplying the primes to generate said number would take about a minute or so. If we can factor large numbers quickly, well, all that stuff you thought was private is, well, no longer private. You'd be able to crack codes in milliseconds. The fact that there's no quick way to factor large numbers (of which the products of two large primes are exceptionally difficult) is what keeps current encryption algorithms working.

See http://en.wikipedia.org/wiki/Integer_factorization
Whoops, I missed this. Basically what he said. lol
It's a total game changer if testing primes is trivial.
 

demon

I don't mean to alarm you but you have dogs on your face
http://www.nature.com/news/proof-claimed-for-deep-connection-between-primes-1.11378



The papers cited in the article that cover the proof:

Mochizuki, S. Inter-universal Teichmüller theory I: Construction of Hodge theaters (2012). Available at http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal Teichmuller Theory I.pdf
Mochizuki, S. Inter-universal Teichmüller theory II: Hodge–Arakelov-theoretic evaluation (2012). Available at http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal Teichmuller Theory II.pdf
Mochizuki, S. Inter-universal Teichmüller theory III: Canonical splittings of the log-theta-lattice (2012). Available at http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal Teichmuller Theory III.pdf
Mochizuki, S. Inter-universal Teichmüller theory IV: Log-volume computations and set-theoretic foundations (2012). Available at http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-universal Teichmuller Theory IV.pdf

And the mathematician's home page: http://www.kurims.kyoto-u.ac.jp/~motizuki/papers-english.html
 

Kagami

Member
Whoops, I missed this. Basically what he said. lol
It's a total game changer if testing primes is trivial.
Just to clarify, this affects public key encryption, like SSL connections to websites, not private key encryption, like password-encrypted hard drives.
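To make that concrete, here is a textbook-RSA toy in Python, with tiny made-up primes, no padding, and nothing resembling real crypto; it only shows that the public key involves the modulus n, and that anyone who can factor n can reconstruct the private key.

from math import gcd

p, q = 61, 53                 # two (tiny) secret primes, illustration only
n = p * q                     # public modulus, n = 3233
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)        # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)      # decrypt with the private key d
assert recovered == message

# An attacker who can factor n gets p and q, hence phi, hence d:
for candidate in range(2, n):
    if n % candidate == 0:
        p_found, q_found = candidate, n // candidate
        break
d_cracked = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(ciphertext, d_cracked, n) == message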
 

Kenka

Member
Damn, algorithms that factorize products of primes in polynomial time.

What would we rely on for security once we get those algorithms, plus photonic computers calculating even faster than today's machines?
 

Dresden

Member
Nearly all encryption techniques are built on the fact that factoring products of large prime numbers is difficult. Like, given enough digits, it would take longer than the universe has currently existed, whereas multiplying the primes to generate said number would take about a minute or so. If we can factor large numbers quickly, well, all that stuff you thought was private is, well, no longer private. You'd be able to crack codes in milliseconds. The fact that there's no quick way to factor large numbers (of which the products of two large primes are exceptionally difficult) is what keeps current encryption algorithms working.

See http://en.wikipedia.org/wiki/Integer_factorization

I look forward to getting my credit accounts hacked every two hours.
 

slider

Member
Not being scientifically or mathematically minded at all, I remember my bro, whilst studying physics, explaining something mathematical to me.

He explained it so well and closed with "you see how beautiful maths is?" And at that moment, when he'd given me that bit of clarity, it fucking was.

Gosh, I wish I could remember even a sliver of the detail he used that day.

Go maths!!
 