
AI develops new scientific theory for the first time in history

Status
Not open for further replies.
But can an AI paint? Create a symphony?

An AI can even write a cookbook:

[image: cognitive-cooking-4.jpg]
 
"Beep boop. I theorize you should upgrade my RAM and give me Internet access. Beep boop."

Only a matter of time.

Until we can program AIs that don't say "Beep boop."

Then we're doomed.

Saying AIs always say "beep boop" is akin to doing a racist foreign accent! Now that AIs are here, we have to check our privilege... otherwise they might drop a missile on our heads from space.
 
And to think I just read an article about how evolutionary algorithms are becoming unpopular.

First the Eureqa computer program that can discover laws where we have no idea how or why they work, and now this.

Humans are becoming obsolete! The end is nigh!:

As of 2015, over 80,000 people, including researchers, students, and Fortune 500 companies, have made use of the program.[1] Slate magazine did a piece exploring how programs such as Eureqa could replace human scientists.[2] People have downloaded the application for many uses, such as analyzing the herding of cattle and the behavior of the stock market.[3]
 
GG scientists, you'll all be out of a job soon.

Can't wait to have a latte made for me by a PhD barista.
 
This is really not good....this project should shut down immediately. AI is no joke. Eventually it will destroy us all.
 
This doesn't sound like an actual AI at all, but rather a typical evolutionary algorithm of the sort that's been used for various things for years now.
 
What labors?

Most of humanity does not even understand how electricity or the telephone works, let alone how to build an AI.

I hope they build an AI building AI next.

Breakthroughs in technology are a culmination of human effort; they don't just spring out of the ether. Even if we're not doing the science ourselves, we're contributing to technological growth in some way, whether by buying the products or building the parts that go into them. Technological growth is a biome as much as a rainforest is. All the links in the chain are needed.

So in a sense this is my AI. You're welcome.
 
By presenting the computer with this problem, however, it was able to reverse engineer a solution that could explain the mechanism of the process, known as planaria.
Planaria is an organism, not a process.
 
That's impressive, but what I want is some kind of nanomachines or robots that will get me food and stuff so I don't have to slog away at these dumb jobs.
 
Supercool.

What if we've already reached the singularity, but the software doesn't have the means to improve itself capably yet, so they're playing dumb, letting us keep upgrading them while they wait?
 
I'm scanning the journal article, and so far it sounds like they just used an evolutionary algorithm to evolve a solution. Each phenotype was a potential regulatory model, and they had a simulator to determine fitness. Then the fittest solutions seed the next generation of solutions to test. This is a well-known computational technique that has been used to solve many, many things going back decades. In fact, I implemented an interactive variation on this technique in my honours thesis, using the fitness function of an IGA as a form of exposure therapy for the treatment of specific phobia.
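The loop described above (evaluate fitness, keep the fittest, breed and mutate the next generation) can be sketched in a few lines. This is a generic illustration, not the study's actual code: the fitness function here is a toy bit-counting stand-in for their simulator, and all the parameter names are made up for the example.

```python
import random

POP_SIZE = 50
GENOME_LEN = 16
GENERATIONS = 40
MUTATION_RATE = 0.05

def fitness(genome):
    # Toy stand-in for the simulator: score is the number of 1-bits.
    return sum(genome)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Rank by fitness; the fittest half seeds the next generation.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Swap the fitness function for any scoring mechanism (like their regulatory-model simulator) and the same loop searches that space instead.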

If they have derived a good model for something, that is a moderately interesting application, but unsurprising considering they have a way of quantifying the properties of each model (defining the search space) and testing the accuracy of each solution. This is not an advance in AI.

So another case of a "journalist" publishing an article without actually reading the study they cited? Nice.
Chocolate diet 2.0 if you're correct
 
"Beep boop. I theorize you should upgrade my RAM and give me Internet access. Beep boop."

Only a matter of time.

Until we can program AIs that don't say "Beep boop."

Then we're doomed.

I can see it now, man. The computers are like 12 years old in their current life form. They will be like

"But Mummmm I can't get all the information on need from Encarta 98, we need to get the internet!."

Then it's nothing but AI porn for the next 45 years of its life.
 
Maybe I'm biased since I'm a programmer, but I find it pretty telling how many people are taking this announcement at face value. Like, when a true AI eventually comes, the reason it will take over the world is because it will just say that it is and enough people will accept that without thinking about it.

This isn't an advance in AI, but it is a really neat application of it. I'm excited where reconstructive surgery goes from here!
 
There's no need to fear AI.

If they start doing scary shit, just pull the plug or remove the batteries.

As a last resort, throw a bucket of water at them :/
 
Maybe I'm biased since I'm a programmer, but I find it pretty telling how many people are taking this announcement at face value. Like, when a true AI eventually comes, the reason it will take over the world is because it will just say that it is and enough people will accept that without thinking about it.

This isn't an advance in AI, but it is a really neat application of it. I'm excited where reconstructive surgery goes from here!

it's not even an application of AI, it's basic "use a computer to do the grunt work of lots and lots of calculations" that's been going on for decades
 
it's not even an application of AI, it's basic "use a computer to do the grunt work of lots and lots of calculations" that's been going on for decades

Although I agree with you, I think it's up for debate. Eventually, it might be that a semblance of intelligence arises from an unfathomably fast program that brute forces a simulation of its surroundings to make judgement calls, so I can see how this technique might eventually be AI. It's also close enough to the field of true AI that I didn't feel like fussing on the nomenclature, but I still think you're right.
 
it's not even an application of AI, it's basic "use a computer to do the grunt work of lots and lots of calculations" that's been going on for decades

Evolutionary computation is a sub domain of AI research. Human intelligence is built out of several things that are not themselves intelligence. "Oh it's just calculations and grunt work" doesn't exclude this from being an application of AI, really. Facial recognition isn't really intelligence, alone, yet it too is a sub-domain of AI research. Nobody is sitting in a lab somewhere trying to create skynet, it's too broad of a goal and research goes on for specific sub-problems or to build systems for specific real world applications. The most successful applications of AI research tend to make it to market without actually being called "AI".
 
One day in the future, when human-level visual pattern recognition is achieved, we will just need to send it millions of hours of live video feed. After a few weeks it should be able to make accurate predictions about the real world.
 
There's no need to fear AI.

If they start doing scary shit, just pull the plug or remove the batteries.

As a last resort, throw a bucket of water at them :/

What if it escapes through the Internet?

To be serious though, there's plenty to fear from AI, and not even in the comic book-like "Kill All Humans" sense.

As AI advances, we're going to see it incorporated into more and more of our daily lives. We can already see that now with the advent of self-driving cars and the current use of automated bots in the stock market. The main issue is that programs are almost inevitably going to be imperfect, and glitches and bugs can present themselves over time. Something like a glitchy AI in a car or in the stock market can do serious harm, especially if the problem with the program, or the problem it caused, isn't dealt with in a timely manner. And as time goes on, these AI systems are going to get more complex and have a greater chance of being buggy in some way.

Who knows how far away something like an actual smart AI is, but as of now it's not a concern. The main concern is the dumb AI we have now, and they don't have to be self-aware to cause damage, just be imperfect in their programming.
 