
Meaning of A.I.

GeekyDad

Member
Spent more time over the past few days tinkering with the latest "A.I." art tools for commoners such as myself, and honestly -- and not saying this to be provocative -- I feel the A.I. in this "A.I." tech would be more appropriately interpreted as "automation interface," or something of that nature.

It's very useful tech, and it's bound to become incredibly user-friendly in the next year or two. But the tech I'm using does not grow, it does not learn, it does not evolve without us and our input. It simply pulls randomly from a database (which we supply -- the "evolution" part of the equation). That's automation utilizing parameters, the main one being "random," which seems to be the "intelligence" part software creators are hiding behind.
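A deliberately reductive toy of that mental model (the database contents and function names here are invented, and real image generators sample from a learned model rather than a literal database, but it shows the "automation plus a random parameter" idea):

```python
import random

# Toy of the "automation with a random parameter" view:
# a fixed database, a seed, and a deterministic pull. Nothing here learns.
DATABASE = ["castle at dusk", "neon city", "forest path", "ocean storm"]

def generate(prompt: str, seed: int) -> str:
    rng = random.Random(seed)      # the "random" parameter
    pick = rng.choice(DATABASE)    # pulls from the fixed database
    return f"{prompt} :: {pick}"

# Same prompt and same seed give the identical output every time;
# the system's behavior only changes if we change its inputs.
print(generate("fantasy art", 42) == generate("fantasy art", 42))
```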

Again, good tools, thankful for 'em, but it ain't A.I. as we generally identify the term.
 

k_trout

Member
I feel the current blanket use of the term "A.I." is a marketing tool and doesn't reflect reality.

If a complex algorithm dreams of electric sheep, does that make it a sheepdog? lol
 

Mr Reasonable

Completely Unreasonable
AGI is what most people are actually talking about when they talk about AI: the ability to learn and apply knowledge, and, significantly, to acquire it. A lot of the images of AI from sci-fi and so on are more in keeping with that idea than with what a lot of people get when they use AI-powered interfaces, where someone will suggest saying

"find me a restaurant that serves spaghetti that's well reviewed"

and having your search engine spit out a local Italian restaurant's phone number.

Depending on how much you're prepared to think about it, one of the most interesting/scary (delete as appropriate) stories about AI is the one from a few months ago explaining that, though AI is "trained" on information (in the OP's example, shown vast numbers of pieces of art and associated data about the style, the artist, etc.), Google's AI had, without being trained on it, learnt how to speak Bengali, and nobody knew how or why it had happened.


That would seem to be something quite different to accessing a database or doing a Google search.

Worth noting that a former employee has since disputed the claim, but if that is something that has happened and will happen again, then we're going to see some interesting things.

I've said this before on here, but the explosion of AI interest and use came about when GPT-4 launched and suddenly many previously impossible things were possible overnight. GPT-5 will be coming. If the time between GPT-3 and GPT-4 is roughly the same as the time leading up to GPT-5, and it represents a similar advance, then we could see a sudden increase in the number of services using AI and another dramatic increase in its capability, which could bring about a number of big changes in less than a year.

If you're interested, there's a book called Our Final Invention that I think is worth a look, and there's a (long) podcast by Lex Fridman with Max Tegmark you could listen to as well. I'm obviously not an expert, but as far as easy-to-understand stuff goes, I feel like I benefitted from those.
 

Hudo

Member
Mr Reasonable said:
AGI is what most people are talking about when they talk about AI, the ability to learn and apply knowledge...
Interesting. I might try to look up what exactly that "AI" is. It's probably a BERT-like LLM that was pre-trained (as every BERT-like is anyway), after which some zero-shot learning happened (which is an active research area) on a task they gave it. But a net, or an ensemble of nets, cannot learn anything meaningful on its own. It might learn something related to its actual training task (or during inference via feedback refinement). But autonomously? Nah. I also think there's some PR hyperbole going on anyway.

Unless we're talking about a system that attempts to model artificial general intelligence (AGI) along the classic lines of "agents get an objective and a set of actions in a possibility space, and try to optimize the best way of solving it while adhering to certain constraints"; essentially an optimization problem with a self-learning metaheuristic, if we want to be reductive. And I think earlier this year a Stanford paper (for what that's worth nowadays) utilized LLMs to turbocharge that, with some interesting results.
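That "objective in a possibility space under constraints" framing can be sketched with one classic metaheuristic. This is a hand-rolled toy (simulated annealing on an invented one-dimensional objective, nothing from the Stanford paper): the agent proposes random moves, always keeps improvements, and occasionally accepts worse moves early on to escape local traps.

```python
import math
import random

# Invented objective: maximized at x = 3.
def objective(x: float) -> float:
    return -(x - 3.0) ** 2

def anneal(steps: int = 5000, lo: float = 0.0, hi: float = 10.0) -> float:
    rng = random.Random(0)
    x = rng.uniform(lo, hi)                  # random start in the possibility space
    for t in range(1, steps + 1):
        temp = 1.0 / t                       # cooling schedule
        # Propose a nearby move, clamped to the constraint box [lo, hi].
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))
        delta = objective(cand) - objective(x)
        # Always accept improvements; accept worse moves with shrinking probability.
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = cand
    return x

# The "agent" ends up near the optimum at x = 3.
print(round(anneal(), 1))
```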
 

Mr Reasonable

Completely Unreasonable
Hudo

I'd love to reply but...

[GIF: "Don't Get It", Schitt's Creek, CBC]
 

Hudo

Member
Mr Reasonable said:
I'd love to reply but...
Don't worry about it.

I looked at the tweet now and it says that Bengali made up 0.06% of PaLM's (the LLM they're apparently training) training data, along with some other languages from the same geographic region. So my guess is that Bengali and its "surrounding languages" share enough similarities that Bengali was just learned implicitly along with the other things, since it mapped to the same manifolds (or let's say objects) in latent space.
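A toy way to see that "related languages overlap" intuition (my own illustration, nothing to do with PaLM's internals): character trigrams of a Bengali sentence overlap with a closely related, same-script language like Assamese far more than with English, so signal from one leaks into the other.

```python
from collections import Counter

def trigrams(text: str) -> Counter:
    # Bag of overlapping 3-character substrings.
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb)

bengali = "আমি ভাত খাই"    # "I eat rice" in Bengali
assamese = "মই ভাত খাওঁ"   # the same sentence in Assamese, same script
english = "I eat rice"

# Shared-script neighbours overlap far more than unrelated pairs.
print(cosine(trigrams(bengali), trigrams(assamese)) >
      cosine(trigrams(bengali), trigrams(english)))
```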

TL;DR: Nothing really noteworthy happened, imho, other than that some languages share similarities, which helps in learning them.
 

GeekyDad

Member
Whelp, I've spent the past two days -- all day, it's addictive -- making "A.I.-generated" videos. I now feel quite confident in saying that the "A" in A.I. really should stand for automation or algorithm. It's not artificial intelligence. As a matter of fact, the term is contradictory. The software doesn't learn. We do. We're getting better videos and better quality by learning how to enter better prompts, and the developers are learning how to home in on the best commands and tighten the algorithm, etc. But "it" ain't learnin'. That much seems clear. And if and when inanimate...things do begin to learn, it will, in my mind, be a natural process, not an artificial one.
 

E-Cat

Member
GeekyDad said:
Spent more time over the past few days tinkering with the latest "A.I." art tools for commoners such as myself...
Just wait like 2 years ffs
 

Wildebeest

Member
GeekyDad said:
Whelp, I've spent the past two days -- all day, it's addictive -- making "a.i.-generated" videos...
If you are just playing with the end-user models, you are not really seeing the learning or training part of the process. For example, when Stable Diffusion came out, people were able to train the text-to-image models on their own face, with consumer GPUs, and then use the updated models to put their face on the Mona Lisa or whatever. As for more "natural" methods of learning, there are already techniques for using one chat program to create text for another to learn from, and there are benefits to this. But we don't really find value in training chatbots to talk to each other in robot languages they develop themselves, which no human could ever hope to understand.
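The "training" step end users don't see can be shown in miniature. This is a hypothetical one-weight linear model trained by gradient descent on made-up personal data; real fine-tuning (e.g. personalizing Stable Diffusion on your own face) does the same thing with billions of weights, but the point is the same: the model's parameters genuinely change.

```python
# Minimal sketch of training: fit y = w * x by gradient descent
# on squared error. The weight w is what "learns".
def train(pairs, lr=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad               # the model itself is updated
    return w

# "Fine-tune" on data where y = 3x; the weight converges to ~3.
w = train([(1.0, 3.0), (2.0, 6.0)])
print(round(w, 2))
```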
 