
Chat GPT 4.5 Turbo stealth drops

Chittagong

Gold Member
I was surprised how fast my recent GPT chat was and how good it is.

Checked on X and people have been seeing the same. Initially GPT refused to say it was anything other than 4.0, but I managed to force out an answer.

k6CNp8R.jpg


So there you have it, GPT 4.5 Turbo has stealth dropped.

Not all my chats go there. I suspect that GPT does big-scale A/B testing on accuracy / speed / cost and rerolls the dice each time you start a new thread.
 

KrakenIPA

Member
That is a very interesting query. I suppose this sort of thing has happened before, although in this situation it seems abrupt. It could be a soft rollout, har!
 

Ownage

Member
Good stuff. I wonder if this "winter season depression" is merely some marketing bullshit for a soft rollout. Or, maybe it needed some uppers and a cache wipe.
 
OpenAI really seems to have broken through something recently. The GPTs are insanely good and fun. I've been training three - two for work and one for my personal life and it's starting to feel that there's no task they're not at least competent at.
 

Dacvak

No one shall be brought before our LORD David Bowie without the true and secret knowledge of the Photoshop. For in that time, so shall He appear.
Please, create your own LLMs preferably with sense scaling.

OpenAI really seems to have broken through something recently. The GPTs are insanely good and fun. I've been training three - two for work and one for my personal life and it's starting to feel that there's no task they're not at least competent at.
How would someone with only a rudimentary coding background and access to a pretty standard GPU start training their own LLM/GPT instance? It would be interesting to mess around with that.
 

Chittagong

Gold Member
OpenAI really seems to have broken through something recently. The GPTs are insanely good and fun. I've been training three - two for work and one for my personal life and it's starting to feel that there's no task they're not at least competent at.
This is interesting. What kind of things do you use them for? Is that different from having a thread? I have a Cooking Thread, a Random Questions Thread, a Health Thread, a Berserk Thread, a Brand Strategy Thread and so on. The drawback with those is that threads started with older models of GPT forget things that are more than a couple of months old, even if they are searchable in the thread.
 

RJMacready73

Simps for Amouranth
OpenAI really seems to have broken through something recently. The GPTs are insanely good and fun. I've been training three - two for work and one for my personal life and it's starting to feel that there's no task they're not at least competent at.
Do you mind if I ask what exactly you're training them to do? Trying to understand and wrap my head around exactly what you can get these things to do outside of major corporate shenanigans.
 
How would someone with only a rudimentary coding background and access to a pretty standard GPU start training their own LLM/GPT instance? It would be interesting to mess around with that.
I haven't coded since college, over 20 years ago. The OpenAI toolset lets you converse with a chatbot creator and feed it data, such as Excel files. You just go and use it and then every time an edge case comes up, you talk to it some more to refine it. It's amazingly satisfying.
 
This is interesting. What kind of things do you use them for? Is that different from having a thread? I have a Cooking Thread, a Random Questions Thread, a Health Thread, a Berserk Thread, a Brand Strategy Thread and so on. The drawback with those is that threads started with older models of GPT forget things that are more than a couple of months old, even if they are searchable in the thread.
Yeah, they're not threads but specific chatbots that are your personal programmes to either keep or share through a link (to other paying users). They are connected to your login.

I have one to deliver financial news to me whenever I ask, but over time I have trained it as to what kind of news I'm looking for and covering which assets etc. It also breaks down implications in the areas I'm interested in over 3 days, 3 months, and then 9 months. It isn't afraid to speculate if I ask it to. I've also been talking to it about which geopolitical risk types I want to link this speculation up to.

I have another bot that helps me write reports and other types of composition. I've fed it about 100 examples of my day job writing so it can understand my tone and style. Over time I've trained certain cliches out of it, as well as told it in what context the tone can be more casual, etc. I have also fed it about 150 pieces of freelance work and once 2024 starts and those contracts start up again I plan to see what it can do in this area.

The one for my household is just for fun. I've fed it years of my personal, my wife's, and then our household budget in the form of Excel sheets and then asked it questions about how our food cost increases compare to inflation, what months we break our average spending and in what categories, stuff like that. You can even make it draw charts and stuff.

I'm very excited about where this ends up in a few years.
 
Do you mind if I ask what exactly you're training them to do? Trying to understand and wrap my head around exactly what you can get these things to do outside of major corporate shenanigans.
See above.
My composition chatbot helped my wife with a cover letter and CV/resume the other day, as well. I wanted to feed it her inbox and then ask it to highlight all the KPIs she met over the year but she was worried about privacy issues.
 

Chittagong

Gold Member
Yeah, they're not threads but specific chatbots that are your personal programmes to either keep or share through a link (to other paying users). They are connected to your login.

I have one to deliver financial news to me whenever I ask, but over time I have trained it as to what kind of news I'm looking for and covering which assets etc. It also breaks down implications in the areas I'm interested in over 3 days, 3 months, and then 9 months. It isn't afraid to speculate if I ask it to. I've also been talking to it about which geopolitical risk types I want to link this speculation up to.

I have another bot that helps me write reports and other types of composition. I've fed it about 100 examples of my day job writing so it can understand my tone and style. Over time I've trained certain cliches out of it, as well as told it in what context the tone can be more casual, etc. I have also fed it about 150 pieces of freelance work and once 2024 starts and those contracts start up again I plan to see what it can do in this area.

The one for my household is just for fun. I've fed it years of my personal, my wife's, and then our household budget in the form of Excel sheets and then asked it questions about how our food cost increases compare to inflation, what months we break our average spending and in what categories, stuff like that. You can even make it draw charts and stuff.

I'm very excited about where this ends up in a few years.
That’s super cool. I work both in VC and creative, so that would certainly be useful for doing LP reports and the like.

In the creative agency, we produce a lot of content using GPT, just feeding it the basics of what we want to say. The problem there is that default GPT has a language style that is too grandiose and dramatic; everything is like ‘in the magnificent waters surrounding the majestic Mount Rotui…’. Maybe I will feed it a bunch of copy I enjoy and see if it gets better.
 
That’s super cool. I work both in VC and creative, so that would certainly be useful for doing LP reports and the like.

In the creative agency, we produce a lot of content using GPT, just feeding it the basics of what we want to say. The problem there is that default GPT has a language style that is too grandiose and dramatic; everything is like ‘in the magnificent waters surrounding the majestic Mount Rotui…’. Maybe I will feed it a bunch of copy I enjoy and see if it gets better.
It definitely gets better if you feed it work. It also highlights your cliches and go-to catchphrases, which can be uncomfortable. Mine even chided me once about some of my writing being lazy.
 

sharp weiner

Neo Member
How would someone with only a rudimentary coding background and access to a pretty standard GPU start training their own LLM/GPT instance? It would be interesting to mess around with that.
I am trying to figure out the same thing! Been at it for about a year; my background is 10 years of electrical engineering, plus thinking a lot about robots and simplified language-model brains mixed with reinforcement-learned mobility and sensory input like vision, hearing, and motion. I've also been digging into LLMs a bit; I believe they are trained on words and probabilities, at many layers of strings/words, predicting what occurs next.

To me, that is just a one-dimensional digital interpretation of words and probabilities. Before electrical engineering I played baseball in college and had to go to class too, so I chose communications cuz we got to watch movies. But any communications student would learn that words have multiple dimensions based on specific inputs. Facial expressions when spoken usually coincide with voice shifts. Whether it is a man or woman speaking or listening. We learned about studies of children and differences in communication styles: girls preferred rule followers and boys preferred rule breakers. Lots of ways to bias the words, turning a tic-tac-toe puzzle (mighty as LLMs are) into a multidimensional word encoding, something akin to a Rubik’s cube, based on each cube in the row being a 1/3 knob. 4 values: 0 33 66 100. But 66 isn’t 50! Agreed, but I just say the cube can be cut into 16 cubes instead of 9. I am real high right now so I am just writing this to remember it. Anyways, those are the kinds of training on LLMs I would like to look into, and I know that Hugging Face and Weights & Biases are a good way to start learning how to make ’em.
 

Sakura

Member
Just because Chat GPT says something doesn't mean it is true. It shouldn't actually know what the API being used is called, so it is likely just hallucinating/guessing.
For example, I am using GPT-4 Turbo (API gpt-4-1106-preview), but if I ask it what version of GPT it is, it tells me it is GPT-3.
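A minimal sketch of why that is (the helper name is hypothetical, and nothing here actually calls OpenAI): the model identifier lives only in the request the caller assembles, so the conversation text the model actually sees contains no trace of it.

```python
def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble a chat-completions-style payload (never sent anywhere here)."""
    return {
        "model": model,  # chosen client-side; the model itself never reads this field
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("gpt-4-1106-preview", "What version of GPT are you?")

# The prompt the model would actually read carries no model identifier:
print("gpt-4-1106-preview" in payload["messages"][0]["content"])  # prints False
```

So when you press it for an exact version string, the most it can do is pattern-match a plausible-sounding name.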
I feel like the people claiming it is a stealth release of 4.5 turbo are jumping the gun a bit.
 

E-Cat

Member
Just because Chat GPT says something doesn't mean it is true. It shouldn't actually know what the API being used is called, so it is likely just hallucinating/guessing.
For example I am using GPT 4 Turbo (API gpt-4-1106-preview) but if I ask it what version of GPT it is, it tells me it is GPT 3.
I feel like the people claiming it is a stealth release of 4.5 turbo are jumping the gun a bit.
Yah, people are way too gullible and thirsty for GPT-4.5.

This post from an OpenAI employee confirms it’s a hallucination.

 

Chittagong

Gold Member
If you use GPT a lot it’s super clear when you hit a new instance.

I believe they do a lot of a/b with instances, some dumber and some smarter. For example, sometimes when you start a chat and paste your food diary and ask it to calculate macros, it goes to a code window and ruminates a long time and fails. Just after that in a new chat it might just spit out the answer super fast.

Relevant:



I tried this and this is what I got:


VeNTIZV.jpg
 

Sakura

Member
If you use GPT a lot it’s super clear when you hit a new instance.

I believe they do a lot of a/b with instances, some dumber and some smarter. For example, sometimes when you start a chat and paste your food diary and ask it to calculate macros, it goes to a code window and ruminates a long time and fails. Just after that in a new chat it might just spit out the answer super fast.

Relevant:



I tried this and this is what I got:


VeNTIZV.jpg


"Chat GPT told me it is 4.5 turbo" isn't very good evidence. It will write convincing-sounding text to defend it too; that is how LLMs work. You can't take it at its word.
You are also being selective in what you choose to believe, and guiding it via prompts (whether you realise it or not) to give you false answers. In the first response in your OP, it told you it doesn't know what API is being used (because it doesn't), and then you ignored that and told it to give you an answer anyway, so it made one up. 3.5 Turbo is a thing, 4 is a thing, 4 Turbo is a thing, so saying 4.5 Turbo sounds like a logical answer.

People keep asking GPT the precise API name being used, but Chat GPT doesn't know the API name. It has no reason to know the name of the API. It knows it is some version of GPT4 based on the system prompt from Open AI (probably something like "You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.").
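To illustrate (the quoted system prompt is a guess, as noted above, and the message layout is a generic sketch, not OpenAI's actual configuration): the only self-knowledge available at inference time is whatever text sits in the context window, and that text names the architecture family, not the concrete API model string.

```python
# Assumed, not confirmed: a ChatGPT-style system prompt naming only the family.
SYSTEM_PROMPT = (
    "You are ChatGPT, a large language model trained by OpenAI, "
    "based on the GPT-4 architecture."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What is the exact API name of the model serving me?"},
]

# Everything the model can "know" about itself is this context text:
context_text = " ".join(m["content"] for m in messages)
print("gpt-4" in context_text.lower())  # prints True: the family is named
print("1106" in context_text)           # prints False: the API string is absent
```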

Furthermore, as posted by E-Cat above, an Open AI employee has already said it is a hallucination. But I guess people will just believe what they want to believe.
 

E-Cat

Member
Furthermore, as posted by E-Cat above, an Open AI employee has already said it is a hallucination. But I guess people will just believe what they want to believe.
Further evidence from two additional OpenAI employees:



People's expectations are too low...
 

Tams

Member
I've only used it recently to review a cover letter (for the job I now have!).

That was only the free version, untrained, but it still did a very good job. In some ways, better than the humans I had look over it, too.
 