
AI processors are coming: Would you be comfortable with Seaman on your console deep learning summer lessons?

onQ123

Member
If you haven't figured it out yet, the next generation is going to be big on AI / deep learning / machine learning / neural networks & so on, & most likely the next-generation consoles will come with AI processors/accelerators like the DNA 100 processor from Cadence & Tensor Cores from Nvidia.

With that said, how do y'all feel about games studying you as you play them & reacting to your input, remembering things that you said & bringing them up randomly days, weeks, months or even years later?

https://ip.cadence.com/news/611/330...ower-Efficiency-for-On-Device-AI-Applications

Cadence Launches New Tensilica DNA 100 Processor IP Delivering Industry-Leading Performance and Power Efficiency for On-Device AI Applications
DNA 100 processor easily scales from 0.5 to 100s of TMACs for neural network inferencing in automotive, surveillance, robotics, drones, AR/VR, smartphone, smart home and IoT products


The DNA 100 processor will be available to select customers in December 2018 with general availability expected in the first quarter of 2019
 

Skyfox

Member
I don’t mind “prebaked” AI features (like NPC animation, movement effects, etc.), but this reminds me of the mandatory Kinect for the Xbox One.

Luckily we have companies like Nintendo who are justifiably reluctant when it comes to the internet and their customers.

The problem here is engineers who have no respect for social decency/right to privacy.
 

hariseldon

Unconfirmed Member
About as amazing as cloud computing, which was going to revolutionize this gen.

Yeah, that was horse-shit; it was only ever intended, had it worked, as a form of DRM. Of course, once the server goes, so does the game, which really wouldn't have been good for software preservation or general consumer rights.
 

onQ123

Member

https://www.androidauthority.com/arm-unveils-new-npu-837015/

There has been quite a lot written about Neural Processing Units (NPUs) recently. An NPU enables machine learning inference on smartphones without having to use the cloud. Huawei made early advances in this area with the NPU in the Kirin 970. Now Arm, the company behind CPU core designs like the Cortex-A73 and the Cortex-A75, has announced a new Machine Learning platform called Project Trillium. As part of Trillium, Arm has announced a new Machine Learning (ML) processor along with a second generation Object Detection (OD) processor.



The ML processor is a new design, not based on previous Arm components and has been designed from the ground-up for high performance and efficiency. It offers a huge performance increase (compared to CPUs, GPUs, and DSPs) for recognition (inference) using pre-trained neural networks. Arm is a huge supporter of open source software and Project Trillium is enabled by open source software.

The first generation of Arm’s ML processor will target mobile devices and Arm is confident that it will provide the highest performance per square millimeter in the market. Typical estimated performance is in excess of 4.6 TOPS, that is, 4.6 trillion operations per second.


The final design for the ML processor will be ready for Arm’s partners before the summer and we should start to see SoCs with it built-in sometime during 2019. What do you think, will Machine Learning processors (i.e. NPUs) eventually become a standard part of all SoCs? Please, let me know in the comments below.


 

onQ123

Member
The moment you realize that these crazy patents are closer than you thought


United States Patent 10,112,113 (Krishnamurthy, October 30, 2018)

Personalized data driven game training system

Abstract
A video game console, a video game system, and a computer-implemented method are described. Generally, a video game and video game assistance are adapted to a player. For example, a narrative of the video game is personalized to an experience level of the player. Similarly, assistance in interacting with a particular context of the video game is also personalized. The personalization learns from historical interactions of players with the video game and, optionally, other video games. In an example, a deep learning neural network is implemented to generate knowledge from the historical interactions. The personalization is set according to the knowledge.

Inventors: Krishnamurthy; Sudha (Foster City, CA)
Applicant: Sony Computer Entertainment Inc. (Tokyo, JP)
Assignee: SONY INTERACTIVE ENTERTAINMENT INC. (Tokyo, JP)
Family ID: 59958447
Appl. No.: 15/085,899
Filed: March 30, 2016
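The abstract above boils down to mapping a player's history to a personalization setting. A minimal sketch of the idea follows; note the thresholds, tier names, and fields are all invented for illustration, and the patent itself describes a deep learning network rather than a hand-written rule:

```python
# Illustrative sketch of the patent's idea: derive a player's experience
# level from historical play, then pick a narrative/assistance tier from it.
# All thresholds and tier names here are made up.

def experience_level(sessions):
    """Crude stand-in for the patent's learned model: average completion
    rate across past sessions mapped to a skill tier."""
    if not sessions:
        return "newcomer"
    rate = sum(s["completed"] for s in sessions) / len(sessions)
    if rate > 0.8:
        return "veteran"
    if rate > 0.4:
        return "intermediate"
    return "newcomer"

# Hypothetical personalization table keyed on the derived tier.
ASSISTANCE = {
    "newcomer": "show objective markers and combat hints",
    "intermediate": "show objective markers only",
    "veteran": "no hints; unlock expanded narrative branches",
}

history = [{"completed": 1}, {"completed": 1}, {"completed": 0}]
level = experience_level(history)
print(level, "->", ASSISTANCE[level])  # intermediate -> show objective markers only
```

The real system would replace `experience_level` with inference over the "knowledge" the abstract says is generated from historical interactions across many players.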





United States Patent 10,155,166 (Taylor, et al., December 18, 2018)

Spatially and user aware second screen projection from a companion robot or device

Abstract
A system is provided, including the following: a computing device that executes a video game and renders a primary video feed of the video game to a display device, the primary video feed providing a first view into a virtual space; a robot, including, a camera that captures images of a user, a projector, and, a controller that processes the images of the user to identify a gaze direction of the user; wherein when the gaze direction of the user changes from a first gaze direction that is directed towards the display device, to a second gaze direction that is directed away from the display device, the computing device generates a secondary video feed providing a second view into the virtual space; wherein the controller of the robot activates the projector to project the secondary video feed onto the projection surface in the local environment.

Inventors: Taylor; Michael (San Mateo, CA), Stafford; Jeffrey Roger (Redwood City, CA)
Applicant: Sony Interactive Entertainment Inc. (Tokyo, JP)
Assignee: Sony Interactive Entertainment Inc. (Tokyo, JP)
Family ID: 63722767
Appl. No.: 15/700,005
Filed: September 8, 2017


Furthermore, in some implementations, the robot employs gaze prediction so that it can move to the optimal location before the user looks in a direction. For example, in some implementations, the robot is fed metadata from the video game that indicates the direction of future game content. The robot may then predictively move to various locations as necessary in order to be optimally positioned to project the secondary view. In some implementations, the robot employs a neural network or other machine learning construct that predicts the future gaze direction of the user, and the robot may move accordingly based on the predicted future gaze direction.

 

hariseldon

Unconfirmed Member
Oh wait, this is the guy who doesn't know what VR is, isn't it? OK, in that case I'll just point and laugh.
 

onQ123

Member
SNK Hinting at Using Revolutionary Neural Network AI For Samurai Shodown and Future Titles



While it has been quite a while since we have heard anything about the upcoming Samurai Shodown revival game, which is coming out in mere months, interesting news surfaced today from SNK which might hint at something revolutionary coming to the sword fighting game.

An announcement was made today by the Osaka-based company confirming SNK’s participation in the March 30th Game Creators Conference 2019, which will be held in Osaka. This participation will be in the form of a panel titled “The Incorporation of Neural Network-based AI into Fighting Games”. The panel will be presented and hosted by Nigo Nobuaki from SNK’s R&D Department. The following is a summary of the subject by Nigo-san himself:

Nigo Nobuaki
In a fighting game, we built a system that learns and reproduces a character's movement using a neural network running on the game machine. Using the situations the actual player saw and the inputs they performed, the system can reproduce that player's behavior. I will introduce a prototype, built in UE4 with a TensorFlow connection for testing machine learning algorithms at high speed, and explain how the implementation was then changed into a form suitable for incorporation into the actual product. Finally, I will talk about the difficulties of incorporating machine learning into a product, and what I felt should be considered first.
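Stripped of the neural network, the pipeline Nigo describes is: record what the player saw and what they pressed, then reproduce that mapping. A toy sketch with a lookup table standing in for the trained model; all situation and action names here are invented:

```python
# Toy sketch of behavior reproduction: log (situation, action) pairs while
# a human plays, then imitate their style by replaying the action they most
# often took in each situation. A table stands in for the neural network.
from collections import Counter, defaultdict

class GhostPolicy:
    def __init__(self):
        self.history = defaultdict(Counter)  # situation -> action counts

    def observe(self, situation, action):
        """Record what the player did in a given situation."""
        self.history[situation][action] += 1

    def act(self, situation):
        """Reproduce the player's most common response, if seen before."""
        if situation in self.history:
            return self.history[situation].most_common(1)[0][0]
        return "guard"  # fallback for unseen situations

ghost = GhostPolicy()
ghost.observe(("close", "opponent_attacking"), "parry")
ghost.observe(("close", "opponent_attacking"), "parry")
ghost.observe(("close", "opponent_attacking"), "jump")
ghost.observe(("far", "opponent_idle"), "dash_in")

print(ghost.act(("close", "opponent_attacking")))  # parry
```

A real implementation would generalize to unseen situations (which is exactly what the neural network buys you over a lookup table), and would need the "situation" to be a feature vector rather than a discrete key.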

The official announcement article, from the official Game Creators Conference 2019 website, comes with a screenshot from Samurai Shodown, which makes it clear that this technology is probably being used for the title.

What do you think about SNK using some state-of-the-art AI tech in the upcoming Samurai Shodown, and possibly KOFXV? Let us know in the comments section below!
 

dirthead

Banned
Kind of shows you how clueless modern SNK is. No one ever cared much about AI in fighting games. The problem with the new Samurai Shodown is shit production values/bad art direction, not AI.
 

StreetsofBeige

Gold Member
"Deep Learning AI"

Let's see how much improvement there is vs. game makers' usual claims that the AI in their games is so improved over last year's game.

1980s: Enemy AI stand in the open
1990s: Enemy AI stand in the open
2000s: Some AI stand in the open + some crouch behind a garbage can taking cover, but almost always still exposing their head or hunched back
2010s: Some AI stand in the open + some crouch behind a garbage can taking cover, but almost always still exposing their head or hunched back

Advanced Neural Net Dynamic Ultimate AI 2020s+: Same as the 2010s, but the AI is smarter by putting on more armour instead of fighting wearing t-shirts, so it will take triple the bullets to kill. Any enemy already wearing armour now puts on more armour, making them spongier.
 

StreetsofBeige

Gold Member
Kind of shows you how clueless modern SNK is. No one ever cared much about AI in fighting games. The problem with the new Samurai Shodown is shit production values/bad art direction, not AI.
Regarding fighting game AI, I remember Virtua Fighter 2 or 3 where Sega claimed the AI adjusted to your fighting skills and moves. Absolute BS.
 

joe_zazen

Member
Not for me. Addiction is super profitable, tech company leaderships know this, so like casino and tobacco executives, they want their products to make more and more addicts. AI that knows me better than I know myself, owned and designed by sociopathic, greedy corporations... shudder.

So yeah, anything that allows them to increase the potential addictiveness of their products is something I don't want in my life.
 

mango drank

Member
I'm curious about what AI / NNs can do for games, but not so much about their benefits for NPC AI, adaptive difficulty, personalization, etc. I'm more interested in what they can do for improving character models, environments, textures, resolution, effects, animation, that sort of thing. So more on the game dev + performance side. Since we're getting diminishing returns in terms of raw hardware power, maybe AI can start picking up the slack. Kinda like temporal AA and checkerboard 4K; I wonder what else AI can fake?
 

onQ123

Member
I have a feeling that both new consoles will come with a large amount of ReRAM, 32GB-128GB


ReRAM enhances edge AI



Today, more than 30 companies are developing dedicated AI hardware to achieve the greater efficiencies required for these specialized computing tasks in smartphones, tablets, and other edge devices.

Analysts have predicted the global AI chip market will grow at a compound annual growth rate of about 54% between 2017 and 2021. The need for powerful hardware that can handle the demands of machine learning is a key driver to this growth.




Removing the memory bottleneck

All AI processors rely upon data sets, which represent models of the “learned” object classes (images, voices, etc.), to perform their recognition feats. Each object recognition and classification requires multiple memory accesses. The biggest challenge facing engineers today is overcoming memory speed and power bottlenecks in current architectures to get faster data access, while lowering the energy cost of that access.

The greatest speed and energy efficiency can be gained by placing training data as close as possible to the AI processor core. But the storage architecture employed by today’s designs, created several years ago when there were no other practical solutions, is still the traditional combination of fast but small embedded SRAM with slower but large external DRAM. When trained models are stored this way, the frequent and massive movements of data between embedded SRAM, external DRAM, and the neural network increase energy consumption and add latency. Further, SRAM and DRAM are volatile memories, limiting the ability to achieve power savings during sleep periods.

Figure 1 Memory at the center of an AI architecture.

Much greater energy efficiencies and speeds can be achieved by storing the entire trained model directly on the AI processor die with low-power, non-volatile memory that is dense and fast. By enabling a new memory-centric architecture, the entire trained model or knowledge base could then be on-chip, connected directly to the neural network, with the potential for massive energy savings and performance improvements, resulting in greatly improved battery life and a better user experience. Today, several next-generation memory technologies are competing to accomplish this.



ReRAM’s potential

The ideal non-volatile embedded memory for AI applications would be very simple to manufacture, easy to integrate in the back-end-of-line of well-understood CMOS processes, easily scaled to advanced nodes, available in high volume, and able to deliver on the energy and speed requirements for these applications.

Resistive RAM (ReRAM) has a much greater ability to scale than magnetic RAM (MRAM) or phase-change memory (PCM) alternatives, an important consideration when looking at 14, 12, and even 7 nm process nodes. These other technologies require more complex and expensive manufacturing processes and more power to operate than ReRAM.

Figure 2 ReRAM can fill a memory-technology gap.

The nanofilament technology of Crossbar’s ReRAM for instance enables scaling below 10 nm without impacting performance. ReRAM is based on a simple device structure using CMOS-friendly materials and a standard manufacturing process that can be easily integrated with and manufactured on existing CMOS fabs. As it is a low-temperature, back-end-of-line process integration, multiple layers of ReRAM arrays can be integrated on top of CMOS logic wafers to build 3D ReRAM storage.

AI needs the best performance per watt, and this is especially true when applied to power-limited edge devices. ReRAM has demonstrated energy efficiency five times greater than that of DRAM – as many as 1,000 bit reads per nanojoule – while exhibiting better overall read performance than DRAM – up to 12.8 GB/s with less than 20 ns of random latency.
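Taking the quoted figures at face value, the arithmetic works out as follows. These are rough numbers, and the 50 MB model size is just an assumption for illustration:

```python
# Rough arithmetic on the quoted ReRAM figures: ~1,000 bit reads per
# nanojoule, and up to 12.8 GB/s with <20 ns random latency.
bits_per_nanojoule = 1000
picojoules_per_bit = 1e3 / bits_per_nanojoule       # 1 nJ = 1000 pJ
print(f"{picojoules_per_bit} pJ per bit read")       # 1.0 pJ/bit

# Energy and time to stream a (hypothetical) 50 MB trained model once:
model_bits = 50 * 8 * 1e6
energy_mj = model_bits * picojoules_per_bit * 1e-9   # pJ -> mJ
bandwidth_gbs = 12.8
read_time_ms = (model_bits / 8) / (bandwidth_gbs * 1e9) * 1e3
print(f"~{energy_mj:.2f} mJ, ~{read_time_ms:.2f} ms per full pass")
```

The point of the memory-centric architecture described above is that with the model resident on-die, you avoid repeating even this modest cost on every inference, and the non-volatility means it costs nothing to retain during sleep.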



Memory-centric architectures

Scientists are already exploring a variety of novel brain-inspired paradigms to achieve much greater energy efficiencies by imitating the way neurons and synapses of the central nervous system interact. Artificial synapses based on ReRAM technology are a very promising method for enabling these high-density and ultimately scaled synaptic arrays in neuromorphic architectures. ReRAM has the potential to play a significant role in both current and radically new approaches to AI by enabling AI at the edge.
 

onQ123

Member
I wonder if anyone will make games that are mostly voice-controlled.

Jeopardy & other games like that should at least try to use AI to figure out if your answer is acceptable.
 

onQ123

Member


Sony Announces the Establishment of Sony AI
with the mission to unleash human creativity


Tokyo, Japan - Sony Corporation ("Sony") today announced the establishment of Sony AI. This new organization, with offices globally in Japan, Europe, and the United States, will advance fundamental research and development of AI (artificial intelligence).

Sony's Purpose is to "Fill the world with emotion, through the power of creativity and technology." Recognizing that AI will play a vital role in the fulfillment of this Purpose, Sony AI is being established with the mission to "unleash human imagination and creativity with AI."

Sony AI will combine world class fundamental research and development with Sony's unique technical assets, especially in Imaging & Sensing Solutions, Robotics and Entertainment (Games, Music and Movies), driving transformation across all existing business domains and contributing to the creation of new business domains. In addition, one of Sony AI's long-term goals is to contribute to the resolution of shared global issues extending beyond Sony's business domains.

Sony AI will drive the research and development of AI in both physical and virtual space through multiple world-class flagship projects as well as other explorative research projects, including AI ethics.

Initially, Sony AI will launch three flagship projects in the areas of gaming, imaging & sensing, and gastronomy. The adoption of new AI technologies developed through these flagship projects will be critical to further enhancing the value of Sony's gaming and sensor businesses in coming years. This research will be pursued in close collaboration with the relevant Sony Group business units.

In order to drive these projects and achieve truly innovative research, Sony is eager to work with top global AI talent with an aim to attract world-class AI researchers and engineers. Sony believes that extraordinary innovation requires diversity of both talent and approaches, and this will be reflected in the composition and operation of Sony AI. Recognizing the power and influence of AI technologies, Sony AI will contribute to society through the development of AI that is fair, transparent, and accountable.

Sony AI will be headed globally by Hiroaki Kitano (President and CEO, Sony Computer Science Laboratories, Inc.; Corporate Executive, Sony Corporation), and the American site will be headed by Peter Stone.
 

onQ123

Member


DirectML – Xbox Series X supports Machine Learning for games with DirectML, a component of DirectX. DirectML leverages unprecedented hardware performance in a console, benefiting from over 24 TFLOPS of 16-bit float performance and over 97 TOPS (trillion operations per second) of 4-bit integer performance on Xbox Series X. Machine Learning can improve a wide range of areas, such as making NPCs much smarter, providing vastly more lifelike animation, and greatly improving visual quality.




Machine learning is a feature we've discussed in the past, most notably with Nvidia's Turing architecture and the firm's DLSS AI upscaling. The RDNA 2 architecture used in Series X does not have tensor core equivalents, but Microsoft and AMD have come up with a novel, efficient solution based on the standard shader cores. With over 12 teraflops of FP32 compute, RDNA 2 also allows for double that with FP16 (yes, rapid-packed math is back). However, machine learning workloads often use much lower precision than that, so the RDNA 2 shaders were adapted still further.


"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."
 

onQ123

Member
Hello PlayStation




Automatic dialogue design

Abstract
A chatbot learns a person's related "intents" when asking for information and thereafter, in response to an initial query, which the chatbot answers, the chatbot generates a secondary dialogue, either providing the person with additional information or inquiring as to whether the person wishes to know more about a subject. The chatbot may use an external trigger such as time, event, etc. and automatically generate a query or give information to the person without any initial query from the person.

Description


FIELD

The application relates generally to chatbots automatically designing dialogs.

BACKGROUND

Apple Siri®, Microsoft Cortana®, Google Assistant®, Amazon Alexa™ and Line Corporation Clova™ are examples of "chatbots" that audibly respond to spoken queries from people to return answers to the queries. The term "chatbot or bot" as used herein refers to a program (or the entire system including it) that performs dialogue communication on behalf of humans. A dialogue may be a combination of an utterance (such as a query) from a person and a response from the chatbot to the utterance. The intent of the dialogue in these systems is that initiated by the person and is based on the subject of the utterance. In this context, "intent" refers to categorizing what kind of intention the utterance of the person has. The chatbot responds to the person-defined intent appropriately. To this end, "entity" refers to categorizing meaningful words in a person's utterance after recognizing the person's intent.

SUMMARY

As understood herein, the intent of a dialogue helpfully may be preemptively established by the chatbot instead of the person, to better assist the person in obtaining possibly relevant, interesting, or important information.

Accordingly, a device includes at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor to receive an utterance from a person. The instructions are executable to access a data structure based on the utterance to retrieve a response to the utterance and to display the response. The instructions further are executable to, based at least in part on the utterance, generate a secondary dialog and automatically play the secondary dialogue without any further prompt from the person apart from the utterance.

In example embodiments, the response is audibly displayed, in which case the device can include one or more speakers for playing the response. In addition, or alternatively, the response may be visibly displayed, in which case the device can include one or more displays for presenting the response.

In some implementations, the instructions may be executable to correlate utterances from the person to generate a data structure of learned correlations, and to generate the secondary dialogue based at least in part on the learned correlations. In example embodiments, the instructions can be executable to identify at least one trigger that is not based on an utterance by the person, and to, responsive to the trigger, generate the secondary dialogue.

In another aspect, an apparatus includes at least one processor and at least one computer storage with instructions executable by the processor to receive a trigger generated by a person. The instructions are executable to access a data structure based on the trigger to retrieve a response to the trigger, and to display the response. The instructions are further executable to generate a secondary dialogue and to automatically play the secondary dialogue without any further prompt from the person apart from the trigger.

In another aspect, a method includes receiving a trigger generated by a person, accessing a data structure based on the trigger to retrieve a response to the trigger, and displaying the response. The method also includes generating a secondary dialogue and automatically playing the secondary dialogue without any further prompt from the person apart from the trigger.
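The claims describe a bot that answers a query, then volunteers a related follow-up learned from past sessions. A toy sketch of that loop; the topic names and the simple consecutive-topic correlation rule are invented here, and the patent's "data structure of learned correlations" would be far richer:

```python
# Illustrative sketch of the patent's "secondary dialogue" idea: after
# answering a query, the bot volunteers a follow-up based on topics the
# person has previously asked about together.
from collections import defaultdict

class ProactiveBot:
    def __init__(self, answers):
        self.answers = answers              # topic -> canned response
        self.cooccur = defaultdict(set)     # learned topic correlations
        self.last_topic = None

    def ask(self, topic):
        # Learn a correlation between consecutive topics in the session.
        if self.last_topic and self.last_topic != topic:
            self.cooccur[self.last_topic].add(topic)
            self.cooccur[topic].add(self.last_topic)
        self.last_topic = topic
        primary = self.answers.get(topic, "I don't know.")
        # Secondary dialogue: offer a related topic without being asked.
        related = self.cooccur[topic] - {topic}
        secondary = (f"Would you like to hear about {sorted(related)[0]}?"
                     if related else None)
        return primary, secondary

bot = ProactiveBot({"weather": "Sunny today.", "traffic": "Light traffic."})
bot.ask("weather")
primary, secondary = bot.ask("traffic")
print(primary, "|", secondary)
```

The patent's external triggers (time, events) would slot in as another way to fire the secondary dialogue, with no utterance from the person at all.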

The details of the present application, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
 
Remember the PPU? I think this is the next Ageia at best. And AI for games will be like physics... mostly PR and demos at the launch of the product.

Ageia. Not heard that name in a while.

What was that third-person futuristic shooter they were marketing with Ageia physics... late 00s?
 

ZywyPL

Banned
I'm really rooting for AI image upscaling, as it can bring better results than the native image, and on top of that without the need for image-blurring post-processing AA. As far as NPCs go, hopefully this won't happen, because the AI would reach a point where it's unbeatable, like the AI in DOTA 2 or SC2 lol.
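For context, here's the naive baseline that learned upscalers aim to beat: plain bilinear interpolation, which can only blur existing pixels, whereas DLSS-style reconstruction uses a trained network plus temporal data to infer detail. Grayscale, pure Python, illustrative only:

```python
# Plain 2x bilinear upscale: the non-AI baseline. Learned upscalers try
# to outperform this by inferring plausible detail instead of averaging.

def upscale2x_bilinear(img):
    """img: 2D list of floats in [0,1]. Returns a 2x-larger image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            # Sample the source at half-pixel steps, clamped at edges.
            sy = min(y / 2, h - 1)
            sx = min(x / 2, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

small = [[0.0, 1.0], [1.0, 0.0]]
big = upscale2x_bilinear(small)
print(len(big), len(big[0]))  # 4 4
```

Every output value here is a weighted average of inputs, which is exactly why naive upscaling looks soft; the appeal of ML upscaling is escaping that averaging limit.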
 

D.Final

Banned
If you haven't figured it out yet, the next generation is going to be big on AI / deep learning / machine learning / neural networks & so on, & most likely the next-generation consoles will come with AI processors/accelerators like the DNA 100 processor from Cadence & Tensor Cores from Nvidia.

With that said, how do y'all feel about games studying you as you play them & reacting to your input, remembering things that you said & bringing them up randomly days, weeks, months or even years later?

https://ip.cadence.com/news/611/330...ower-Efficiency-for-On-Device-AI-Applications



Holy God
 
So does Tom's Hardware actually have official access to PS5 deep-learning info/documentation from Sony direct (which would make this a sponsored kind of thing more or less), or is this just them speculating on MLID and RedTech rumors that prob originated on 4Chan even earlier than that?

If it's the latter I'll kindly skip out. Any "secret sauce" hardware with these systems needs to be officially disclosed this month, IMHO. Time to 100% put the focus behind games and services leading up to launch.
 

onQ123

Member
So does Tom's Hardware actually have official access to PS5 deep-learning info/documentation from Sony direct (which would make this a sponsored kind of thing more or less), or is this just them speculating on MLID and RedTech rumors that prob originated on 4Chan even earlier than that?

If it's the latter I'll kindly skip out. Any "secret sauce" hardware with these systems needs to be officially disclosed this month, IMHO. Time to 100% put the focus behind games and services leading up to launch.

Probably talking about the patent or something
 

psorcerer

Banned
So does Tom's Hardware actually have official access to PS5 deep-learning info/documentation from Sony direct (which would make this a sponsored kind of thing more or less), or is this just them speculating on MLID and RedTech rumors that prob originated on 4Chan even earlier than that?

If it's the latter I'll kindly skip out. Any "secret sauce" hardware with these systems needs to be officially disclosed this month, IMHO. Time to 100% put the focus behind games and services leading up to launch.

Fake news.
There is no "deep learning technology".
DL is software and datasets.
 

Aladin

Member
You won't have machine learning cores in gaming GPU cards or consoles, because for any meaningful ML you need lots of data; hence the training is done at the developer end, and the learned formula/neural network is implemented in the game code.
 

onQ123

Member
You won't have machine learning cores in gaming GPU cards or consoles, because for any meaningful ML you need lots of data; hence the training is done at the developer end, and the learned formula/neural network is implemented in the game code.

You only need a GPU with 16-bit, 8-bit & 4-bit precision for ML inference after you do the training of the NN.
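For context, post-training quantization is what makes those low-precision modes useful: train in FP32, then squeeze the weights down for inference. A minimal int8 sketch in pure Python; real toolchains do this per-layer with calibration data, so this is illustrative only:

```python
# Minimal sketch of post-training weight quantization: training stays in
# FP32; only the deployed weights are narrowed to int8 plus a scale.

def quantize_int8(weights):
    """Map a list of floats to int8 values plus a scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

weights = [0.72, -1.31, 0.05, 0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Quantization error stays below one quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 4))
```

This is why the inference-side hardware only needs fast narrow-integer math: the expensive high-precision work happened once, at training time, on the developer's hardware.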
 
Fake news.
There is no "deep learning technology".
DL is software and datasets.

Software and datasets are a massive part of ML, true, but having hardware dedicated to and tuned for ML applications certainly works better than generically running the dataset models on general CPUs or GPUs.

So that's kind of what is meant when referring to "deep learning technology", I guess.

Probably talking about the patent or something

Why bother, then? Everyone else has talked about it by now, and at the end of the day...it's still just a patent.

It's a bit frustrating, the selective bias some of these techies have when it comes to discussing Sony and MS technologies in gaming systems; everyone's suddenly jumping on the ML/DLSS train now because of a Sony patent, meanwhile MS outright confirmed ML months ago.

Even worse, some of the people focusing on ML now were downplaying it when there was no patent, seemingly because ML discussion was focused more around Series X at the time. Saying "well the PS brand is more popular, so therefore it'd obviously get more focus on this feature" doesn't float with technical people who are meant to be more neutral in terms of how they put out their discussion.

There's a psychological aspect at work there, too, where you can essentially create the impression of "absence through obfuscation & ignorance"; it's used a lot in driving narratives in long-form discussions. Basically, it feels like some of these tech-focused folks are trying to paint the impression that Sony is the only one doing these kinds of customizations, and that feeds back into the false narrative of "optimized elegance vs. brute force" and painting the Series X as just a "PC in a box". It's a way of trying to make Sony look more exotic than they actually are by choosing not to focus on MS's efforts in these areas and the fact that they, too, have a lot of technical patents for some pretty creative things that could be applicable to the Series systems.

It just strikes me as very disingenuous, need for traffic/clicks be damned.
 

psorcerer

Banned
Software and datasets are a massive part of ML, true, but having hardware dedicated to and tuned for ML applications certainly works better than generically running the dataset models on general CPUs or GPUs.

So that's kind of what is meant when referring to "deep learning technology", I guess.

They've got all the low-precision formats at better speeds. That's all the "tech" needed...
 