Well, you heard it here first: Nvidia admits AI models generally lack "common sense." Obviously, this is far from an earth-shattering revelation; anyone following the news knows not to put glue in their pizza sauce, but that hasn't stopped Google's AI Overviews from suggesting you add a dollop to make it extra sticky. So, how is Nvidia attempting to remedy this? With human tutors, naturally.
In a recent blog post, Nvidia detailed how its data factory team is approaching the task of teaching generative AI the kind of knowledge about the world that humans take for granted. Made up of analysts hailing from "various backgrounds including bioengineering, business, and linguistics," the team is "working to develop, analyze and compile hundreds of thousands of data units" in the hopes of teaching Nvidia's AI models how to make the metaphorical pizza sauce.
Cosmos Reason is the AI model that Nvidia is hoping will lead the charge. The company explains, "Cosmos Reason is unique compared with previous [vision language models, or VLMs] as it's designed to accelerate physical AI development for fields such as robotics, autonomous vehicles and smart spaces. The model can infer and reason through unprecedented scenarios using physical common-sense knowledge."
That's a big claim, so how has Nvidia managed it? Well, via the considerably less impressive medium of a series of multiple-choice questions, like a pop quiz for AI.
Nvidia writes, "It all starts with an NVIDIA annotation group that creates question-and-answer pairs based on video data."
Ah, that's the 'vision' bit of 'vision language model.' In the case of an example video depicting someone cutting fresh spaghetti, a human annotator asks the AI which hand is used to chop the strands of destined deliciousness (and that's the 'language' bit). The AI must then choose correctly from four possible answers, which include 'doesn't use hands' (now that I'd like to see).
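To picture what one of these question-and-answer data units might look like, here's a minimal sketch in Python. The structure and field names are my own illustrative assumptions, not Nvidia's published schema.

```python
# Hypothetical sketch of a single annotation "data unit" for the pop quiz.
# Structure and field names are illustrative assumptions, not Nvidia's schema.
from dataclasses import dataclass

@dataclass
class VideoQA:
    video_clip: str        # path or ID of the source video
    question: str          # written by a human annotator
    choices: list[str]     # the four possible answers
    correct_index: int     # index of the right answer in `choices`

spaghetti_unit = VideoQA(
    video_clip="clips/cutting_spaghetti.mp4",
    question="Which hand does the person use to chop the spaghetti?",
    choices=["Left hand", "Right hand", "Both hands", "Doesn't use hands"],
    correct_index=1,  # assumed answer, purely for illustration
)
```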
Testing the AI the way a teacher might test a student, with feedback addressing wonky answers, is called reinforcement learning. Through many, many rounds of this sort of testing, in addition to a rigorous quality-assurance back-and-forth between the data factory team leads and the Cosmos Reason research team, it's hoped some knowledge of the physical world might just stick in the model.
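For the curious, that feedback loop boils down to something like the sketch below. The `model.answer` and `model.update` calls are hypothetical stand-ins for the actual training machinery, and the simple right-or-wrong reward is an assumption on my part.

```python
# Minimal sketch of one reinforcement learning round: the model answers a
# multiple-choice question and receives a reward signal based on correctness.
def reinforcement_round(model, unit):
    picked = model.answer(unit.video_clip, unit.question, unit.choices)
    reward = 1.0 if picked == unit.correct_index else 0.0  # right answer earns reward
    model.update(reward)  # feedback nudges the model toward rewarded answers
    return reward
```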
This is all in aid of developing AI models that can be used to control, say, physical machinery around a factory environment. Nvidia research scientist Yin Cui comments, "Without basic knowledge about the physical world, a robot may fall down or accidentally break something, causing danger to the surrounding people and environment."
Indeed, what happens when a robot lacks knowledge of the physical world is a scenario you can see play out repeatedly in highlight reels from the World Humanoid Robot Games. Setting humanoid bot mishaps aside for the moment, Amazon has over 1 million human employees working alongside an army of robots that may one day outnumber them. With that in mind, it's easy to see why developing reasoning AI models that can reliably interact with the physical world has so captured big tech's imagination.