Welcome to Perceptron, TechCrunch’s weekly roundup of AI news and research from around the world. Machine learning is now a key technology in practically every industry, and far too much is happening for anyone to keep up with it all. This column aims to collect some of the most interesting recent discoveries and papers in the field of artificial intelligence, and explain why they matter.
(Formerly known as Deep Science; check out the previous editions here.)
This week’s roundup begins with two studies from Facebook/Meta. The first, a collaboration with the University of Illinois at Urbana-Champaign, aims to reduce emissions from concrete production. Concrete accounts for some 8% of carbon emissions, so even a small improvement could help meet climate goals.
What the Meta/UIUC team did was train a model on over 1,000 concrete formulas, which differed in their proportions of sand, slag, ground glass, and other materials (see above for a more photogenic chunk of sample concrete). By finding the subtle trends in this dataset, the model was able to output a number of new formulas optimizing for both strength and low emissions. The winning formula turned out to have 40% lower emissions than the regional standard, and met … well, some of the strength requirements. It’s extremely promising, and follow-up work in the field should move the ball forward again soon.
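To make the screening idea more tangible, here is a minimal sketch in Python: a stand-in “surrogate model” scores candidate mixes, and we keep the lowest-emission mix that still clears a strength threshold. Every formula, coefficient, and threshold below is invented for illustration; this is not the Meta/UIUC model or data.

```python
# Toy sketch of surrogate-guided screening for concrete mixes.
# All mixes, coefficients, and thresholds are invented for
# illustration -- not the Meta/UIUC pipeline.

def surrogate_predict(mix):
    """Stand-in for a trained model: maps a mix (fractions of
    cement, slag, ground glass) to (strength_mpa, emissions_kg)."""
    cement, slag, glass = mix
    strength = 20 + 60 * cement + 25 * slag + 10 * glass
    emissions = 400 * cement + 60 * slag + 40 * glass
    return strength, emissions

def best_mix(candidates, min_strength):
    """Keep mixes meeting the strength requirement, then return
    the one with the lowest predicted emissions."""
    feasible = []
    for mix in candidates:
        strength, emissions = surrogate_predict(mix)
        if strength >= min_strength:
            feasible.append((emissions, mix))
    return min(feasible)[1] if feasible else None

candidates = [(0.9, 0.05, 0.05), (0.5, 0.3, 0.2), (0.3, 0.5, 0.2)]
print(best_mix(candidates, min_strength=45.0))  # lowest-emission feasible mix
```

The real work presumably searches a much larger design space with a learned model, but the shape of the problem, a forward predictor plus a constrained search over it, is the same.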
The second Meta study has to do with how language models work. The company is working with neural imaging experts and other researchers to compare how language models stack up against actual brain activity during similar tasks.
In particular, they are interested in the human ability to anticipate words well ahead of the current one while speaking or listening: knowing, for instance, that a sentence will end a certain way, or that a “but” is coming. AI models are getting very good, but they still mainly work by adding words one by one, like Lego bricks, occasionally looking back to check that it all makes sense. They’re just getting started, but they already have some interesting results.
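That “one Lego brick at a time” behavior can be sketched in a few lines. Below, a toy bigram table stands in for a real model’s next-word predictions; note how the generator only ever appends the single most likely next word, with no plan for how the sentence will end:

```python
# Minimal sketch of autoregressive, one-token-at-a-time generation.
# The bigram table is a toy stand-in for a real language model.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_token(context):
    """Greedy choice: pick the most probable next token given
    only the most recent word."""
    options = BIGRAMS.get(context[-1], {})
    return max(options, key=options.get) if options else None

def generate(start, max_len=5):
    tokens = [start]
    while len(tokens) < max_len:
        nxt = next_token(tokens)
        if nxt is None:
            break
        tokens.append(nxt)  # no look-ahead: each word is added blindly
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

A human speaker, by contrast, often “knows” the end of the sentence before reaching it, which is exactly the gap the Meta comparison is probing.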
Back on the materials beat, researchers at Oak Ridge National Laboratory are getting in on the AI-formulation fun. Using a dataset of quantum chemistry calculations, the team trained a neural network that could predict a material’s properties, whatever they might be, and then inverted it, so that they could enter properties and have it suggest materials.
“Instead of taking a material and predicting its given properties, we wanted to choose the ideal properties for our purpose and work backward to design for those properties reliably, quickly and efficiently. That’s known as inverse design,” said ORNL’s Victor Fung. It seems to have worked, and you can check for yourself by running the code on GitHub.
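The inverse-design idea can be illustrated with a deliberately tiny example: take a forward model that maps a design parameter to a property, then search over candidate designs for one whose predicted property matches a target. The analytic “model” and the band-gap framing below are made up for illustration; ORNL’s actual network and search are more sophisticated.

```python
# Hedged sketch of inverse design: instead of material -> property,
# search over candidate materials for one whose predicted property
# matches a target. forward_model is an invented analytic stand-in.

def forward_model(x):
    """Pretend 'neural network': a composition parameter in [0, 1]
    mapped to a property value (say, a band gap in eV)."""
    return 0.5 + 2.0 * x * (1.0 - x)

def inverse_design(target, steps=1000):
    """Brute-force inversion: scan candidates, keep the best match."""
    best_x, best_err = None, float("inf")
    for i in range(steps + 1):
        x = i / steps
        err = abs(forward_model(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x

x = inverse_design(target=1.0)
print(round(x, 3), round(forward_model(x), 3))  # 0.5 1.0
```

In practice the search would run over a high-dimensional space with gradients or generative sampling rather than a grid, but the direction of the query (properties in, material out) is the point.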
Interested in physical predictions at an entirely different scale, this ETHZ project estimates canopy heights around the globe using data from ESA’s Copernicus Sentinel-2 satellite (optical imagery) and NASA’s GEDI (orbital laser ranging). Combining the two in a convolutional neural network yields an accurate global map of tree heights up to 55 meters.
As NASA’s Ralph Dubayah explains, the ability to conduct regular surveys of biomass like this on a global scale is important for climate monitoring: “We need a good global map of where the trees are, because whenever we cut down trees, we release carbon into the atmosphere, and we don’t know how much carbon we are releasing.”
You can easily browse the data in map form here.
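The logic of fusing the two sensors is worth spelling out: GEDI’s laser footprints give sparse but accurate heights, while Sentinel-2 gives dense imagery with no direct height measurement. Use the lidar footprints as labels, fit a model from optical features to height, and you can predict height everywhere. Here is a pure-Python toy of that idea with synthetic numbers; the real project uses a convolutional network over image patches, not a one-variable line fit.

```python
# Toy illustration of the sensor-fusion idea: sparse, accurate lidar
# heights (GEDI-like) serve as labels to fit a regressor on dense
# optical features (Sentinel-2-like), which then predicts height
# everywhere. All numbers are synthetic.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic "optical feature" (e.g. a vegetation index) at the few
# pixels where a lidar footprint exists, plus the measured height there.
optical = [0.1, 0.3, 0.5, 0.7]
height_m = [5.0, 15.0, 25.0, 35.0]

a, b = fit_linear(optical, height_m)

def predict_height(feature):
    """Dense prediction at pixels the lidar never sampled."""
    return a * feature + b

print(predict_height(0.9))  # 45.0
```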
Over at DARPA, another project involves creating very large simulation environments for virtual autonomous vehicles to traverse. They signed a contract with Intel, though they might have saved a little money by contacting the makers of the game SnowRunner, which basically does what DARPA wants for $30.
The goal of RACER-Sim is to develop off-road AVs that already know what it’s like to rumble over rocky deserts and other harsh terrain. The four-year program focuses first on creating the environments and building models in the simulator, then later on transferring those skills to physical robotic systems.
In the world of AI-assisted drug discovery, where something like 500 efforts are currently underway, MIT has taken a sane approach with a model that only suggests molecules that can actually be made. “Models often suggest new molecular structures that are difficult or impossible to produce in a laboratory. If a chemist can’t actually make the molecule, its disease-fighting properties can’t be tested.”
The MIT model “guarantees that molecules are composed of materials that can be purchased and that the chemical reactions that occur between those materials follow the laws of chemistry.” It’s somewhat like what Molecule.one does, but integrated into the discovery process. It would certainly be nice to know that the miracle drug your AI proposes doesn’t call for any fairy dust or other exotic matter.
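The constraint being described is essentially a filter on the generator’s output: a candidate only survives if it can be assembled from purchasable building blocks. Here is a heavily simplified toy of that gate; the fragment names and catalog are invented, and real systems reason over reaction templates and vendor catalogs rather than set membership.

```python
# Toy version of the synthesizability constraint: only propose
# molecules buildable from purchasable building blocks. Fragments
# and the catalog are invented; real systems use reaction templates.

PURCHASABLE = {"benzene", "amine", "carboxyl", "methyl"}

def synthesizable(fragments):
    """A candidate passes only if every fragment can be bought."""
    return all(f in PURCHASABLE for f in fragments)

candidates = [
    ["benzene", "amine"],             # fine
    ["benzene", "unobtainium"],       # fairy dust: rejected
    ["carboxyl", "methyl", "amine"],  # fine
]

viable = [c for c in candidates if synthesizable(c)]
print(viable)
```

The MIT twist, per the quote above, is that this check is baked into generation itself rather than applied after the fact.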
Other work from MIT, the University of Washington, and elsewhere involves teaching robots to interact with everyday objects, something we all hope becomes commonplace in the coming decades, since some of us don’t have dishwashers. The problem is that it’s very difficult to tell exactly how people interact with objects, since that data can’t be relayed in high fidelity to train a model. So lots of data annotation and manual labeling tends to be involved.
A new technique focuses on observing and inferring 3D geometry closely enough that it takes only a few examples of a person grasping an object for the system to learn how to do it itself. Where a simulator might normally require hundreds of examples or thousands of repetitions, effective manipulation of a given object required demonstrations from just 10 people per object.
With this minimal training, the model achieved an 85% success rate, far better than the baseline model. It’s currently limited to a handful of categories, but the researchers hope it can be generalized.
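One way a handful of demonstrations can go a long way: if each demo records where a person grasped relative to some object keypoint, averaging those offsets gives a grasp rule that transfers to a new instance of the object. This is a made-up, two-dimensional caricature of few-shot grasp learning, not the actual MIT/UW method, which infers full 3D geometry.

```python
# Toy sketch of learning a grasp from a few demonstrations: each
# demo records where a person grasped relative to an object
# keypoint, and the robot reuses the average offset on a new object.
# Geometry and numbers are invented for illustration.

def learn_grasp_offset(demos):
    """demos: list of ((keypoint_x, keypoint_y), (grasp_x, grasp_y))."""
    offsets = [(gx - kx, gy - ky) for (kx, ky), (gx, gy) in demos]
    n = len(offsets)
    return (sum(dx for dx, _ in offsets) / n,
            sum(dy for _, dy in offsets) / n)

demos = [((0, 0), (1.0, 2.1)), ((5, 5), (6.1, 7.0)), ((2, 1), (2.9, 3.0))]
dx, dy = learn_grasp_offset(demos)

def grasp_point(keypoint):
    """Apply the averaged offset to a new object's keypoint."""
    return (keypoint[0] + dx, keypoint[1] + dy)

print(grasp_point((10, 10)))
```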
Last up this week is some promising work from DeepMind on a multimodal “visual language model” that combines visual knowledge with linguistic knowledge, so that ideas like “three cats sitting on a fence” have a kind of crossover representation between grammar and imagery. That is, after all, how our own minds work.
Their new “general-purpose” model, Flamingo, can not only identify things visually but also engage in dialogue about them, not because it’s two models in one but because it fuses language and visual understanding together. As we’ve seen from other labs, this kind of multimodal approach produces good results, but it’s still highly experimental and computationally intense.
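The “crossover representation” idea boils down to mapping text and images into one vector space, where matching pairs land close together. Here is a minimal sketch of that matching step using cosine similarity; the hand-made 3-dimensional vectors stand in for what real encoders would produce, and none of this reflects Flamingo’s actual architecture.

```python
# Minimal sketch of the shared-embedding idea behind visual language
# models: text and images live in one vector space, and matching
# pairs are close. Vectors are hand-made toys, not real embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Pretend encoders already produced these embeddings.
text_emb = {"three cats on a fence": [0.9, 0.1, 0.0],
            "a dog in the snow":     [0.0, 0.2, 0.9]}
image_emb = {"cats.jpg": [0.8, 0.2, 0.1],
             "dog.jpg":  [0.1, 0.1, 0.95]}

def best_caption(image):
    """Pick the caption whose embedding is nearest the image's."""
    v = image_emb[image]
    return max(text_emb, key=lambda t: cosine(text_emb[t], v))

print(best_caption("cats.jpg"))  # three cats on a fence
```

Models like Flamingo go well beyond retrieval, generating text conditioned on images, but the shared space is what makes the grammar-and-imagery crossover possible at all.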