Tesla recently wrapped up its AI Day event, where the company showed off its upcoming Full Self-Driving (FSD) system on a suburban route. During the demo, a driver set a destination on a Tesla via the vehicle’s navigation system, then double-clicked the steering column to confirm the command.
Soon after, the vehicle started moving toward the destination. Throughout the journey, the car responded to road signs, traffic lights, and other obstacles, and it successfully avoided every pedestrian on the road using its AI system.
However, the Full Self-Driving system is still under development. Elon Musk believes the technology will keep improving and will eventually perform better than the average driver who is on the road every day.
“I’m confident that our Hardware 3 Full Self-Driving computer will be able to achieve full self-driving at a safety level much greater than a human. At least 200% or 300% better than a human.” – Elon Musk
During the AI Day presentation, Tesla briefly described how its artificial intelligence works, in terms accessible to a general audience. The FSD system begins by constructing a three-dimensional vector space, generated by sensing the environment through the car’s eight cameras. The inputs from all those cameras are then processed to create a bird’s-eye view with the car at its center.
All of the processed output is visible on the car’s dashboard screen. The car renders every object with simple detail and repeats the process continuously, generating a real-time 3D picture of the car’s surroundings. Tesla’s engineers explained that this new method helps the car recognize and detect its environment, and that they are also improving the accuracy of the vector space map and the precision of navigation within it.
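To make the idea concrete, here is a minimal sketch of multi-camera fusion into a car-centered bird’s-eye grid. This is purely illustrative, not Tesla’s pipeline: the grid size, cell resolution, and the assumption that detections already arrive in the car’s coordinate frame are all mine.

```python
import numpy as np

# Illustrative sketch (not Tesla's code): fuse object detections from
# several cameras into one bird's-eye occupancy grid centered on the car.
# Each detection is assumed to already be in the car's frame, as
# (x_forward_m, y_lateral_m). One grid cell covers 0.5 m.

GRID_SIZE = 80          # 80 x 80 cells -> a 40 m x 40 m area around the car
CELL_M = 0.5

def to_cell(x_m, y_m):
    """Map car-frame metres to grid indices, with the car at the center."""
    row = int(GRID_SIZE / 2 - x_m / CELL_M)   # further ahead -> higher up the grid
    col = int(GRID_SIZE / 2 + y_m / CELL_M)   # lateral offset -> left/right columns
    return row, col

def fuse(detections_per_camera):
    """Mark every detected object from every camera in one shared grid."""
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
    for dets in detections_per_camera:
        for x, y in dets:
            r, c = to_cell(x, y)
            if 0 <= r < GRID_SIZE and 0 <= c < GRID_SIZE:
                grid[r, c] = 1
    return grid

# Two cameras see the same pedestrian 5 m ahead; one also sees a car 10 m ahead.
grid = fuse([[(5.0, 0.0)], [(5.0, 0.0), (10.0, -2.0)]])
print(int(grid.sum()))  # duplicate sightings of one object collapse into one cell
```

The point of the shared grid is that overlapping camera views agree on a single world position, so the same pedestrian seen by two cameras becomes one object, not two.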
For example, the AI can cache information so that it retains the positions of vehicles waiting at an intersection: even when they are blocked from view by cross-traffic, the system keeps them in mind for later reference and precaution. The system likewise predicts and remembers the position of the vehicle in front of you. As a result, if the car temporarily loses vision because of snow or water splashing on the cameras, or if the radar fails to respond for a moment, you remain safe.
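The caching idea above can be sketched as a tiny object memory that holds the last known position of each tracked object for a few frames before giving up on it. The class, the frame budget, and the data shapes are all assumptions for illustration, not Tesla’s implementation.

```python
# Illustrative sketch (not Tesla's code): a small object memory that keeps
# the last known position of each tracked object for a few frames, so a
# briefly occluded vehicle is not immediately forgotten.

MAX_MISSED = 3  # frames an object may go unseen before being dropped (assumed)

class ObjectMemory:
    def __init__(self):
        self.tracks = {}   # object id -> {"pos": (x, y), "missed": n}

    def update(self, observations):
        """observations: {object_id: (x, y)} for objects seen this frame."""
        for oid, pos in observations.items():
            self.tracks[oid] = {"pos": pos, "missed": 0}
        # Age every track that was not observed this frame.
        for oid in list(self.tracks):
            if oid not in observations:
                self.tracks[oid]["missed"] += 1
                if self.tracks[oid]["missed"] > MAX_MISSED:
                    del self.tracks[oid]  # stale for too long: forget it

    def position(self, oid):
        track = self.tracks.get(oid)
        return track["pos"] if track else None

mem = ObjectMemory()
mem.update({"car_A": (12.0, 3.0)})   # seen waiting at an intersection
mem.update({})                        # occluded by cross-traffic...
mem.update({})                        # ...for two frames
print(mem.position("car_A"))          # still remembered: (12.0, 3.0)
```

After enough consecutive unseen frames the track is dropped, which is the trade-off any such memory makes between persistence and stale data.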
Tesla calls this vector space data and its overall processing the Neural Net Planner. Here a set of AI algorithms handles the routing, trajectory, and behavior of the car on the road while FSD is active. The Planner handles turns, pedestrian avoidance, and every lane change during a commute. The car runs thousands of simulations per minute to maintain a complete real-time 3D environment around itself, and it also predicts the behavior of the other moving elements on the road, e.g., vehicles and pedestrians.
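The gist of candidate-based planning can be shown in a few lines: generate several possible paths, score each against predicted obstacle positions, and pick the safest one that still makes progress. This toy version, with its made-up clearance threshold and penalty weights, is only a sketch of the general technique, not the Neural Net Planner itself.

```python
# Illustrative sketch of trajectory planning (not Tesla's Neural Net Planner):
# sample candidate paths at different lateral offsets, score each against
# obstacle positions, and choose the lowest-cost path.

def score(path, obstacles, min_clearance=2.0):
    """Lower is better: heavily penalise passing too close to an obstacle."""
    penalty = 0.0
    for px, py in path:
        for ox, oy in obstacles:
            d = ((px - ox) ** 2 + (py - oy) ** 2) ** 0.5
            if d < min_clearance:
                penalty += (min_clearance - d) * 100  # safety dominates
    # Mild preference for staying near the lane centre (y = 0).
    penalty += sum(abs(py) for _, py in path)
    return penalty

def plan(obstacles):
    """Try a few lateral offsets; each candidate path is 10 points, 1 m apart."""
    candidates = {off: [(x, off) for x in range(10)] for off in (-2, -1, 0, 1, 2)}
    return min(candidates, key=lambda off: score(candidates[off], obstacles))

print(plan([(5, 0)]))   # obstacle dead ahead in-lane -> swerve to an offset
print(plan([]))         # clear road -> hold the lane centre
```

A real planner evaluates far richer candidates (speed profiles, curvature, comfort) at very high rates, but the structure is the same: propose, score, select.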
In Tricky Situations
During the event, a Tesla running FSD faced another vehicle on a congested, narrow road with cars parked on both sides. By detecting the speed, behavior, and path of the oncoming car, the system resolved the tricky negotiation of whether to wait and let the other car pass or to move ahead and let the other car make way. In that situation, the other car stopped and allowed the Tesla to go first.
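The yield-or-proceed decision can be boiled down to a sketch like the one below. The width threshold, speed cutoff, and inputs are invented for illustration; Tesla’s actual logic is learned and far more nuanced.

```python
# Illustrative sketch (not Tesla's logic) of the narrow-road negotiation:
# proceed only if there is room for both cars, or if the oncoming car is
# clearly yielding (stopped or decelerating); otherwise wait.

ROAD_WIDTH_NEEDED = 4.0   # metres needed for two cars to pass (assumed value)

def decide(free_width_m, oncoming_speed_mps, oncoming_decelerating):
    if free_width_m >= ROAD_WIDTH_NEEDED:
        return "proceed"                  # both cars fit: no negotiation needed
    if oncoming_speed_mps < 0.5 or oncoming_decelerating:
        return "proceed"                  # the other car is yielding to us
    return "yield"                        # wait and let the other car pass first

# The event's scenario: a narrow gap, and the other car stops for the Tesla.
print(decide(free_width_m=2.5, oncoming_speed_mps=0.0,
             oncoming_decelerating=False))   # -> proceed
```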
All of this takes place inside the car. However, the key to its efficiency lies in training and simulation, most of which is done in Tesla’s data centers. Only after extensive training does the FSD software become ready to roll out on real roads. During training, the software is fed millions of data samples so that it learns to recognize and label objects correctly for later interactions.
Accumulation of Data
Tesla is now working on a project called Project Dojo, which collects and feeds simulation data at massive scale to Tesla’s own purpose-built, silicon-based AI training chip. The project starts with the Tesla D1 computing chip, which is tiled to build Dojo training units; those are tiled in turn to build the ExaPOD processing unit, according to Tesla. To put that in perspective, Tesla compared it to the processing power of roughly 30,500 NVIDIA RTX 3090 GPUs working at once. That is where a custom AI training chip comes in handy.
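The tiling hierarchy is simple multiplication. The per-tile and per-ExaPOD counts below are the figures given at the AI Day presentation as I recall them; treat them as assumptions for this back-of-envelope arithmetic rather than official specifications.

```python
# Back-of-envelope sketch of the Dojo tiling hierarchy.
# Assumed figures (from the AI Day presentation, not verified specs):
CHIPS_PER_TILE = 25       # D1 chips tiled into one Dojo training unit
TILES_PER_EXAPOD = 120    # training units tiled into one ExaPOD

chips_per_exapod = CHIPS_PER_TILE * TILES_PER_EXAPOD
print(chips_per_exapod)   # 3000 D1 chips in a single ExaPOD
```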
All of the data in that data center will accelerate the development of the FSD AI in Tesla cars, and Tesla Bots and other robotics projects will surely benefit in the future as well. Stay tuned for more updates on Tesla’s Full Self-Driving cars.