Educational tech company Udacity, whose plans for a self-driving car we detailed last edition, promised to open-source the complete design. To that end, the company has released its self-driving car simulator under an open source licence. With a working knowledge of the Unity engine, contributors can create their own tracks on which to trial their autonomous car software.

Udacity has set developers several challenges, the first of which was to design a 3-D model for a camera mount, supporting the lens and camera body, that attaches using standard GoPro hardware.

“Cars are often bumpy, unpredictable – and the data we record must be consistent otherwise hours of driving are rendered useless,” the company explained on its website.
After receiving ‘tons’ of entries, Udacity 3-D printed and tested each one before announcing US undergraduate student Haden Wasserbaech as the winner.

Challenge #2, which is currently open, asks participants to teach a car to drive under all sorts of conditions, using a convolutional neural network that takes raw input (camera imagery) and produces a direct steering command. These solutions “are considered the holy-grail of current autonomous vehicle technology, and are speculated to populate the first wave of self-driving cars on roads and highways”, Udacity says.

“By letting the car figure out how to interpret images on its own, we can skip a lot of the complexity that exists in manually selecting features to detect and drastically reduce the cost required to get an autonomous vehicle on the road by avoiding LiDAR-based solutions.”
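An end-to-end steering network of this kind can be sketched in a few lines. The model below is a minimal illustration only, loosely modelled on NVIDIA's published PilotNet layout rather than Udacity's reference solution; the layer sizes, the 66×200 input resolution, and the class name are assumptions.

```python
# Minimal end-to-end steering sketch: raw camera image in, steering value out.
# Layer sizes follow NVIDIA's PilotNet paper; this is illustrative, not
# Udacity's challenge-winning architecture.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor over a 3 x 66 x 200 RGB frame.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected regressor producing a single steering command.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),
        )

    def forward(self, x):
        return self.regressor(self.features(x))
```

Trained with a regression loss (for example mean squared error) against steering angles recorded from human drives, the single output neuron becomes the “direct steering command” the challenge describes.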

Challenge #3, which is also open, asks entrants to develop image-only solutions for localisation (telling the car exactly where it is in the world). GPS systems can be inaccurate, depending on the conditions, and are thus unsuitable on their own for autonomous vehicles.

“By processing imagery in real-time and comparing those images to previous drives in the same area, you can actually get a localisation solution that is good enough for use in navigation,” the Udacity brief reads.

“Think of it this way: When you are walking down a street that you’ve traversed several times before, you know where you are because of how close you are to a certain building, intersection, or bridge. This information is all visual, and we can teach computers how to make the same decisions based off of landmarks that they can interpret.”
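The approach Udacity describes, comparing live imagery against previous drives of the same area, can be illustrated with a toy nearest-neighbour matcher. The function names and the histogram descriptor below are hypothetical stand-ins; a production system would use robust local features and geometric verification rather than intensity histograms.

```python
# Toy place-recognition sketch: localise a frame by finding the most
# similar keyframe recorded on a previous drive and returning its pose.
# Intensity histograms are a deliberately crude descriptor, used here
# only to illustrate the idea of matching against past imagery.
import numpy as np

def histogram(image, bins=32):
    """Normalised greyscale-intensity histogram as a cheap image descriptor."""
    h, _ = np.histogram(image, bins=bins, range=(0, 256))
    return h / h.sum()

def localise(frame, keyframes):
    """Return the pose of the stored keyframe most similar to `frame`.

    keyframes: list of (image, pose) pairs recorded on previous drives,
    where pose is whatever localisation output the map stores (e.g. x, y).
    """
    query = histogram(frame)
    best_pose, best_dist = None, float("inf")
    for image, pose in keyframes:
        dist = np.linalg.norm(query - histogram(image))
        if dist < best_dist:
            best_pose, best_dist = pose, dist
    return best_pose
```

This mirrors the walking-down-a-street analogy: the car recognises where it is because the current view most closely resembles a view it has stored from an earlier pass.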

Challenge #4 is to develop an Android dashboard. “As self-driving vehicles continue to capture the imagination of dreamers, builders, innovators, and engineers across the globe, it’s inevitable that radical new passenger experiences will come to life. Why have a steering wheel (and awkward analogue controls) when level-5 (no human input) autonomy is achieved?” Udacity asks.

Self-driving vehicle software testing is usually conducted in virtual environments. It’s cheaper than building an entire prototype vehicle, and circumvents safety and accessibility concerns of the real world.

The move should drive innovation in the field, and hopefully decrease the time until self-driving cars are viable in the real world.

With human error responsible for 94 per cent of car accidents, removing people from the driving process could save thousands of lives every year. In Australia alone, a 94 per cent reduction in the road toll would see last year’s tally of 1,290 deaths drop to around 77. In the US, where some 30,000 die on the roads every year, the figure would come down to around 1,800.
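The projections above are straightforward arithmetic; a one-line helper makes the calculation explicit (the function name is ours, not from the article).

```python
# Apply the article's 94 per cent reduction to an annual road toll.
def projected_toll(current_toll, reduction=0.94):
    """Deaths remaining if the stated share of accidents is eliminated."""
    return current_toll * (1 - reduction)

# Australia: 1,290 * 0.06 = 77.4, i.e. around 77 deaths.
# US: 30,000 * 0.06 = 1,800 deaths.
```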
