Elektrobit Opens Up New Horizons For Automated Driving

Elektrobit is one of the most important suppliers of embedded software solutions for the automotive industry. EB is introducing new functions to improve driver assistance systems. EB’s electronic horizon combines navigation software and driver assistance software to offer a smoother driving experience.

“EB is one of the few automotive suppliers offering an electronic horizon solution that combines both navigation software and driver assistance systems software, thus offering a seamless experience from one single source. EB’s electronic horizon features the most detailed road geometry data currently in the market. It offers the same precision as data used for highway engineering and thus allows for smoother and more accurate driver assistance functions like predictive curve lighting and range determination. The EB Assist Electronic Horizon Solution is also able to deliver this information both to EB’s own driver assistance software development environment and to a wide range of other driver assistance platforms from various suppliers. In addition, electronic horizon data can be visualized on EB’s new driver assistance testing tool, the EB Assist Car Data Recorder.”

For the Full Press Release Click Here

The Google Car, a look at some of the technologies that make it go

Last week was a big one for driverless technology in the news, as Google announced that its self-driving car project has become advanced enough that over the past year they’ve “shifted the focus of the Google self-driving car project onto mastering city street driving”.

Google Tech 1

Eric Jaffe of Atlantic Cities took a ride with Google’s Dimitri Dolgov, the car’s software lead, and in his article Jaffe described the ride as “amazingly smooth.”  Here’s how Jaffe described part of the trip.

“We go through a yellow light, the car having calculated in a fraction of a second that stopping would have been more dangerous. We push past a nearby car waiting to merge into our lane, because our vehicle’s computer knows we have the right-of-way. We change into the right lane for a seemingly pointless reason until, a minute later, the car signals a right turn. We go the exact speed limit because maps the car consults tell it this road’s exact speed limit. The car identifies orange cones in the shoulder and we drift laterally in our lane, to give any road workers more space.

Between you and me: amazingly smooth.”

For Google’s engineers, a safe and boring trip is now expected. Google’s twenty-four self-driving cars have driven over 700,000 miles since 2009 with only two minor accidents, both due to driver error. Having put the car through its paces on the highway, Google decided to start testing on unpredictable city and suburban streets. Twice during Jaffe’s ride-along, Dolgov had to take manual control of the car, a relatively high number for a test drive and a source of frustration for the engineers. The fact that the car was completely on its own the rest of the time, however, illustrates just how far this project has come. Two years ago, the Google car couldn’t handle half the situations it can now, and the situations it still can’t handle are detected fast enough for the car to stop safely or the driver to take over, a clear illustration of Google’s “safety first” attitude. But what makes Google’s driverless car driverless? What systems are responsible for this outstanding achievement?

Google Tech 2

The first part of the Google Car technology is the LIDAR system (Laser Imaging Detection and Ranging). This system is essentially a laser radar composed of a rangefinder and a Velodyne 64-beam laser. It has the appearance of a large gray bucket mounted atop the Google Car.  This “bucket” contains those 64 lasers and spins them around 10 times a second. The rangefinder determines how far away the car is from something by measuring how long it takes for the LIDAR’s lasers to hit an object, bounce off, and return to the system. The LIDAR is used in conjunction with pre-loaded maps of the test area, constructed in painstaking detail by Google’s engineers using Google Maps and Street View. These maps cannot account for mobile objects the way LIDAR can, but they give the positions of all static objects along the current test route, which gives the car a good idea of what to expect. Taken together, the LIDAR and maps let the car see like this:

Google Tech 3

In the picture above the car encounters a set of traffic cones during Jaffe’s ride-along. The car wasn’t able to navigate past the roadwork, but it saw the obstacle and came to a safe stop before hitting it. All of this footage is recorded on a laptop carried by the Google engineer riding with the backup driver, along with a comment box for noting anything interesting or flagging any major issues.
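The rangefinder’s distance measurement described above boils down to simple time-of-flight arithmetic: a laser pulse travels out and back, so the one-way distance is half the round-trip time multiplied by the speed of light. Here is a minimal sketch in Python; the function name and sample timing value are ours for illustration, not taken from Google’s actual software:

```python
# Time-of-flight ranging, the principle behind the LIDAR rangefinder.
# The pulse travels to the object and back, so the one-way distance is
# (speed of light * round-trip time) / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after ~200 nanoseconds corresponds to an object about 30 m away.
print(round(range_from_time_of_flight(200e-9), 1))  # → 30.0
```

At these timescales the electronics matter more than the math: resolving distances to centimeters means resolving round trips to well under a nanosecond, which is part of what makes units like the Velodyne expensive.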

Other important components of the Google Car include GPS, cameras, and radar. GPS (Global Positioning System) devices, such as portable units and those in smartphones, communicate with dedicated satellites in orbit to determine their location. GPS can place a driver within several meters of their intended destination, close enough for their eyes and ears to take over. The Google Car has LIDAR, radar, and cameras for its eyes and ears, but GPS gives it the big picture, which is essential for long-term navigation.

LIDAR can give the shape of all objects around the car, but short-range digital cameras allow the car to read things, like road signs. Radar has a similar job to the cameras and LIDAR but at much longer range, up to 160 meters in any direction to be exact. Radio waves are emitted from the radar, hit a solid object, and bounce back to give an idea of how far away the object is, how big it is, and where it’s likely to move. This preps the car for non-static obstacles its cameras and LIDAR can’t see yet. The Google Car’s software is sophisticated enough to use all of these tools to create a safe, smooth ride, and Jaffe describes later in his article how the car performed.

“The car then passed a few more staged tests. We slowed for a group of jaywalkers and a rogue car turning in front of us from out of nowhere. We stopped at a construction worker holding a temporary STOP sign and proceeded when he flipped it to SLOW — proof the car can read and respond to dynamic surroundings, making it less reliant on pre-programmed maps. We merged away from a lane blocked by cones not unlike the one that stumped us earlier.”
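The radar’s sense of where an object “is likely to move” starts with its relative speed, which can be read directly off the Doppler shift of the reflected wave. A minimal sketch of that calculation, assuming a 77 GHz carrier (a common automotive radar band; the carrier figure and function names are our assumptions, not details from the article):

```python
# Doppler-based speed estimation, a core radar technique.
# An approaching object compresses the reflected frequency; the shift
# maps to relative speed as v = (doppler_shift * c) / (2 * carrier).

C = 299_792_458.0   # speed of light, m/s
CARRIER_HZ = 77e9   # assumed automotive radar carrier frequency

def speed_from_doppler(shift_hz: float) -> float:
    """Relative (closing) speed in m/s implied by a Doppler shift in Hz."""
    return shift_hz * C / (2.0 * CARRIER_HZ)

# A shift of roughly 5.1 kHz corresponds to about 10 m/s closing speed.
```

Tracking that speed over successive radar sweeps is what lets the software extrapolate an object’s likely path before the cameras or LIDAR can resolve it.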

We hope this article gives you a good overview of what this amazing technology can do and how far along it is. In the coming blogs, we plan to go into much greater detail about the individual systems, their history of development and how they work.

The Future Calling: GM on Cell Phone Integration for Advanced Auto Safety

We’ve seen lots of articles and presentations on how DSRC technology is the foundation for safety applications in Connected Vehicles.  Should other communications technologies be used as well?  If so, when would one technology make sense over the other?

Donald Grimm, Senior Researcher at GM, outlines the benefits of integrating cell phones into the safety network and the opportunities they present in an exclusive podcast with Telematics Update.

Click here to listen to the exclusive discussion.

Clifford Nass Automated Vehicle Simulator

We’ve talked a lot about how Silicon Valley is one of the main hubs for automotive technology, with facilities such as Google and the Stanford Automotive Research Institute headquartered here.

As a student at Stanford, and specifically as a Psychology major, I have been lucky enough to experience research in automation first-hand. In many of my undergrad psych classes, students are required to participate in graduate students’ research as test subjects. In my sophomore year I was fortunate enough to serve as a participant in a study on automated vehicles.

The study took place in the Clifford Nass Communication between Humans and Interactive Media (CHIMe) Lab. While Professor Nass unfortunately passed away last year, his research and contributions to the field of automation were groundbreaking.

Clifford Nass

I came into the study unaware of what they were testing (some form of deception is often involved in many psychology studies). It was remarkable to see when I got there, however, that the researchers had created an actual car for me to sit in. It was a pretty nice car, too. I think they got parts from a real Cadillac. The car had a driver and passenger seat, complete dashboard, pedals, wheels, and even cup holders. (Note: I bought Shawn a car last year as an early graduation gift.  Cup holders were an important consideration for Shawn - John)

Stanford Simulator

While I sat in the car, a large screen in front of me projected a virtual road as I drove. The car reacted to my touch, turning when I turned the wheel, accelerating when I pushed the gas pedal, and stopping when I hit the brakes.

Of utmost importance to our work here, however, was that the simulator asked me to put the car on autopilot. The screen would instruct me to put it into autopilot for a set period of time; I would then have the option to keep driving myself or allow the car to drive on its own. I found out later that the researchers were timing how long I kept the car on autopilot, testing how comfortable I felt with allowing the car to drive on its own.

Much to my own surprise at that time (this was before I knew anything about automated vehicles), I was very inclined to allow the car to drive on its own. I was especially inclined to allow it to drive on its own when the car was in the city or in an area with heavy traffic, where driving became more difficult.

It’s funny to think (especially considering I’m a Senior now and will be graduating soon) about how things really come full circle. I had no idea when I was getting into that car how much that study, which I just stumbled across in order to fulfill my credit for my class, would have an impact on the work we’re doing today.

Click here to see a video of the Stanford Autonomous Driving Simulator in action.

Shawn

System Testing

Software and system testing have been big news lately.  As is typical, software and system testing only becomes news when it isn’t done properly and when things don’t go well.  I think even the President would agree that inadequate testing caused many of the issues with the rollout of Healthcare.gov.

While the errors from this failing website have been frustrating to users and a headache for the Obama administration, they didn’t cause people to die.  Errors in Driverless Transportation systems, however, could have dire consequences, including the potential to kill people.  It is therefore crucial to test these systems thoroughly and continuously.

Bernie Gauf, President of the automated software testing firm IDT and co-author of the book Implementing Automated Software Testing writes:

“One of the real keys to software testing in general and system testing in particular is to get ahead of the process and to ensure that how solutions will be tested is planned in from the beginning”.

We at Driverless Transportation couldn’t agree more.  We see that testing will be required at multiple levels as the technology moves from science experiment to mainstream, main-street technology.  This will include not only the on-road tests done by various manufacturers and technology providers such as Google, Nissan, and Mercedes-Benz, but also a series of tests that must be performed at a variety of other levels.

The first level is testing the many components and technologies that individually make up the various technology sub-systems.  As new versions of these products are available, how do we ensure that they will continue to perform in the same way that previous versions have?
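One answer to that question is regression testing: replay recorded inputs against each new release and check the outputs against previously verified “golden” values. A minimal sketch of the idea, with a hypothetical `estimate_range` function standing in for a real sensor sub-system (the function, case values, and tolerances are all illustrative assumptions):

```python
# Golden-value regression testing: recorded inputs are replayed against
# each new version of a component, and outputs must match previously
# verified results within a fixed tolerance.

def estimate_range(round_trip_seconds: float) -> float:
    # Stand-in for the component under test (time-of-flight ranging).
    return 299_792_458.0 * round_trip_seconds / 2.0

GOLDEN_CASES = [
    (100e-9, 14.99),   # (recorded input, previously verified output in m)
    (200e-9, 29.98),
    (1e-6, 149.90),
]

def run_regression_suite() -> bool:
    """True only if every golden case still holds within tolerance."""
    return all(
        abs(estimate_range(t) - expected) < 0.01
        for t, expected in GOLDEN_CASES
    )

print(run_regression_suite())  # → True
```

The value of a suite like this is that it runs automatically on every new build, so a version that silently changes behavior is caught before it ever reaches a vehicle.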

Once the components have been tested, sub-systems within each vehicle must be tested.  This needs to include testing not just the new models and releases of these sub-systems but also individual vehicles, both as they roll off the assembly line and down the road as they age.  Software isn’t like fine wine: it doesn’t age well.

Finally, the most complex area is how vehicles connect both to each other and to the road; this is where the most testing is required and where the testing is most complicated, because the situations the cars could encounter and the interactions between them are almost unbounded.  This will be a significant challenge given an ever-changing number of systems and integration points.

In the coming weeks, we’ll look at what it takes to get a car (and driver) tested today and then investigate each of the areas above in more detail.  What are we missing?  Give us your thoughts below.

Breaking The Rules

I am very fortunate to be the father of three wonderful daughters.  They are mostly grown now: the oldest two are in college and the youngest is a senior in high school.  When they were younger, however, one of the challenges I remember was teaching them when it was okay, and in fact even necessary, to break the rules.

Some of these were quite trivial, like cutting under the ropes to avoid having to walk very far (as long as you are not cutting ahead of others in line); some were more significant, like breaking driving rules when rushing to the hospital in an emergency.

As driverless technologies emerge, the engineers programming these vehicles may have to confront these same issues. In his article in The Atlantic, entitled The Ethics of Autonomous Cars, Patrick Lin writes, “Sometimes good judgment can compel us to act illegally. Should a self-driving vehicle get to make that same decision?”

I think, realistically, it has to.

Check out the article and let us know what you think.

John