Why driverless car tech is ruining your driving skills

Surveys have shown consumers are fond of semi-autonomous features because they take the stress out of stop-and-go traffic and alleviate the monotony of long trips. But the freedom afforded by the new aids has invited abuse by drivers who treat the technology as if it were fully capable of taking control, with little or no human input necessary. YouTube videos have emerged showing daredevil drivers hopping into the back seat after tricking the technology into believing they have their hands on the wheel.

A federal investigation into the fatality last year in a Tesla Model S traveling in semi-autonomous Autopilot mode showed the driver had his hands on the wheel for just 25 seconds in the final 37 minutes before crashing into a semi. Tesla, which was cleared of responsibility by safety regulators, has modified Autopilot to require more driver input.

“At a very basic level, consumers don’t have any idea of how these systems work because they’re all named something different and they all function differently,” said Greg Brannon, director of automotive engineering and industry relations at the American Automobile Association.

Although AAA is urging automakers and regulators to come up with standard terms and parameters for semi-autonomous features, that conflicts with automakers’ desire to develop and market unique systems and seek an edge over competitors.

Some manufacturers are pushing the boundaries of safety to make their cars appear more advanced, Wakefield said, by fielding systems that allow drivers to keep their hands off the wheel for too long before a chime and dashboard light remind them to take hold again.

“The idea that you can take your hands off the wheel for 15 seconds and the driver is still in control, that’s not realistic,” said Lund, of IIHS. “If they’re taking their hands off for 15 seconds, then they’re doing some other things.”

Owner’s manuals

It’s difficult for drivers to understand what driver-assist systems can and can’t do because automakers sometimes send mixed signals. The seldom-read owner’s manual takes a cautious approach to explaining the aids because corporate lawyers water down wording to avoid exposing automakers to legal liability, AAA’s Brannon said.

Another risk, Lund warns, is that drivers become so accustomed to the aids that, when they get into older vehicles or rental cars, they forget the technology isn’t there.

Even if a driver hops into an unfamiliar car that is equipped with a system, performance varies widely by brand. Some adaptive cruise controls, for example, can bring a car to a full stop during low-speed driving, but not at highway speeds.

“So a driver may become accustomed to it working in town, but not realize that above speeds of 50 miles per hour, it’s not going to bring the vehicle to a stop,” Brannon said. “And that could end badly.”

This post was originally published by Bloomberg | Quint

DNA techniques could transform facial recognition technology

When police in London recently trialled a new facial recognition system, they made a worrying and embarrassing mistake. At the Notting Hill Carnival, the technology made roughly 35 false matches between known suspects and members of the crowd, with one person “erroneously” arrested.

Camera-based visual surveillance systems were supposed to deliver a safer and more secure society. But despite decades of development, they are generally not able to handle real-life situations. During the 2011 London riots, for example, facial recognition software contributed to just one arrest out of the 4,962 that took place.

The failure of this technology means visual surveillance still relies mainly on people sitting in dark rooms watching hours of camera footage, which is totally inadequate to protect people in a city. But recent research suggests video analysis software could be dramatically improved thanks to software advances made in a completely different field: DNA sequence analysis. By treating video as a scene that evolves in the same way DNA does, these software tools and techniques could transform automated visual surveillance.

Since the Metropolitan Police installed the first CCTV cameras in London in 1960, as many as 6m cameras have been deployed across the UK. And body-worn cameras are now being issued to frontline officers, creating not only even more video footage to analyse, but also more complex data because of constant camera motion.

Yet automated visual surveillance remains mostly limited to tasks in relatively controlled environments. Detecting trespass on a specific property, counting people passing through a given gate, or number-plate recognition can be completed quite accurately. But analysing footage of groups of people or identifying individuals in a public street is unreliable because outdoor scenes vary and change so much.

In order to improve automated video analysis, we need software that can deal with this variability rather than treating it as an inconvenience – a fundamental change. And one area that is used to dealing with large amounts of very variable data is genomics.

[Image: Finding faces in the crowd. Shutterstock]

Since the three billion DNA characters of the first human genome (the entire set of genetic data in a human) were sequenced in 2001, the production of this kind of genomic data has increased at an exponential rate. The sheer amount of this data and the degree to which it can vary means vast amounts of money and resources have been needed to develop specialised software and computing facilities to handle it.

Today it’s possible for scientists to relatively easily access genome analysis services to study all sorts of things, from how to combat diseases and design personalised medical services, to the mysteries of human history.

Genomic analysis includes the study of the evolution of genes over time by investigating the mutations which have occurred. This is surprisingly similar to the challenge in visual surveillance, which relies on interpreting the evolution of a scene over time to detect and track moving pedestrians. By treating differences between the images that make up a video as mutations, we can apply the techniques developed for genomic analysis to video.
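As a rough illustration of the analogy only (not the researchers’ actual pipeline), the sketch below encodes frame-to-frame change in a synthetic video as a string of “mutation” symbols and compares two such strings with a standard sequence matcher, much as two DNA sequences are aligned. The frame signatures, symbols and thresholds are all invented for the example.

```python
# Illustrative sketch of the "vide-omics" analogy: encode frame-to-frame
# change as a string of "mutation" symbols and align two such strings.
# Everything here (signatures, symbols, thresholds) is invented for the demo.
import difflib
import numpy as np

def frame_signature(frame, grid=4):
    """Coarse grid of block-mean intensities for one grayscale frame."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    return np.array([[frame[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                      for j in range(grid)] for i in range(grid)])

def mutation_sequence(frames, threshold=5.0):
    """'=' marks a stable transition, 'M' a large scene change (a "mutation")."""
    symbols, prev = [], frame_signature(frames[0])
    for frame in frames[1:]:
        sig = frame_signature(frame)
        symbols.append('M' if np.abs(sig - prev).mean() > threshold else '=')
        prev = sig
    return ''.join(symbols)

def make_video(burst_start, n_frames=20):
    """Synthetic video: static background, a bright object visible for five frames."""
    frames = []
    for t in range(n_frames):
        frame = np.zeros((64, 64))
        if burst_start <= t < burst_start + 5:
            frame[10:30, 10:30] = 255.0
        frames.append(frame)
    return frames

seq_a = mutation_sequence(make_video(burst_start=5))
seq_b = mutation_sequence(make_video(burst_start=10))
similarity = difflib.SequenceMatcher(None, seq_a, seq_b).ratio()
print(seq_a)
print(seq_b)
print(f"alignment similarity: {similarity:.2f}")
```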

Early tests of this “vide-omics” principle have already demonstrated its potential. My research group at Kingston University has, for the first time, shown that videos can be analysed even when captured by a freely moving camera. By treating the camera’s motion as a set of mutations, the software can compensate for it so that a scene appears as if it had been filmed by a fixed camera.

Meanwhile, researchers at the University of Verona have demonstrated that image processing tasks can be encoded in such a way that standard genomics tools can be exploited. This is particularly important because such an approach significantly reduces the cost and time of software development.

Combining this with our strategy could eventually deliver the visual surveillance revolution that was promised many years ago. If the “vide-omics” principle were to be adopted, the coming decade could deliver much smarter cameras. In which case, we had better get used to being spotted on video far more often.

This article was originally published on The Conversation. Read the original article.

Driverless cars bring visions of building boom, suburban sprawl

Still, Saiz is optimistic that self-driving cars will unlock buildable space. Developers are already starting to target parking structures, gas stations and auto dealerships, betting that they’ll be able to redevelop the sites as car ownership becomes obsolete, said Rick Palacios, director of research at John Burns Real Estate Consulting.

Computers will soon be able to fix themselves – will that kill IT departments?

Robots and AI are replacing workers at an alarming rate, from simple manual tasks to making complex legal decisions and medical diagnoses. But the AI itself, and indeed most software, is still largely programmed by humans.

Yet there are signs that this might be changing. Several programming tools are emerging that help to automate software testing, one of which we have been developing ourselves. The prospects look exciting, but they raise questions about how far this will encroach on the profession. Could we be looking at a world of Terminator-like software writers who consign their human counterparts to the dole queue?

We computer programmers devote an unholy amount of time to testing software and fixing bugs. It’s costly, time consuming and fiddly – yet it’s vital if you want to bring high quality software to market.

Testing, testing …

A common method of testing software, known as dynamic analysis, involves running a program, asking it to do certain things and seeing how it copes. Many tools exist to help with this process, usually throwing thousands of random choices at a program and checking all the responses.
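As a toy version of that idea (the function under test and the input generator here are invented purely for illustration), the sketch below bombards a small parser with random strings and records any input that raises an uncaught exception:

```python
# A toy random ("monkey") tester: throw random inputs at a function and
# record any that crash it. The parse_percentage function is hypothetical.
import random

def parse_percentage(text):
    """Function under test: convert strings like '42%' to a float in [0, 1]."""
    value = float(text.rstrip('%'))
    return value / 100.0

def random_input(rng, max_len=6):
    alphabet = "0123456789.%- "
    return ''.join(rng.choice(alphabet) for _ in range(rng.randint(0, max_len)))

def fuzz(target, n_trials=10_000, seed=1):
    rng = random.Random(seed)
    failures = []
    for _ in range(n_trials):
        case = random_input(rng)
        try:
            target(case)
        except Exception as exc:          # any uncaught exception counts as a find
            failures.append((case, type(exc).__name__))
    return failures

crashes = fuzz(parse_percentage)
print(f"{len(crashes)} crashing inputs found, e.g. {crashes[:3]}")
```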

Facebook recently unveiled a tool called Sapienz that is a big leap forward in this area. Originally developed by University College London, Sapienz is able to identify bugs in Android software via automated tests that are far more efficient than the competition – requiring between 100 and 150 choices by the user compared to a norm of nearer 15,000.

[Image: Bug on out. Phichak]

The difference is that Sapienz contains an evolutionary algorithm that learns from the software’s responses to previous choices. It then makes new choices that aim to find the maximum number of glitches and test the maximum number of kinds of choices, doing everything as efficiently as possible.
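The sketch below shows the general shape of such an evolutionary search, using an invented toy “app” and fitness function rather than anything resembling Sapienz itself: candidate sequences of user choices are scored by how many distinct app states they reach, and the fittest sequences are recombined and mutated each generation.

```python
# A minimal evolutionary search over sequences of user choices. The "app",
# actions and fitness function are all invented; Sapienz itself is a far more
# sophisticated, multi-objective system for real Android apps.
import random

ACTIONS = ["tap_menu", "tap_back", "scroll", "type_text", "rotate"]

def run_app(actions):
    """Toy 'app': a state counter that only advances on certain action patterns."""
    state, visited = 0, {0}
    for a in actions:
        if a == "tap_menu":
            state += 1
        elif a == "tap_back" and state > 0:
            state -= 1
        elif a == "type_text" and state == 2:
            state = 10                       # a hard-to-reach screen
        visited.add(state)
    return visited

def fitness(actions):
    return len(run_app(actions))             # more distinct states reached = better

def evolve(pop_size=30, seq_len=8, generations=40, seed=0):
    rng = random.Random(seed)
    population = [[rng.choice(ACTIONS) for _ in range(seq_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]      # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, seq_len)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.3:                # mutation: swap in a random action
                child[rng.randrange(seq_len)] = rng.choice(ACTIONS)
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

best_sequence, states_reached = evolve()
print(states_reached, best_sequence)
```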

It may soon have competition from DiffBlue, a spin-out from the University of Oxford. Its tools are built on an AI engine designed to analyse and understand what a program is doing, and the company is developing several of them to help programmers. One will find bugs and write software tests; another will find weaknesses that could be exploited by hackers; a third will improve code that could be better expressed or is out of date. DiffBlue recently raised US$22m in investment funding, and claims to be delivering these tools to numerous blue chip companies.

The tool that we have developed is dedicated to bug hunting. Software bugs are often just an innocent slip of the finger, like writing a “+” instead of a “-”; not so different from a typo in a Word document. Or they can arise because computer scientists like to count from zero instead of one, which leads to so-called “off by one” errors.
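For instance, a hypothetical off-by-one slip and its fix:

```python
# A hypothetical off-by-one slip of the kind described above.

def last_item_buggy(items):
    return items[len(items)]         # indexes one position past the end

def last_item_fixed(items):
    return items[len(items) - 1]     # indexing starts at zero

try:
    last_item_buggy([1, 2, 3])
except IndexError:
    print("off-by-one: index 3 is out of range for a 3-item list")
print(last_item_fixed([1, 2, 3]))    # prints 3
```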

[Image: Here he is! Fort Greene Focus, CC BY-SA]

You find these annoying little glitches by making one small change after another – repeatedly testing and tweaking until you make the right one. The answer is often staring you in the face – a bit like the game “Where’s Wally?” (or Waldo if you’re in North America). After hours of trying, you finally get that a-ha moment and wonder why you didn’t spot it sooner.

Our tool works as follows: office workers go about their normal administrative duties in the daytime and report any bugs in software as they find them. Overnight, when everyone is logged off, the system enters a “dream-like” state. It makes small changes to the computer code, checking each time to see if the adjustment has fixed the reported problem. Feedback from each run of the code is used to inform which changes would be best to try next time.
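A heavily simplified sketch of that overnight generate-and-validate loop follows, with an invented buggy function and a deliberately tiny set of edits (single operator swaps); the real system’s mutation operators and test infrastructure are of course much richer. Each candidate edit is compiled and checked against the reported failing test, and the first edit that makes the test pass is kept as the suggested fix.

```python
# Simplified "night shift" repair loop: mutate the code, re-run the failing
# test, keep the first edit that makes it pass. The buggy function, the test
# and the mutation operators are invented for this illustration.
import random

BUGGY_SOURCE = (
    "def discount(price, percent):\n"
    "    return price + price * percent / 100\n"   # bug: '+' should be '-'
)

# Candidate single-operator swaps, analogous to the one-character slips above.
MUTATIONS = [("+", "-"), ("-", "+"), ("*", "/"), ("/", "*")]

def reported_test(module):
    """The failing test from the bug report: a 10% discount on 200 should give 180."""
    return module["discount"](200, 10) == 180

def try_repair(source, attempts=200, seed=0):
    rng = random.Random(seed)
    for _ in range(attempts):
        old, new = rng.choice(MUTATIONS)
        idx = source.find(old)
        if idx == -1:
            continue                        # this operator isn't in the code
        candidate = source[:idx] + new + source[idx + len(old):]
        namespace = {}
        try:
            exec(candidate, namespace)      # compile and load the edited code
            if reported_test(namespace):
                return candidate            # this edit fixes the reported bug
        except Exception:
            continue                        # the edit broke the code; discard it
    return None

fix = try_repair(BUGGY_SOURCE)
print(fix or "no fix found")
```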

We tested it for four months in a Reykjavik organisation with about 200 users. In that time, it reported 22 bugs and all were fixed automatically. Each solution was found on these “night shifts”, meaning that when the programmer arrived at the office in the morning, a list of suggested bug fixes was waiting for them.

The idea is to put the programmer in control and change their job: less routine checking and more time for creativity. It’s roughly comparable to how spell checkers have taken much of the plod out of proof-reading a document. Both tools support the writer, and reduce the amount of time you probably spend swearing at the screen.

We have been able to show that the same system can be applied to other tasks, including making programs run faster and improving the accuracy of software designed to predict things (full disclosure: Saemundur recently co-founded a company to exploit the IP in the system).

Future shock?

It is easy enough to see why programs like these might be useful to software developers, but what about the downside? Will companies be able to downsize their IT requirement? Should programmers start fearing that Theresa May moment, when the automators show up with their P45s?

We think not. While automation like this raises the possibility of companies cutting back on certain junior programming roles, we believe that introducing automation into software development will allow programmers to become more innovative. They will be able to spend more time developing rather than maintaining, with the potential for endlessly exciting results.

Careers in computing will not vanish, but some boring tasks probably will. Programmers, software engineers and coders will have more automatic tools to make their jobs easier and more efficient. But jobs probably won’t be lost so much as changed. We have little choice but to embrace technology as a society. If we don’t, we’ll simply be left behind by the countries that do.

This article was written together with Alexander Brownlee, Senior Research Assistant, University of Stirling, and John R. Woodward, Lecturer in Computer Science, Queen Mary University of London

This article was originally published on The Conversation. Read the original article.
