For Your Eyes Only

 


Everyone has a unique pattern of eye movements. A new biometric security system exploits this for a simple, hard-to-fool approach. Wednesday, November 10, 2010 – By Duncan Graham-Rowe.

 

The way you view the world is unique, so why not use it to identify you?

 

A company in Israel has developed a security system that does just this, exploiting a person's unique pattern of eye movements to identify them. Most biometric security systems measure physical features that are constant, such as fingerprints or iris patterns. An eye-tracking system has the potential to be harder to fool and easier to use, its creators say.

The new system tracks the way a person's eye moves as he watches an icon roam around a computer screen. The way the icon moves can be different every time, but the user's eye movements include "kinetic features"—slight variations in trajectory—that are unique, making it possible to identify him. This is less complicated than using a long pass phrase or a smart card to gain access to a computer system or a building.

"The interface is really very simple," says Daphna Palti-Wasserman, CEO of ID-U Biometrics, the company that developed the technology. "The user watches a target moving on a screen and a camera monitors their eye movement responses."

 

 

Eye tracking also requires no specialist hardware, other than a camera and a display, so it is cheaper and easier to deploy, Palti-Wasserman says. Using a standard video camera, the system can identify users with an accuracy of 97 percent, she says. Many cell phones and laptops already have this kind of hardware, so ID-U's system could be deployed widely for both desktop and mobile computing. The company is currently working on an app for the iPhone 4.

Other biometric systems can be fooled by a very accurate copy of, for example, a fingerprint or retina. This new system uses a biometric pattern that's very hard to copy, Palti-Wasserman says. "What we're doing is a challenge-response sequence," she says. "The whole process depends on what is being shown on the screen."

Kevin Bowyer, a biometrics expert at the University of Notre Dame, notes that some voice, keystroke, and handwriting-based security systems already use a challenge-response approach. "The main advantage is to control the situation more in order to get better information and improve the confidence in your results," he says.

ID-U's system moves an icon across a screen in a way that elicits about a dozen distinct characteristics as the viewer's eyes move. These tiny movements are sampled 30 times a second. "When the target jumps, it activates specific mechanisms of eye movement, but when it moves smoothly or slowly, different mechanisms are activated," says Palti-Wasserman. ID-U will not reveal precisely which metrics its system uses, but Palti-Wasserman says it is analogous to watching the trajectories of two different people driving around the same track—they follow the same route, but there will be distinct differences.
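ID-U has not disclosed which metrics it uses, but the driving-trajectory analogy can be illustrated in code. The sketch below is not the company's algorithm; it is a minimal stand-in that compares two gaze trajectories, sampled at 30 Hz, by correlating their velocity profiles, on the assumption that the "kinetic features" live in how the eye moves rather than where it lands. All function names and the acceptance threshold are hypothetical.

```python
import numpy as np

def trajectory_similarity(gaze, template, fps=30):
    """Compare a gaze trajectory against an enrolled template.

    Both arguments are (N, 2) arrays of x/y gaze positions sampled at
    `fps` Hz while the user follows the same on-screen target path.
    """
    n = min(len(gaze), len(template))
    g = np.asarray(gaze, dtype=float)[:n]
    tmpl = np.asarray(template, dtype=float)[:n]
    # Velocity profiles capture the "kinetic" character of the movement,
    # not just where the eye ended up at each sample.
    v1 = np.diff(g, axis=0) * fps
    v2 = np.diff(tmpl, axis=0) * fps
    # Cosine similarity between flattened velocity profiles, in [-1, 1].
    a, b = v1.ravel(), v2.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify(gaze, template, threshold=0.8):
    """Accept the user if the response resembles their enrolled pattern."""
    return trajectory_similarity(gaze, template) >= threshold
```

A real system would compare many such features across repeated challenges; this sketch shows only the basic shape of the comparison: two people tracing the same target produce trajectories that follow the same route but differ in their fine dynamics.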

Bowyer says the biometric system will need to be tested on hundreds of different people, with significant time intervals between tests, to prove it is reliable. "There are lots of cases of people getting great performance numbers when they bring a set of people in and take several images of them in one session," he says. "But then if they took the same number of images at weekly or monthly intervals, the performance drops drastically."
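Bowyer's point can be made concrete with invented numbers. The sketch below computes a false-reject rate over two hypothetical score sets (the scores and threshold are made up for illustration), showing how same-session testing can flatter a system that degrades when samples are weeks apart:

```python
def false_reject_rate(genuine_scores, threshold):
    """Fraction of genuine attempts wrongly rejected at `threshold`."""
    return sum(1 for s in genuine_scores if s < threshold) / len(genuine_scores)

# Invented similarity scores (0 to 1) for the same enrolled users:
same_session = [0.97, 0.95, 0.96, 0.98, 0.94]    # probe images minutes apart
cross_session = [0.91, 0.70, 0.88, 0.65, 0.93]   # probe images weeks apart

# At a threshold of 0.8, same-session testing suggests a flawless system,
# while cross-session testing rejects 2 of these 5 genuine attempts.
```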

 

Crowdsourcing Surveillance

 

Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a "Viewer." Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner. Viewers are rated on their ability to distinguish real shoplifting from false alarms, can win £1,000 if they detect the most shoplifting in a given time interval, and otherwise earn a wage that most likely won't cover their initial fee.

Although the system has some nod towards privacy, groups like Privacy International oppose the system for fostering a culture of citizen spies. More fundamentally, though, I don't think the system will work.

Internet Eyes is primarily relying on voyeurism to compensate its Viewers. But most of what goes on in a retail store is incredibly boring. Some of it is actually voyeuristic, and very little of it is criminal. The incentives just aren't there for Viewers to do more than peek, and there's no obvious way to discourage them from siding with the shoplifter and simply watching the scenario unfold.

 

This isn't the first time groups have tried to crowdsource surveillance camera monitoring. Texas's Virtual Border Patrol tried the same thing: deputizing the general public to monitor the Texas-Mexico border. It ran out of money last year, and was widely criticized as a joke.

 

This system suffered the same problems as Internet Eyes — not enough incentive to do a good job, boredom because crime is the rare exception — as well as the fact that false alarms were very expensive to deal with.

 

Both of these systems remind me of the one time this idea was conceptualized correctly. Invented in 2003 by my friend and colleague Jay Walker, US HomeGuard also tried to crowdsource surveillance camera monitoring. But this system focused on one very specific security concern: people in no-man's areas. These are areas between fences at nuclear power plants or oil refineries, border zones, areas around dams and reservoirs, and so on: areas where there should never be anyone.

 

The idea is that people would register to become "spotters." They would get paid a decent wage (that, plus patriotism, was the incentive), receive a stream of still photos, and be asked a very simple question: "Is there a person or a vehicle in this picture?" If a spotter clicked "yes," the photo — and the camera — would be referred to whatever professional response the camera owner had set up.

 

HomeGuard would monitor the monitors in two ways. One, by regularly sending stored, known photos to spotters to verify that they were paying attention. And two, by sending live photos to multiple spotters and correlating the results, escalating to many more spotters whenever one claimed to have spotted a person or vehicle.
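HomeGuard's design was never published as code, but the two checks just described (seeded known photos as attention tests, and multi-spotter correlation on live positives) can be sketched. Everything here, including the class name, the quorum size, and the photo format, is an assumption for illustration:

```python
import random

class SpotterPool:
    """Minimal sketch of HomeGuard-style "monitoring the monitors."

    Each photo is either a live image or a seeded known image with a
    ground-truth answer. Seeded images measure whether a spotter is
    paying attention; live positives are confirmed by polling several
    spotters and taking a majority vote before referring to real guards.
    """

    def __init__(self, spotters, quorum=3):
        self.spotters = spotters               # spotter id -> answer function
        self.scores = {s: [] for s in spotters}
        self.quorum = quorum                   # spotters polled per live photo

    def attention_score(self, spotter):
        """Fraction of seeded attention checks this spotter got right."""
        checks = self.scores[spotter]
        return sum(checks) / len(checks) if checks else 1.0

    def review(self, photo, truth=None):
        """Route one photo; `truth` is set only for seeded known images."""
        if truth is not None:
            # Attention check: send to one random spotter, record the result.
            s = random.choice(list(self.spotters))
            self.scores[s].append(self.spotters[s](photo) == truth)
            return None
        # Live photo: poll `quorum` spotters and take a majority vote.
        votes = [self.spotters[s](photo) for s in list(self.spotters)[:self.quorum]]
        return sum(votes) > len(votes) // 2    # True -> refer to the site's guards
```

The majority vote is what makes the "yes/no, person or vehicle" question robust to a single bored or careless spotter, which is exactly the failure mode the seeded photos are meant to detect.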

 

Just knowing that there's a person or a vehicle in a no-man's area is only the first step in a useful response, and HomeGuard envisioned a bunch of enhancements to the rest of that system. Flagged photos could be sent to the digital phones of patrolling guards, cameras could be controlled remotely by those guards, and speakers in the cameras could issue warnings. Remote citizen spotters were only useful for that first step, looking for a person or a vehicle in a photo that shouldn't contain any. Only real guards at the site itself could tell an intruder from the occasional maintenance person.

 

Of course the system isn't perfect. A would-be infiltrator could sneak past the spotters by holding a bush in front of him, or disguising himself as a vending machine.  But it does fill in a gap in what fully automated systems can do, at least until image processing and artificial intelligence get significantly better.

 

HomeGuard never got off the ground. There was never any good data about whether spotters were more effective than motion sensors as a first level of defense. But more importantly, Walker says that the politics surrounding homeland security money post-9/11 was just too difficult to penetrate, and that as an outsider he couldn't get his ideas heard.

Today, the patriotic fervor that gripped so many people post-9/11 has probably dampened, and he'd likely have to pay his spotters more than he envisioned seven years ago. Still, I thought it was a clever idea then and I still think it's a clever idea — and it's an example of how to do surveillance crowdsourcing correctly.

 

Making the system more general runs into all sorts of problems. An amateur can spot a person or vehicle pretty easily, but is much harder pressed to notice a shoplifter. The privacy implications of showing random people pictures of no-man's-lands are minimal, while a busy store is another matter — stores have enough individuality to be identifiable, as do people. Public photo tagging would even allow the process to be automated. And, of course, there's the normalization of a spy-on-your-neighbor surveillance society, where it's considered perfectly reasonable to watch each other on cameras just in case one of us does something wrong.

 

 

Full Body Scanners: What's Next?

 

 

 

Organizers of National Opt Out Day, the Wednesday before Thanksgiving when air travelers were urged to opt out of the full-body scanners at security checkpoints and instead submit to full-body pat-downs, were outfoxed by the TSA. The government pre-empted the protest by turning off the machines in most airports during the Thanksgiving weekend. Everyone went through the metal detectors, just as before.

 

Now that Thanksgiving is over, the machines are back on and the "enhanced" pat-downs have resumed. I suspect that more people would prefer to have naked images of themselves seen by TSA agents in another room than have themselves intimately touched by a TSA agent right in front of them.

 

But now, the TSA is in a bind. Regardless of whatever lobbying came before, or whatever former DHS officials had a financial interest in these scanners, the TSA has spent billions on those scanners, claiming they're essential. But because people can opt out, the alternate manual method must be equally effective; otherwise, the terrorists could just opt out. If they make the pat-downs less invasive, it would be the same as admitting the scanners aren't essential. Senior officials would get fired over that.

 

So not counting inconsequential modifications to demonstrate they're "listening," the pat-downs will continue. And they'll continue for everyone: children, abuse survivors, rape survivors, urostomy bag wearers, people in wheelchairs. It has to be that way; otherwise, the terrorists could simply adapt. They'd hide their explosives on their children or in their urostomy bags. They'd recruit rape survivors, abuse survivors, or seniors. They'd dress as pilots. They'd sneak their PETN through airport security using the very type of person who isn't being screened.

 

And PETN is what the TSA is looking for these days. That's pentaerythritol tetranitrate, the plastic explosive that both the Shoe Bomber and the Underwear Bomber attempted but failed to detonate. It's what was mailed from Yemen. It's in Iraq and Afghanistan. Guns and traditional bombs are passé; PETN is the terrorist tool of the future.

 

The problem is that no scanners or puffers can detect PETN; only swabs and dogs work. What the TSA hopes is that they will detect the bulge if someone is hiding a wad of it on their person. But they won't catch PETN hidden in a body cavity. That doesn't have to be as gross as you're imagining; you can hide PETN in your mouth. A terrorist can go through the scanners a dozen times with bits in his mouth each time, and assemble a bigger bomb on the other side. Or he can roll it thin enough to be part of a garment, and sneak it through that way. These tricks aren't new. In the days after the Underwear Bomber was stopped, a scanner manufacturer admitted that the machines might not have caught him.

 

So what's next? Strip searches? Body cavity searches? TSA Administrator John Pistole said there would be no body cavity searches for now, but his reasons make no sense. He said that the case widely reported as being a body cavity bomb might not actually have been. While that appears to be true, what does that have to do with future bombs? He also said that even body cavity bombs would need "external initiators" that the TSA would be able to detect.

 

Do you think for a minute that the TSA can detect these external initiators? Do you think that if a terrorist took a laptop — or better yet, a less-common piece of electronics gear — and removed the insides and replaced them with a timer, a pressure sensor, a simple contact switch, or a radio frequency switch, the TSA guy behind the X-ray machine monitor would detect it? How about if those components were distributed over a few trips through airport security? On the other hand, if we believe the TSA can magically detect these external initiators so effectively that they make body-cavity searches unnecessary, why do we need the full-body scanners?

 

Either PETN is a danger that must be searched for, or it isn't. Pistole was being either ignorant or evasive.

 

Once again, the TSA is covering their own asses by implementing security-theater measures to prevent the previous attack while ignoring any threats of future attacks. It's the same thinking that caused them to ban box cutters after 9/11, screen shoes after Richard Reid, limit liquids after that London gang, and — I kid you not — ban printer cartridges over 16 ounces after they were used to house package bombs from Yemen. They act like the terrorists are incapable of thinking creatively, while the terrorists repeatedly demonstrate that they can always come up with a new approach that circumvents the old measures.

 

On the plus side, PETN is very hard to get to explode. The pre-9/11 screening procedures, looking for obvious guns and bombs, forced the terrorists to build inefficient fusing mechanisms. We saw this when Abdulmutallab, the Underwear Bomber, used bottles of liquid and a syringe and 20 minutes in the bathroom to assemble his device, then set his pants on fire — and still failed to ignite his PETN-filled underwear. And when he failed, the passengers quickly subdued him.

 

The truth is that exactly two things have made air travel safer since 9/11: reinforcing cockpit doors and convincing passengers they need to fight back. The TSA should continue to screen checked luggage. They should start screening airport workers. And then they should return airport security to pre-9/11 levels and let the rest of their budget be used for better purposes. Investigation and intelligence are how we're going to prevent terrorism, on airplanes and elsewhere. It's how we caught the liquid bombers. It's how we found the Yemeni printer-cartridge bombs. And it's our best chance at stopping the next serious plot.

 

Because if a group of well-planned and well-funded terrorist plotters makes it to the airport, the chance is pretty low that those blue-shirted crotch-groping water-bottle-confiscating TSA agents are going to catch them. The agents are trying to do a good job, but the deck is so stacked against them that their job is impossible. Airport security is the last line of defense, and it's not a very good one.

 

We have a job here, too, and it's to be indomitable in the face of terrorism. The goal of terrorism is to terrorize us: to make us afraid, and make our government do exactly what the TSA is doing. When we react out of fear, the terrorists succeed even when their plots fail. But if we carry on as before, the terrorists fail — even when their plots succeed.

 

Who Do You Call: Mass Notification?

Next month, the new building code for mass notification systems (a.k.a. Emergency Communications Systems, or ECS) becomes effective. Fire marshals and related authorities will be able to compel all new construction, remodels, and any other facility they choose to comply with the new code. The code can be found in NFPA 72 (2010) and the 2010 California Building Code.

 

Ollivier has conferred with over a dozen such authorities about this code and believes that security systems companies will bear the major responsibility for the required risk assessment, system design, and system installation. More about mass notification systems is discussed below.