WHEN KANSAS CITY, Missouri, real estate appraiser Dave Markus learned he was one of about 80 million people whose personal data was exposed in the Anthem health insurance breach discovered at the end of January, he immediately signed up for an identity protection service. But then he got a letter from the IRS in March. “Basically they said, ‘Before we send you your return, we want you to get ahold of us and verify your identity for your 2014 return.’ But we hadn’t filed yet.”
His wife spent time in “IRS voicemail hell.” She, and later he, went to the local office, where they were told to file a paper return, including photocopies of their driver’s licenses and social security cards. It’s not clear whether the Anthem breach was even related to Markus’s tax woes. As in many cases of identity theft, which data breach was the cause will never be known. And the local IRS officials seemed confounded as to who should get involved. “They wanted me to file a police report with St. Louis,” Markus recalls. “And I said, ‘Why? Are they going to fly to Shanghai or St. Petersburg or wherever these guys are?’”
Markus’s predicament is increasingly common. Just this week, news broke that criminals penetrated the IRS to pilfer nearly $50 million in refunds that belonged to more than 100,000 taxpayers. The agency claimed the perpetrators had seized data from other breaches. “These are extremely sophisticated criminals with access to a tremendous amount of data,” said an IRS spokesperson on Tuesday. The tax breach has drawn fire from critics who say that the government should have done a better job protecting citizens’ data. For its part, the IRS reiterates that it’s been facing cyberthreats amid budget cuts that have eliminated 10,000 enforcement jobs since 2010.
In the past two years alone, well over 100 million people have gotten a letter, call or email notifying them that they’ve been victims of a data breach. In some cases, key personal data has been exposed by retailers, including Target; for others, it’s health insurance companies including Anthem, or employers like Sony. Companies are paying a financial price, but in the end it may be relatively modest, and not enough to encourage better data protection practices. For example, Target has incurred more than $250 million in breach-related expenses, according to SEC filings, but only a fraction of that has been committed to consumers. The company agreed in March to a proposed $10 million class-action settlement, which allots up to $10,000 to each person who can prove they suffered clear damages, with the rest split among other victims of the breach; spread across all possible class members, that remainder would come to less than a dollar a person. Benjamin Dean, a fellow for cybersecurity and Internet governance at Columbia University’s SIPA, wrote, “When we subtract insurance reimbursement, the losses fall to $162 million. If we subtract tax deductions (yes, breach-related expenses are deductible), the net losses tally $105 million. This is the equivalent of 0.1% of 2014 sales.”
In the meantime, courts and consumers alike face a quandary. Data leaked now could be used a decade or more hence, leaving both to calculate probable future risk as well as current exposure. Consumers have limited options for protecting themselves. A telling measure of frustration is a syndrome that has been termed “data breach fatigue.” A third of people notified about a breach don’t take any action at all, according to a 2014 study by the Ponemon Institute.
In our age of big data, individuals are extremely vulnerable to breaches, but both government and corporations have an interest in collecting as much personal information as possible, from our shopping habits to our cellphone metadata. Columbia’s Benjamin Dean said he questions whether “opaque information sharing arrangements between companies and intelligence agencies” undermine the government’s incentive to advocate for consumer privacy protections. For example, the federal government can not only ask for data about consumers using platforms like Facebook and Amazon, but also require those companies to shroud the details of how many national security-related requests were made. That’s in addition to revelations that the government hacked the world’s largest SIM card manufacturer, giving intelligence agencies the capacity to access a large portion of the world’s cell phone users’ communications.
To be fair, the government has taken some of the right steps. The variety of federal agencies that deal with cybercrime in one form or another is staggering — including the FBI, IRS and Department of Homeland Security. Some are more hands-on with consumers than others. The IRS said it stopped 19 million suspicious returns between 2011 and October 2014. Notwithstanding the recent breach, the agency says it prevented almost 3 million suspicious returns this year. Tax identity theft victims like Dave Markus of Kansas City are offered the chance to get an IRS Identity Protection PIN — a six-digit code that tax filers can use along with their social security number. So far, 1.5 million people affected by tax identity theft have signed up for the IP PIN program, which allows them to file more securely.
But as Lee Tien, a senior staff attorney for the Electronic Frontier Foundation, says, “We’ve always known that [federal] entities have internally conflicting missions. On the one hand they do enforce privacy laws and secure networks. But when they go after bad guys, their job is to infiltrate. They are dual-hatted.”
THE LEGAL PROCESS around big data breaches typically unfolds in a now-familiar pattern: victims claim that companies could have been more secure; the companies argue most or all of the people exposed haven’t been harmed; both parties settle out of court. Litigation against Sony, which, according to the U.S. government, was targeted by hackers in North Korea over the movie The Interview, is ongoing. The plaintiffs in one class-action suit describe their exposure as “an epic nightmare,” saying that Sony “failed to secure its computer systems, servers and databases, despite weaknesses that it has known about for years.” The data revealed included mortifying in-house memos between studio executives (including Amy Pascal, who was forced out as chair), but more importantly for employees at large, 47,000 social security numbers, plus medical and salary records. Sony, which declined to comment for this article, said in a motion to dismiss, “There are no allegations of identity theft, no allegations of fraudulent charges, and no allegations of misappropriation of medical information.” Instead, the plaintiffs assert a broad range of common-law and statutory causes of action based on their alleged fear of an increased risk of future harm, as well as expenses they claim to have incurred to prevent that future harm.
Anthem, the healthcare company whose breach affected 80 million, declined an interview request but emailed a statement which read in part, “To date, in working with the FBI, we have found no evidence that the cyber attackers have shared or sold any of our members’ data and there is no evidence that fraud has occurred against our members, including fraudulent tax returns.” An FBI spokesperson confirmed the bureau hadn’t, so far, found that attackers sold or shared data. But it wouldn’t weigh in on the assertion that no fraud had occurred.
During a Securities and Exchange Commission roundtable last year, attorney Douglas Meal raised a troubling possibility: What if companies just didn’t disclose data breaches? Meal, who consulted with Target — whose 2013 data breach exposed information from 40 million credit cards and data from approximately 70 million shoppers — told the SEC, “I think, just to be someone speaking from the trenches … there is a tremendous disincentive to disclose a breach,” adding the qualifiers, “if the breach isn’t otherwise going to become public” and “if a company can conclude that it doesn’t otherwise have a disclosure obligation.” In his words, once a breach is disclosed, “you are now going to be a target of a lot of class-action plaintiffs, of consumer protection regulators, who will not look at you as the victim of the breach … but will look at you as almost the perpetrator.” Target responded that it favored prompt disclosure — but Meal had spelled out logic that could well appeal to other firms facing a similar predicament. And of course, we’d never know.
The EFF’s Tien says that keeping data breaches secret from consumers was a common corporate strategy until state regulators began to demand disclosure. (All but three states now have disclosure laws.) And there’s new legislation pending in Congress, including HR 1770, the Data Security and Breach Notification Act of 2015, that would require consumer notification in all states and the District of Columbia. Yet some lawmakers point out the bill actually weakens existing state-level provisions. Rep. Jan Schakowsky, D-Ill., stated that the bill would “weaken existing state law in 38 states,” and in some cases, “this bill would prevent you from being notified about breaches for which your state currently requires notification.” The pending legislation also leaves out the stickier question of what data privacy practices should be in place to prevent breaches from happening, or appropriate legal liability and penalties when breaches occur. “The current legal system has a short circuit because it doesn’t give companies very much incentive to address this,” says Tien.
ARE THERE PATHS to better data security for citizens and customers? Right now, several countries at the crossroads of commerce, travel and immigration, including the U.S., the U.K. and Australia, have aggressively pursued access to citizen data. On the other hand, Columbia’s Benjamin Dean applauds the X-Road system of the relatively tiny Estonia (population 1.3 million) for allowing “secure and confidential sharing of information.” X-Road is designed so that data can be securely verified across government agencies without that information being held in a central repository. For example, a citizen can link bank data to a national healthcare system, or quickly validate his or her ID at the border. While some other European nations are experimenting with the X-Road platform, a country like the United States would potentially have to submit to limitations on its direct access to citizen information in order to participate in or duplicate this type of effort.
Meanwhile, individuals are triaging data notices and personal concerns. Rochelle and Paul (last names withheld) found out they were compromised in the Anthem breach at the same time that they were moving Rochelle’s father into assisted living. Two weeks later, she says, “someone’s opened a PayPal Credit account in Paul’s name and charged $682 from a place called Modern Coin.” They called PayPal Credit to close the account, followed up with Anthem, and, via Anthem, got 24 months of credit monitoring from AllClear ID. “But I failed to jump into action immediately. We gave this asshole, whoever opened up the PayPal Credit, the opportunity to do that,” Rochelle says.
While it would be easy to blame consumers — saying they should monitor their information more closely — the problem of data theft is endemic, and frustration is justified. The EFF’s Tien says, “Back in the day we’d be asked, ‘What are the 10 things a consumer can do to protect themselves?’ I hate to be a gloomy Gus, but the message I give journalists and others is there’s basically nothing you can do. It’s like saying, what can you do about climate change by yourself … when the problem is structural architecture and the flow around your data.” (The EFF does offer individuals Privacy Badger, a tool that blocks third parties from tracking which sites you visit as you surf the Internet.) Politicians, Tien notes, including the first successful data miner in chief, President Obama, have “very mixed incentives about stomping on this area.”
Photo: A copy of an email sent to Anthem Inc. plan members with notification of a cyberattack. (Andrew Harrer/Bloomberg/Getty Images)