Cyber Vulnerability and National Security
Efforts to define and regulate cyberwars, and ways in which our nation remains vulnerable.
by Paul Rosenzweig
Encryption Is a Cybersecurity Essential and Must Be Defended
The case for encryption.
by Adam K. Levin and J. Anthony Vittal
(The Idiot’s Guide to) Cybersecurity for Lawyers
Simple steps to protect your data.
by Denis Binder
Cyber Vulnerability and National Security
by Paul Rosenzweig
Law and policy, it is said, are made in one of three ways. Typically they are made either from the bottom up, as developed by experts in the field, or from the top down, when, say, a governor or president sets a mandate. The third way, of course, is that law and policy sometimes get made in a panic during a time of crisis.
It is only a bit of an exaggeration to say that we are in the midst of a seven-year crisis of cyber vulnerability. To be fair (and accurate), most of the vulnerability occurs at a micro-level. The overwhelming volume of the problem involves cyber crime and theft (to the tune of roughly $450 billion each year globally). In any room of 100 people, more than a quarter have experienced some form of cyber theft. With near-equal frequency, companies, individuals, and even governments have suffered thefts of information and intellectual property. When the victim is a law firm, we call it a data breach; when it is the government, we call it espionage. Either way, it is significant.
But none of those would cause the type of crisis we are seeing in the United States today. Crime is endemic and has been for as long as civilizations have existed. Espionage is so time-honored that it is often waggishly called the second oldest profession in the world. Were it the case that the new cyber domain only enabled more crime and spying, we’d be concerned but we would not panic.
Why then is Washington in such a state of panic? The answer, of course, is that the increasing cyber dependence of critical American infrastructure makes our nation vulnerable to attack in new and fundamentally different ways than it ever has been before. And what is true of American vulnerability is, of course, true of other nations (though, given American cyber dependence it is fair to say that we are differentially more vulnerable than other, less “wired,” nations).
When I first approached this issue of infrastructure vulnerability (as an official at the Department of Homeland Security) back in 2006, the threat was considered theoretical. Some scientists at the Idaho National Lab had done a test (known by the code-name Aurora) and demonstrated that using only computer code they could exploit a vulnerability in the SCADA system of a diesel generator and destroy it. (SCADA is the acronym for a Supervisory Control and Data Acquisition system—think of it as a purpose-built computer system with one job and one job only—to manage a mechanical system like a generator . . . or a dam . . . or an elevator . . . or a traffic light grid . . . and so on. Every computer-operated mechanical system in the world runs on some variant of a SCADA.)
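The Aurora demonstration worked because the generator's control channel accepted any well-formed command without verifying who sent it. The difference can be sketched in a few lines of Python with hypothetical names (a real SCADA protocol looks nothing like this, but the authentication principle is the same):

```python
import hashlib
import hmac

SHARED_KEY = b"operator-secret"  # hypothetical pre-shared operator key


def sign(command: str) -> str:
    """Tag a command so the relay can verify it came from an operator."""
    return hmac.new(SHARED_KEY, command.encode(), hashlib.sha256).hexdigest()


class Relay:
    """Toy stand-in for a SCADA-controlled generator breaker."""

    def __init__(self, require_auth: bool):
        self.require_auth = require_auth
        self.breaker_open = False

    def handle(self, command: str, tag: str = "") -> bool:
        # An authenticated relay drops any command whose tag does not verify.
        if self.require_auth and not hmac.compare_digest(tag, sign(command)):
            return False
        if command == "OPEN_BREAKER":
            self.breaker_open = True
        return True


# An attacker who merely reaches the network can drive the legacy relay...
legacy = Relay(require_auth=False)
attacker_succeeded = legacy.handle("OPEN_BREAKER")

# ...but not a relay demanding a tag it cannot forge without the key.
hardened = Relay(require_auth=True)
attacker_blocked = not hardened.handle("OPEN_BREAKER", tag="forged")
```

The point of the sketch is only that command authentication was a design afterthought in most deployed SCADA systems, which is precisely the gap Aurora exploited.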
That theoretical vulnerability is now a reality. Consider, by way of example, two recent incidents ripped from the headlines of news within the last six months. According to the New York Times, the Obama Administration had a cyber-attack plan for Iran that it developed as a contingency in case the diplomatic negotiations to limit Iran’s nuclear weapons program failed. David E. Sanger & Mark Mazzetti, U.S. Had Cyber Attack Plan if Iran Nuclear Dispute Led to Conflict, N.Y. Times (Feb. 16, 2016), http://www.nytimes.com/2016/02/17/world/middleeast/us-had-cyberattack-planned-if-iran-nuclear-negotiations-failed.html?_r=0. The plan was code-named Nitro Zeus and, if it had been implemented, was said to be capable of disabling Iran’s air defenses, communications system, and parts of its electric grid. Id. The plan also included an option to introduce a computer worm into the Iranian nuclear facility at Fordo (where uranium was enriched) to disrupt the creation of nuclear weapons. In contemplation of the confrontation, U.S. Cyber Command (our military cyber-operations command) placed electronic implants in Iranian computer networks to “prepare the battlefield.” According to the Times, President Obama saw Nitro Zeus as an option for confronting Iran that was “short of a full-scale war.” Id.
The reports, if true (and, to be fair, they have not been confirmed by any official sources), reflect a growing trend in the use of cyber means to conduct military activity. The United States is not, of course, the only practitioner. One notable example from recent history involves the apparent Russian assault on the transportation and electric grid in Ukraine. Evan Perez, U.S. Official Blames Russia for Power Grid Attack in Ukraine, CNN (Feb. 12, 2016), http://edition.cnn.com/2016/02/11/politics/ukraine-power-grid-attack-russia-us/index.html. That attack, which happened late in 2015, was a “first-of-its-kind” cyber assault that severely disrupted Ukraine’s power system, affecting many innocent Ukrainian civilians. It bears noting that, as I’ve said, the vulnerabilities in Ukraine’s power system are not unique—they exist in power grids across the globe including the U.S. power grid and other major industrial facilities.
This vulnerability is, in many ways, an inevitable consequence of how the cyber network was built. As then-Deputy Secretary of Defense William Lynn summarized in a speech announcing our military Strategy for Operating in Cyberspace: “The Internet was designed to be open, transparent, and interoperable. Security and identity management were secondary objectives in system design. This lower emphasis on security in the Internet’s initial design . . . gives attackers a built-in advantage.” Deputy Secretary of Defense William J. Lynn, III, Remarks on the Department of Defense Cyber Strategy, National Defense University, Washington, D.C., (July 14, 2011), http://archive.defense.gov/speeches/speech.aspx?speechid=1593.
What does all this mean for lawyers? Well, for lawyers who practice military law it raises a host of questions. Do the laws of armed conflict even apply in cyberspace? If they do, what is the cyber equivalent of an “armed attack”? Is it an armed attack, for example, to destroy the data at the Orange County courthouse?
If you think that question is just theoretical nonsense, consider this real life question: Was it an armed attack when the North Koreans stole data from Sony and nearly destroyed the company? One may be amused to learn that the Department of Homeland Security actually classifies the Sony studio as critical American infrastructure. So, at least in theory, agents of a foreign nation stand accused of deliberately attempting to destroy a critical American asset. In many contexts, that would mean war.
A broader legal question that is of more direct relevance to most lawyers is this: Given that almost all of America’s critical infrastructure is in private sector hands, how do we develop greater security of things that we need to keep safe? We might, for example, regulate the chemical industry and demand that they adopt protective measures. But in the cyber domain, threats (and responses) mutate so rapidly that a typical regulatory response seems too rigid.
What the Obama Administration has done, instead, is to draft a voluntary set of guidelines for the security of critical infrastructure (drafted by the National Institute of Standards and Technology and known as the NIST Cybersecurity Framework), and then let the legal market work its wonders. See NIST U.S. Dep’t of Commerce, Cybersecurity Framework—Framework Development, (updated March 4, 2014), http://www.nist.gov/cyberframework/cybersecurity-framework-framework-development.cfm. These voluntary standards are being used by regulators, tort lawyers, and insurers as measures of what a “reasonable person” or a “commercially reasonable” actor might do. And, increasingly, courts are likely to turn to these standards as well as a measure of reasonable behavior. In this way, the legal market is likely to form a critical driver in ramping up the cybersecurity of America’s infrastructure. When some nations face national security problems, they send the army. Only in America would we send in the lawyers.
Paul Rosenzweig is Principal at Red Branch Consulting, PLLC and a Professorial Lecturer in Law at George Washington University. He served as Deputy Assistant Secretary for Policy at the Department of Homeland Security from 2005-08. His video course, Thinking About Cybersecurity: From Cyber Crime to Cyber Warfare, is available from The Great Courses, produced by The Teaching Company. He may be reached at firstname.lastname@example.org.
Encryption Is a Cybersecurity Essential and Must Be Defended
by Adam K. Levin and J. Anthony Vittal
Unless you live in a log cabin on Loon Lake, completely off the grid, you learn of another data breach practically every day. Since January 1 of this year, there have been forty-seven reported breaches from governmental agencies, medical entities, and non-governmental organizations alone, yielding a total of 282,360 records.1 The largest and perhaps most informative was the theft of 101,000 records from the IRS using an automated cyberattack relying on information stolen elsewhere outside the IRS to generate e‑file PINs for stolen Social Security numbers. These figures are pretty tame in the face of announcements during 2015 by Anthem, Inc., Premera Blue Cross, Excellus BlueCross Blue Shield, and the U.S. Office of Personnel Management (to name a few) that databases storing some 120 million Social Security numbers had been compromised.
Before we get ready to uncork the champagne and declare that the state of cybersecurity in this nation has improved markedly, remember that we are only five months into 2016, and while one presidential candidate continues to urge the merits of building the Great Wall of Mexico, few if any have commented on the need to build a three-dimensional cyberwall to protect the United States’ various grids and the lives of our citizens.
Consider this: since 2005, significantly more than one billion files containing personally identifiable information have been compromised. Breaches have, indeed, become the third certainty in life.
Today’s cyberattacks present a dynamic threat—ever changing and morphing to suit the landscape of opportunity. Similarly, attacker profiles have shifted. State‑sponsored actors are becoming prominent, bringing the significant resources of their governments to bear. New goals and sources of motivation have fundamentally altered the nature of the threat landscape. There are two recent trends driving this shift.
The militarization of cyberattacks: Network penetrations to cause damage and steal intellectual property now are commonly state‑sponsored, with well-trained, highly sophisticated, disciplined, and persistent attackers, with access to resources such as training, computing power, and cutting‑edge research and development (R&D) not available to previous generations of hackers.
The rise of hacktivism: Hackers frequently attack organizations in the name of social causes and attempt to cause significant financial and reputational damage to a target. Two notably vicious examples of “cause” hacking are Sony Pictures and Ashley Madison.
At the same time, old school extortion continues to be a favorite gambit for hackers. In February of this year, hackers took control of the computers of Los Angeles‑based Hollywood Presbyterian Medical Center and demanded forty bitcoins (worth approximately $17,000) in exchange for the malware decryption key. According to a statement from the CEO of the Medical Center, “The malware locked access to certain computer systems and prevented us from sharing communications electronically.” Concluding that the “quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key,” the ransom was paid. Justin Wm. Moyer, After computer hack, L.A. hospital pays $17,000 in bitcoin ransom to get back medical records, Wash. Post (Feb. 18, 2016), https://www.washingtonpost.com/news/morning-mix/wp/2016/02/18/after-computer-hack-l-a-hospital-pays-17000-in-bitcoin-ransom-to-get-back-medical-records/.
Notwithstanding the proliferation of external threats and penetrations, insider threats present the greatest vulnerability to operators of computer systems and large data arrays. Privileged users pose the most dangerous risk due to their virtually unlimited access to an organization’s systems and data. Two of the more infamous examples of leaks by privileged users are former PVT Chelsea Manning’s theft and distribution of hundreds of thousands of classified military and State Department documents to the WikiLeaks organization, and the theft of terabytes of classified counter-terrorism information from the Swiss intelligence service by a disgruntled senior IT technician.
Cybersecurity efforts traditionally have focused on reconnaissance (to identify vulnerabilities) and perimeter security (initial points of entry to be blocked to the extent possible). Experts suggest the focus should remain on perimeter security, but extend to preventing the escalation of privileges by an attacker (including an insider) and the exploitation of those privileges. Experts further suggest reducing the risk of all three types of insider threats (malicious, exploited, and careless) by enabling accountability, implementing least-privileged access, and controlling sensitive data. There are two direct approaches to controlling sensitive data: prevent it from being exported out of your network by tools such as USB drives or even e-mail, and encrypt it with strong encryption so that it is useless to a thief if it does escape.
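Those two controls amount to a single policy: nothing sensitive leaves the network in the clear. A toy sketch of such an export gate follows (hypothetical names throughout; in practice the encryption step would use a vetted cipher through an established library, not a flag on a record):

```python
class ExportGate:
    """Toy data-loss-prevention policy: a sensitive record may leave the
    network (USB drive, e-mail attachment) only after encryption."""

    def __init__(self):
        self.blocked = []  # audit trail of refused exports

    def export(self, record: dict, destination: str) -> bool:
        # Refuse to export sensitive plaintext; log the attempt instead.
        if record.get("sensitive") and not record.get("encrypted"):
            self.blocked.append((record["name"], destination))
            return False
        return True


def encrypt(record: dict) -> dict:
    # Placeholder: a real implementation would encrypt the record's
    # contents with a strong, vetted cipher before clearing it to leave.
    return {**record, "encrypted": True}


gate = ExportGate()
memo = {"name": "client_memo.docx", "sensitive": True, "encrypted": False}

sent_plain = gate.export(memo, "usb-drive")            # refused and logged
sent_cipher = gate.export(encrypt(memo), "usb-drive")  # allowed
```

The design choice worth noting is that the gate both blocks and records the attempt, serving the accountability goal as well as the control goal.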
Breach notification statutes typically exempt encrypted data from mandated disclosure, regardless of the level or quality of encryption. That blanket exemption deserves scrutiny: weak and outdated algorithms, such as 56-bit DES and the 40-bit export ciphers of the 1990s, long since have been broken, and poorly implemented encryption can be as ineffective as none at all against a competent hacker. While strong, properly implemented encryption affords real security, the current Apple cybersecurity debate highlights the utility of something as simple as a “kill switch,” while bringing to center stage the tension between national security and cybersecurity.
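Key length, it should be said, governs only resistance to exhaustive guessing. A back-of-envelope calculation (the guessing rate below is a generous assumption, not a measured figure) shows why attackers target outdated ciphers, short passcodes, and sloppy implementations rather than brute-forcing modern keys:

```python
def years_to_exhaust(key_bits: int, guesses_per_second: float = 1e12) -> float:
    """Worst-case years to try every possible key of the given length,
    assuming a (generous) one trillion guesses per second."""
    seconds = 2 ** key_bits / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)


des_years = years_to_exhaust(56)    # 56-bit DES: well under a day
aes_years = years_to_exhaust(128)   # 128-bit key: on the order of 10**19 years
```

Each added bit doubles the search, which is why the jump from 56 to 128 bits moves the worst case from hours to a span dwarfing the age of the universe.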
The dispute between Apple and the Department of Justice has made it to the front page of national print media and has become a cause célèbre in the cyberspace community, even though it is only emblematic of the disputes between law enforcement and device users around the world. This particular dispute concerns the passcode‑protected iPhone 5c used by Syed Rizwan Farook, one of the attackers in the San Bernardino, California rampage that left fourteen dead at the Inland Regional Center, where Mr. Farook worked. A similar dispute exists regarding the iPhone owned by the deceased leader of the November 13 attacks across Paris. The philosophical dispute is well framed by these opposing positions. For law enforcement: “You can’t have freedom without security.” François Molins, Chief Prosecutor of Paris. For users: “Encryption is either secure or it’s not.” Pavel Durov, co-founder of Telegram Messenger LLP (UK) and Telegram LLC (USA), headquartered in Berlin, which publishes the Telegram secure messaging app. 60 Minutes, CBS (March 13, 2016), http://www.cbsnews.com/videos/encryption-aid-in-dying-starchitect/.
The FBI claimed it did not know whether the Farook phone’s auto‑erase feature had been enabled, which would wipe the device’s memory after ten failed passcode login attempts. Even if the feature wasn’t enabled, iOS introduces passcode entry delays after six or more failed attempts. Because of these roadblocks, the FBI asked for Apple’s help to bypass the security features.2 Part of the problem may be that law enforcement may have been hoist by its own petard. San Bernardino County officials said they already were assisting the FBI, which requested that they reset the password for Mr. Farook’s iCloud account—to which the iPhone synchronized—at which point the bureau was no longer able to access the account. The operational security expert known in the Twitterverse as the Grugq tweeted that the move—perhaps the result of “panic and incompetence”—closed the front door that the FBI had to the shooter’s account data in the Cloud. Reuters reported that two senior Apple executives say the company had worked to help investigators and attempted multiple avenues, including sending engineers with FBI agents to a WiFi network that would recognize the phone and begin an automatic backup if that had been enabled. Dustin Volz & Julia Edwards, U.S., Apple ratchet up rhetoric in fight over encryption, Reuters (Feb. 22, 2016). The executives, who were not identified in the news service report, criticized government officials who reset the Apple identification associated with the phone, which eliminated the possibility of recovering information from it through an automatic cloud-based backup. Id.
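The arithmetic behind those roadblocks is simple. A four-digit passcode has only 10,000 possibilities, so without delays a brute-force rig finishes in minutes; an escalating delay schedule stretches the same search into more than a year. The figures below are illustrative assumptions, not Apple's actual timings:

```python
def brute_force_hours(num_codes: int, seconds_per_try: float,
                      free_tries: int, delay_seconds: float) -> float:
    """Worst-case hours to try every passcode when each attempt beyond
    `free_tries` incurs an enforced delay."""
    total = num_codes * seconds_per_try
    total += max(0, num_codes - free_tries) * delay_seconds
    return total / 3600


# 10,000 four-digit codes at ~80 ms per guess, no delays: minutes of work.
undelayed = brute_force_hours(10_000, 0.08, free_tries=10_000, delay_seconds=0)

# The same search with a one-hour delay after the sixth failure: over a year.
delayed = brute_force_hours(10_000, 0.08, free_tries=6, delay_seconds=3600)
```

This is why the FBI needed the delay and auto-erase features out of the way before a brute-force attack on the passcode was worth attempting.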
The FBI then asked the Department of Justice for its help to unring the bell. On February 16, 2016 the government filed an application (Doc # 18) pursuant to the All Writs Act, 28 U.S.C. § 1651, “to assist law enforcement agents in enabling the search of a digital device seized in the course of a previously issued search warrant in” the matter styled In re Search of an Apple iPhone Seized During Execution of a Search Warrant on a Black Lexus IS300 etc., C.D. Cal. No. ED 15-0451M.3 Acting ex parte, Magistrate Judge Pym issued the proffered order mandating that Apple “assist in enabling the search” of the specified iPhone “by providing reasonable technical assistance to assist law enforcement agents in obtaining access to the data on the subject device.” The order then went on to specify in detail what the “reasonable technical assistance” was to accomplish: bypass or disable the phone’s auto-erase function; enable the FBI to obtain the passcode for the phone by a “brute force” application via the device port; and to circumvent the delay feature initiated when submitting too many incorrect passcodes. If that were not sufficient, the order specified in detail the manner in which Apple was to achieve the goals of the “reasonable technical assistance” unless “Apple determines that it can achieve the three [goals and the functionality specified in the order] using an alternate technological means from that recommended by the government, and the government concurs.”
Apple promptly responded by an open letter to its customers from its CEO, dated the same day as the order. Tim Cook, A Message to Our Customers, Apple (Feb. 16, 2016), http://www.apple.com/customer-letter/. Mr. Cook writes, “The U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone.” Apple’s objection is premised on the fact there is no way to guarantee that the backdoor—“a new version of the iPhone operating system, circumventing several important security features”—would be limited to the Farook device.
Apple’s concern is not fabricated. The government is seeking analogous relief in Brooklyn; the Manhattan D.A. has at least 175 devices into which he wants to break; the LAPD and LASD alone have hundreds of phones they want opened; and the French government wants similar relief in the cases arising from last November’s coordinated attacks across Paris. Apple therefore declined to build a backdoor into iOS to circumvent the passcode security, even for the Farook case.
A firestorm of public debate ensued. Kevin Bankston, director of New America’s Open Technology Institute, opined:
What the court is essentially ordering Apple to do is custom-build malware to undermine its own product’s security features, and then cryptographically sign that software so that the iPhone will trust it as coming from Apple. . . . [I]f a court can legally compel Apple to do that, then it likely could compel any other software provider to do the same, including compelling the secret installation of malware via automatic updates to your phone or laptop’s operating system or other software.
Rob Price, Forcing Apple to work with the FBI to unlock iPhones threatens the safety of the entire Internet, Business Insider (Feb. 17, 2016), http://www.businessinsider.com/apple-fbi-iphone-dangerous-precedent-force-tech-companies-introduce-malware-san-bernadino-2016-2?r=UK&IR=T. Bankston concluded, “A line must be drawn. ... The government must not be allowed to force tech companies to undermine the security of their own products, especially with nothing but a vague catch-all statute that’s over a century old [as the basis].”
The issue here, however, is not only the extent to which Judge Pym’s order is burdensome, but the extent to which it is proper. The issue is not whether Apple should assist the FBI with its existing tools, but whether it can be compelled to build new tools to do so. The case pending before Magistrate Judge Orenstein in the Eastern District of New York, in which the government sought a similar order whose precedential value the FBI hoped to use here, was decided adversely to the government on February 29.4
On February 19, the government filed a motion with Judge Pym (Doc # 1) seeking an order mandating compliance with her February 16 order.5 The government’s motion, and Apple’s subsequent motion for relief from the February 16 order, were set for hearing in March before Magistrate Judge Pym at the U.S. Courthouse in Riverside.
We submit that much is at stake here. Even the government is divided on the issues. Attorney General Lynch, addressing the annual RSA Conference in San Francisco last month, criticized Apple and asked, “Do we let one company, no matter how great the company, no matter how beautiful their devices, decide this issue for all of us?”6 On the other hand, Secretary of Defense Ashton Carter stated flatly, “I’m not a believer in backdoors,” and expressed concern that enforcement of Judge Pym’s order could have major implications for the Pentagon, which supplies employees with computers, smartphones, and other electronic devices outfitted with end-to-end encryption. A bi-partisan OpEd by U.S. Representatives Zoe Lofgren and Darrell Issa opined:
Allowing the government “backdoor” access to just this one phone would undo years of technological advances in online security. ...The emerging consensus [in Congress] is that a backdoor intended for use by law enforcement will inevitably, eventually be exploited by criminals. Creating this vulnerability would thus endanger Americans, giving not only government agents but also hackers access to our most intimate and carefully guarded personal information.
Instead of weakening privacy protections, lawmakers should support legislation—like that which passed the House with overwhelming support on three separate occasions—prohibiting government-mandated backdoors that intentionally undermine and undercut the development and deployment of strong data security technologies.
Reps. Zoe Lofgren & Darrell Issa, Op-Ed: Government ‘backdoor’ access to a single iPhone would undo years of progress in online security, L.A. Times A13 (Mar. 1, 2016).
It is unprecedented for a court to order a company to write new code to disable the security of an existing product; never before has any court said that a manufacturer must alter a device so that the government can hack it. Indicative of the importance of the issue, as of March 11, seventy-six separate memoranda had been filed by amici curiae.
We expect a right to the security and privacy of our devices and communications. That right permits us to defend against intrusions by governments and criminals alike. Any erosion of that right sends us down the slippery slope to the abyss of no privacy at all; exposes our data and communications to anyone demonstrating their discoverability, including tribunals and agencies of other nations where privacy is nonexistent, and to hackers who gain access to the backdoor; and exposes us, as lawyers, to potential liability to our clients for failure to preserve, at all peril to ourselves, their secrets and confidences.
Editor’s note: This article was submitted on March 15, 2016. Subsequently, the FBI found a way to hack into Syed Rizwan Farook’s iPhone without Apple’s help. The government then asked a U.S. Magistrate judge in Riverside to vacate her order compelling Apple to assist the FBI in unlocking the iPhone.
Adam K. Levin is an inactive member of the New Jersey Bar, a former member of the Office of the Attorney General and Director of Consumer Affairs for the State of New Jersey, Chairman & Founder of IDT911 (IDentity Theft 911), Co-Founder of Credit.com, and a nationally recognized expert on privacy and data security. He is the author of Swiped: How to Protect Yourself in a World Full of Scammers, Phishers, and Identity Thieves (Public AffairsTM, an imprint of Perseus Books Group, 2015). J. Anthony Vittal is an active member of the State Bar of California, a former President of the Beverly Hills Bar Association, a past Chair of the Cyberspace Law Committee of the State Bar’s Business Law Section, a member of the Cyberspace Law Committee of the ABA Business Law Section, and a member of the Information Security Committee of the ABA Section on Science and Technology. Both speak and write extensively on issues of privacy and data security.
(The Idiot’s Guide to) Cybersecurity for Lawyers
by Denis Binder
No one person, company, or institution is immune from hacking. Most of us have to worry about identity theft and credit card fraud. Law firms also have to worry about the breach of confidential client information. There’s a saying that every home in Southern California has had or will have termites. It’s equally true today that every major law firm has been hacked, is being hacked, or will be hacked. The questions are not if, but when, hackers will attack a firm’s cybersecurity, and whether they will be successful.
Seemingly nothing is beyond the reach of remote hackers who have gotten into the Pentagon, State Department, United States Chamber of Commerce, Target, and even baby monitors, which have been turned into spy cameras in the house. Everyone is vulnerable.
As reported in the New York Times, the February 2015 internal report of the Citigroup Counter Intelligence Group on Law Firm Cybersecurity said law firms were at a high risk for cyberintrusions.1 The law firms are attractive targets because “they are repositories for confidential data on corporate deals and business strategies.”2 Thus, law firms and accounting firms can serve as a backdoor entry into internal client business affairs.
Law firms provide a target-rich environment, often combined with lax cybersecurity. They possess highly confidential client information, such as trade secrets, business plans, and the sordid histories of clients. This information can be very valuable to insider traders and competitors. Some intruders wish to access information; others wish to destroy it. The attacks can be highly sophisticated, often the work of state-sponsored hackers. The usual suspects are China,3 North Korea, and Russia. Assume that attempts will be made to hack into your files if your client is engaged in business transactions or litigation with a foreign enterprise.
Most large law firms have been the victims of hacking, but few are willing to go public for fear of revealing that confidential client information has been compromised. The fear is that the disclosure that a firm has been unable to maintain client confidentiality could be economically fatal. Large firms may be at greater risk in the sense that they represent more clients and possess greater confidential information than medium-size or smaller firms, but the reality is that all law firms are vulnerable.
Law firms are at legal risk if they fail to take reasonable steps to protect client security. The American Bar Association Model Rules of Professional Conduct impose standards on attorneys. Model Rule 1.1 requires a lawyer to provide “competent representation” of clients. Comment 8 to the Rule requires the lawyer to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology . . . .” Model Rules of Prof’l Conduct, R. 1.1, cmt. 8.
Rule 1.6(c) requires a lawyer to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” Comment 18 to Rule 1.6 provides a non-exclusive list of factors to be considered in the exercise of reasonable care. They include:
[T]he sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients (e.g., by making a device or important piece of software excessively difficult to use).
Model Rules of Prof’l Conduct, R. 1.6, cmt. 18.
Such a case, though, would not be an ordinary tort or legal malpractice matter for a lawyer. A law firm that publicly litigates the adequacy of its cybersecurity after critical client information has been improperly accessed invites a mass exodus of clients. The reality that the hack, which could have been initiated outside the country, may have violated California’s Computer Crime Law4 will be of little economic solace to the firm.
Lawyers also need to be conversant with the provisions of the Health Insurance Portability and Accountability Act (HIPAA).5 They need to protect not only the medical records of their staff, but also their clients’ information that they process.
Let’s look at fourteen simple precautions that will not eliminate risk but may reduce security breaches. The precautions should apply across the board to all personnel, including the most senior partners.
Threats can be internal or external. Internal security breaches can come from disgruntled current or former employees seeking vengeance, as well as from employees seeking economic gain by selling information. It is critical to cut off former employees’ access to firm systems as soon as their employment terminates. The situation is more complicated with current employees. For example, hospitals have experienced problems with staffers accessing the records of celebrity patients and then selling the information.
Computer systems can be programmed to digitally log every entry into a set of files, but that step may only reveal the identity of the leaker after a breach has been made public. A stronger precaution for law firms—and businesses in general—is to limit access to individual cases or files on a need-to-know basis. In essence, each client’s file or case should be in an electronic “lock box” available only to those on an approved list.
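The lock-box idea above can be sketched in a few lines of Python (hypothetical names): access is denied by default, granted only to names on the matter's approved list, and every attempt, successful or not, is logged as it happens rather than reconstructed after a breach:

```python
from datetime import datetime, timezone


class ClientLockBox:
    """Toy electronic lock box for one client matter: need-to-know
    access plus a real-time audit trail of every access attempt."""

    def __init__(self, approved):
        self.approved = set(approved)   # the matter's approved list
        self.audit_log = []             # (timestamp, user, allowed)

    def open(self, user: str) -> bool:
        allowed = user in self.approved
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((stamp, user, allowed))
        return allowed


matter = ClientLockBox(approved=["partner_a", "associate_b"])
granted = matter.open("associate_b")   # on the list: access granted
denied = matter.open("mailroom_c")     # not on the list: denied, but logged
```

Because denied attempts are logged too, the firm learns of a probing insider before a breach, not after one becomes public.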
External threats can be to access financial information and accounts, steal funds, engage in insider trading, gain a competitive advantage, discover incriminating information, or change or destroy documents. Adversarial law firms may also seek privileged litigation information on a case.
Law firms should have a cybersecurity plan in effect, as with emergency action plans in general, prior to a breach. Critical parts of the response are identification of how the breach occurred and the files that have been compromised. Steps can then be taken to prevent a recurrence of the breach. Outside experts, presumably contracted for in advance, should be brought in to examine the breach.
One final recommendation: Get Cyber Insurance. Absolute security cannot be guaranteed.
Denis Binder is a Professor at Chapman University, Dale E. Fowler School of Law, where he teaches Torts, Environmental Law, and Toxic Torts. He also has served as a consultant to a variety of organizations, ranging from the Army Corps of Engineers to Cesar Chavez and the United Farm Workers. He can be reached at email@example.com.