It’s always easier to make a mess than to clean one up. Here at KQED, an NPR and PBS member station in San Francisco and one of the largest public media companies in the US, we spent months in a Rube Goldberg machine of byzantine workflows and inconvenient workarounds after a cyberattack disabled our computing systems this summer.
Hackers, who might be from the other side of the world or just down the block, penetrated our computer network and installed ransomware that encrypted our files, software, and servers. While we were able to fend off the attack, as I detailed in a post for KQED, the experience exposed our vulnerabilities and yielded lessons that may be helpful to other news organizations around the country.
With the malware spreading, our IT staff began shutting down critical systems. The phones went dead. The internet vanished. Dozens of hours of audio interviews disappeared. The perpetrators demanded thousands of dollars in bitcoin for the safe return of our files, which we did not pay: the FBI and computer security consultants counseled that paying is a fool’s game.
Our IT staff fixed the most fundamental problems within a few days. But they could not quickly rig an entirely new network setup that would close the security holes. We have been hobbling toward recovery ever since.
On the bright side, very little disruption was visible to our audiences on TV, the web, and radio. Plus, we learned a thing or two about crisis management, lesson number one being to try to avoid the crisis in the first place. Any heavily networked organization lives somewhere on a continuum between absolute security and unfettered convenience. At KQED, we leaned heavily toward the latter.
At least some of the things that made KQED vulnerable to attack are practices likely in place, right now, at media companies across the US:
- Allowing staff to download and install software from the internet
- Deploying old-school antivirus programs that could be tricked by malicious code
- Maintaining an IT-to-staff ratio significantly below the industry average of 6.5 percent of all employees
- Grouping together different platforms we use for TV, radio, and other operations under one domain, so that employees could use the same user ID and password for each
All of these holes have been or are in the process of being plugged. But nothing stays foolproof for long, and you, too, may find yourself staring at a grammatically maladroit on-screen ransom note while futilely attempting to Ctrl-Alt-Delete your computer back to life. If that day comes, you and your organization should keep these six things in mind:
- When nothing’s working, triage is critical. In KQED’s case, our IT staff focused on checking and securing membership data (which was stored off-site and so went unbreached), paying staff and bills, and keeping daily radio news on the air. What had to wait: networked printing, high-speed internet, and access to stored documents and audio.
- Protect your IT staff. Ours found themselves working 12-to-20-hour shifts while facing a deluge of help requests from several hundred bewildered employees. To relieve that pressure, the company established a walk-in help desk in a central area of the building, freeing network administrators to work on the main problems. Be aware, too, of the enormous burden shouldered by those the organization is counting on to fix things. Our network systems engineer rated the stress as “way worse” than that caused by a past bout he had with cancer.
- Don’t over-promise, but do over-communicate. As the aftereffects of the attack lingered, projections for specific fixes turned out to be overly optimistic. This in turn created more stress, especially for the radio news team, which was suffering from “workaround fatigue,” as one anchor put it. A daily morning briefing by the head of IT for editorial team managers helped bridge the gap.
- Reach out to other organizations. After the first media report about the attack ran in the San Francisco Chronicle, other news organizations contacted KQED to commiserate and swap ideas on how to survive, said Ethan Lindsey, the managing editor of KQED News. “I wish I had those connections before, so I could call up and go, ‘Hey, this is what we’re suffering, what should we do?’” he said.
- When your network is down, the cloud is the silver lining. Because production tools are typically available online these days, the disabling of in-house systems does not have to turn your operations into The Day the Earth Stood Still. Cloud-based applications like Slack, Google Drive, and Dropbox can substitute for networked communications, storage, and file transfers. None of our websites went down, because our WordPress sites are hosted in the cloud, and while the retrieval of many files from in-house servers was delayed, employees who stored files in the cloud did not miss a beat. Think about incorporating cloud storage options into your workflows now.
- Prepare for internal debates and negotiations. Now that we’re basically up and running again, the discussion about how to balance security and convenience has begun. IT has walled off the news team’s audio management system from the internet, a less fluid but more secure production environment. The pendulum has swung decidedly toward security, Lindsey said, and there will be more discussions to come on reaching a happy medium.
Finally, later this month, KQED will take an additional step to prepare for the next time: Managers will game out solutions in response to hypothetical crisis scenarios, an idea that predated the attack.
“We’ve been wanting to do this for forever,” said Lindsey.