Dr. Avi Rubin
Since Harbor Labs performed its first medical device cybersecurity assessment in late 2015, there has been a consistent and obvious trend in the origins of our client engagements. More than half of all clients (62%) who have contracted with Harbor Labs for an assessment in support of a regulatory submission did so after one of the following conditions had occurred:
- Negative premarket feedback
- Rejected submission
- Discovery of a postmarket vulnerability
Because each of these rejections represents misspent time and resources for both manufacturers and regulators, and in some cases imposes avoidable delays in getting clinical products to market, it is worth examining the leading factors that result in a rejected regulatory submission.
Our cyberscience staff recently met to compare experiences and identify the most common issues we have observed that ultimately led to bad regulatory outcomes. The following observations are anecdotal, but are nonetheless real-world examples of the more common assumptions and procedural oversights made by device manufacturers in their submissions that we have seen lead to FDA rejection:
“We use a proprietary communications protocol, which is in itself a sufficient security measure.”
Examiners have consistently rejected this argument. Proprietary mechanisms are security by obscurity: an attacker with unbounded time can reverse engineer and circumvent them, a fact that has been demonstrated through research by FDA staff themselves. Use only well-vetted, publicly reviewed cryptographic algorithms, methods, and systems.
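To make the contrast concrete, the sketch below shows authenticated encryption of a device payload using a vetted primitive (AES-GCM) from the widely reviewed Python cryptography package rather than a homegrown scheme. The device identifier, payload, and key handling are purely illustrative and are not drawn from any specific submission.

```python
# Illustrative only: authenticated encryption with a vetted primitive (AES-GCM)
# from the `cryptography` package, rather than a proprietary scheme.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_telemetry(key: bytes, plaintext: bytes, device_id: bytes) -> bytes:
    """Encrypt a telemetry payload; the device ID is bound as associated data."""
    nonce = os.urandom(12)                      # unique nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, device_id)

def decrypt_telemetry(key: bytes, blob: bytes, device_id: bytes) -> bytes:
    """Raises if the ciphertext or associated data was altered in transit."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, device_id)

key = AESGCM.generate_key(bit_length=256)       # key management not shown here
blob = encrypt_telemetry(key, b'{"hr": 72}', b"pump-0042")   # hypothetical names
assert decrypt_telemetry(key, blob, b"pump-0042") == b'{"hr": 72}'
```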
“After performing our own internal risk assessment and threat model, we concluded that a penetration test was not necessary to support our 510(k) submission.”
A risk assessment that does not trace to, or include, cybersecurity testing will likely be rejected. Every medical device that incorporates software should undergo comprehensive cybersecurity testing. Moreover, the tests should trace back to cybersecurity requirements, and the requirements should trace back to the risk assessment.
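As a simple illustration of that traceability, the hypothetical sketch below checks that every identified risk maps to at least one cybersecurity requirement and every requirement to at least one executed test. All identifiers are invented for the example.

```python
# Hypothetical traceability check: every risk should map to at least one
# cybersecurity requirement, and every requirement to at least one test.
risks_to_requirements = {
    "RISK-01 unauthorized BLE pairing": ["SEC-REQ-03"],
    "RISK-02 firmware tampering":       ["SEC-REQ-05", "SEC-REQ-06"],
    "RISK-03 PHI exposure in transit":  [],            # gap: no requirement yet
}
requirements_to_tests = {
    "SEC-REQ-03": ["PENTEST-12"],
    "SEC-REQ-05": ["STATIC-04", "FUZZ-09"],
    "SEC-REQ-06": [],                                  # gap: requirement untested
}

untraced_risks = [r for r, reqs in risks_to_requirements.items() if not reqs]
untested_reqs  = [q for q, tests in requirements_to_tests.items() if not tests]

print("Risks with no requirement:", untraced_risks)
print("Requirements with no test:", untested_reqs)
```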
“We provided regulators with exhaustive documentation and our submission was still rejected.”
This typically occurs when different elements within a large organization produce the components of a submission independently. The content can become fragmented, and very often lacks traceability. Avoid providing document dumps that overwhelm the examiner. Logical sequencing, format consistency, organization, and cross-document traceability are more compelling than volume.
“Regulators questioned the qualifications of my testing group.”
It is not enough to simply provide a table of test descriptions with every result marked “Passed”. Examiners want to know who specifically performed the tests, the details of the analysis performed, and the credentials of the test engineers. Identify your testing group by name and, wherever possible, include their attestation letter in your submission.
“We don’t need to perform robustness testing (DoS, DDoS) because our service platform or cloud provider offers this protection as part of their subscription service.”
It is important to understand whether the cloud resources where medical software and data reside automatically scale and provide firewall and anomaly-detection functionality. If a particular cloud resource (an AWS EC2 instance, for example) does not provide these services, robustness testing should be performed and included in the submission. In most cases, executing robustness testing will also require the permission of the cloud provider. In any case, when a component of a medical system resides in the cloud, vendor documentation that specifically identifies the cloud service being used and cites the security features it provides is essential for the reviewer.
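As one illustration of what minimal robustness testing might look like, the sketch below drives concurrent requests against a hypothetical device-gateway endpoint and reports failures and worst-case latency. The endpoint, request counts, and timeouts are assumptions, and this kind of test should only be run in a non-production environment with the cloud provider's written permission.

```python
# A minimal robustness-test sketch. The endpoint and parameters are hypothetical;
# run only in a test environment and with the cloud provider's permission.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "https://device-gateway.example.com/health"   # hypothetical
REQUESTS, WORKERS, TIMEOUT = 500, 50, 5                   # illustrative values

def probe(_):
    start = time.monotonic()
    try:
        with urlopen(ENDPOINT, timeout=TIMEOUT) as resp:
            return resp.status, time.monotonic() - start
    except OSError:                                       # covers URLError, timeouts
        return None, time.monotonic() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(probe, range(REQUESTS)))

failures = sum(1 for status, _ in results if status != 200)
worst = max(latency for _, latency in results)
print(f"failures: {failures}/{REQUESTS}, worst latency: {worst:.2f}s")
```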
“Our secure communication protocol was rejected.”
Nonsecure protocols such as Telnet and FTP are obvious red flags to an examiner. But even secure protocols can be rejected when they are deployed in a deprecated or insecure configuration. A common example is TLS 1.2: the protocol version itself is still widely accepted, but a TLS 1.2 deployment that permits deprecated or weak ciphers in its cipher suite is likely to be flagged as insecure.
Public clouds and third-party web services and platforms often do not support the latest version of TLS. In those cases, the submission must document and make clear that the manufacturer, as the client, will negotiate the highest available protocol version and the strongest cipher suite.
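A minimal sketch of such a client-side negotiation policy, using Python's standard ssl module, is shown below: it pins a floor of TLS 1.2, allows TLS 1.3 where the server supports it, and restricts the offered TLS 1.2 cipher suites to forward-secret AEAD ciphers. The hostname is hypothetical.

```python
# Client-side negotiation policy sketch: TLS 1.2 minimum, TLS 1.3 preferred when
# available, and a restricted TLS 1.2 cipher list. Hostname is hypothetical.
import socket
import ssl

context = ssl.create_default_context()                  # certificate verification on
context.minimum_version = ssl.TLSVersion.TLSv1_2        # refuse anything older
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")      # TLS 1.2 suites offered

host = "api.cloud-vendor.example.com"                   # hypothetical service
with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("negotiated:", tls.version(), tls.cipher())
```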
“We had a cybersecurity consultant perform in-depth, independent pen testing but the examiner felt the test data lacked sufficient detail.”
Submissions are sometimes rejected because they lack a specific type of cybersecurity test. At a minimum, testing should include penetration testing, static analysis, dynamic analysis, fuzz testing, and robustness testing. A pen test that consists of nothing more than the output of COTS pen test tools will rarely be sufficient for approval. Customized testing that exercises the unique clinical features of a device is far more likely to demonstrate the thoroughness and thoughtfulness that examiners are looking for.
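As a toy illustration of customized testing, the sketch below fuzzes a hypothetical parser for a clinical command format by mutating a valid message and watching for unexpected failure modes. The parser, message format, and iteration counts are invented for the example; a real submission would pair coverage-guided fuzzing with the other test types listed above.

```python
# Toy fuzzing sketch against a hypothetical clinical-message parser.
# parse_infusion_command is an assumed name, not a real product API.
import random

def parse_infusion_command(msg: bytes) -> dict:
    """Stand-in parser for a hypothetical 'DOSE:<rate>:<minutes>' command."""
    kind, rate, minutes = msg.decode("ascii").split(":")
    if kind != "DOSE":
        raise ValueError("unknown command")
    return {"rate_ml_h": float(rate), "duration_min": int(minutes)}

seed = b"DOSE:12.5:30"
random.seed(0)
crashes = 0
for _ in range(10_000):
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):                # mutate a few bytes
        data[random.randrange(len(data))] = random.randrange(256)
    try:
        parse_infusion_command(bytes(data))
    except (ValueError, UnicodeDecodeError):
        pass                                             # expected rejections
    except Exception:
        crashes += 1                                      # unexpected failure mode
print("unexpected exceptions:", crashes)
```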
About Dr. Avi Rubin
Chief Scientist, Harbor Labs
Dr. Rubin is the founder and director of the Johns Hopkins University Health and Medical Security Lab where his work is advancing medical device security and future healthcare networks. He is a Professor of Computer Science at Johns Hopkins University where his coursework is developing the next generation of medical security professionals. Dr. Rubin has testified on national healthcare cybersecurity policy before the U.S. House and Senate on multiple occasions and has authored several books on computer security. He is a frequent keynote speaker at industry and academic conferences and delivered widely viewed TED talks in 2011 and 2015. He holds a Ph.D. from the University of Michigan in Applied Cryptography and Computer Security.
Related Insights
Guidelines for Source Code Comparison in Litigation
Harbor Labs Director of Firmware Security Dr. Paul Martin describes the strategies, tools, and methodologies used at Harbor Labs when performing source code comparisons in support of litigation consulting and investigation engagements.
Guidelines for Source Code Quality Assessments
Dr. Paul Martin describes the strategies and computer science disciplines involved in performing a code quality assessment, and how these processes can be used to produce a defensible, evidence-based conclusion on the coding quality of a target codebase.
Regulatory Science Meets Cyber Science; Why It’s So Much More than a Pen Test
Harbor Labs CEO Nick Yuran distinguishes cybersecurity from cyberscience and explains why understanding the shared scientific disciplines of regulators and security professionals is important in achieving positive regulatory outcomes.