As organizations increasingly rely on cloud technologies, open-source software, and explore the potential of AI, the importance of robust security practices has never been greater. Still, each of these technologies has its own distinct domain, and it is easy to overlook best practices. In this episode, ControlPlane CEO Andrew Martin helps us connect the dots between securing these critical technologies to build more secure, resilient systems.
Edited transcription
With a distinguished career spanning both a military government agency and the tech industry, ControlPlane CEO Andrew Martin holds security as a design requirement, not an afterthought. “When people ask me, ‘what do you do? What is security?’ My answer is always, it’s just DevOps with security requirements as acceptance criteria,” he says. “Mapping systemic threats back to existential business risks,” he believes, helps him convey the relevance of security to his clients.
Balancing speed and security in DevOps
The growing demand for full-stack, full-spectrum developers can lead to security oversights. Developers are expected to “focus on getting something out of the load balancer rather than looking at application level security,” says Andrew. Organizations rushing to adopt cloud technologies without fully understanding how to set them up properly has become a prevalent issue in cloud security. In this regard, worst practices include using default credentials, overly permissive access controls, lack of encryption for data at rest or in transit, and exposing sensitive services to the public internet.
To prevent this issue, Andrew recommends, at the earliest stage of implementation, to “have a security team or champion that can help to threat model, bring all the stakeholders together so that it’s clear, multiple different views are being respected, and it’s not just a minimum path to production.” Moreover, as best practices, Andrew suggests implementing “pipelines that build out all that good DevSecOps stuff,” including:
- Linting and value analysis for configuration files (like YAML for Kubernetes)
- Automated testing and scanning in the deployment pipeline
- Implementing “best practice, secure by default deployments of infrastructure’s code modules”
To Andrew’s understanding, it is only then that you can focus on how secure the software itself is. At this stage, his angle comes down to “just assume breach”: “We assume that the application is compromised and then the topology of the application that’s being built is then built with resilience in mind.” In this way, the mindset for building secure software is “trying to build a system that hits as many compromises as possible, so that the client is still able to get something to production quickly, with quality testing, both the system, the application, and the security of everything that goes with it, and balancing those compromises that are inherent in life.”
Likewise, CV-driven development, in which individuals put the development of their skills and tech stack preferences as priority, can affect security in exchange for speed. Placing “intellectual stimulation over business value,” Andrew explains, has its place at both ends of the development spectrum.
On one end, you might have a highly skilled “10x developer” who rapidly builds complex systems, potentially helping the business hit revenue targets. However, this approach often comes at the cost of security considerations, as there’s “no time to stop and breathe and consider what are the potential side effects of this type of behavior.” On the other hand, Andrew describes organizations that prioritize “absolutely technically correct architecture.” While these systems may be technically beautiful, they risk failing in the market if they don’t attract customers.
Securing the supply chain in open source software
Leveraging open-source software allows companies to reduce their costs significatively. Still, they need to be aware of the security implications of this kind of software. According to Andrew, with open-source software, “the whole process of supply chain security is the producer-consumer problem; anytime you modify something and share it, you become a producer,” and recalls that “we went from everybody happily trusting everything they throw to production so we now need to trust but verify or, in fact, verify before we even trust.”
Securing the supply chain has led to the development of software composition analysis and Software Bills of Materials (SBOMs). However, the current methods are not enough. SBOMs don’t always require hashes, and package URLs can point to mutable sources. Besides, version-based scanning is fallible due to the dynamic nature of repositories.
As a way to secure the supply chain Andrew lists a variety of solutions. The idea is to treat open source security as a form of “externalized insider threats,” and, ultimately, assume that perfect security is impossible, so focus on detection, remediation, and resilience:
Detection
- Hash dependencies and conduct thorough scans of the source tree to detect known implant signatures.
- Use dedicated tools to analyze code for suspicious patterns or behaviors.
Remediation
- Deploy applications in containers with restricted permissions and run Linux security modules like Seccomp, AppArmor, and SELinux.
- Ensure that Security Operations Centers (SOCs) and Security Information and Event Management (SIEM) systems are in place.
Resilience
- Implement application-level segregation and micro-segmentation to isolate sensitive components.
- Prioritize security throughout the development process, building it into the architecture from the beginning.
- Ensure that systems can be quickly restored from backups.
While there isn’t a single, comprehensive framework, some tools and practices are emerging to address supply chain security in the open-source world. One way to achieve so is by reproducibility, the ability to consistently recreate a software build or environment. Linux distribution NixOS and its package manager are notable examples, offering reproducibility by default. Withal, as Andrew points out, reproducibility goes beyond the tooling: “Reproducibility also comes down to dispelling non-determinism. So are you using any sort of timestamp anywhere in your build log outputs?… Are you using a different locale from one place to another?” Once these inconsistencies are eliminated, reproducibility becomes more achievable.
Withal, reproducibility alone isn’t a silver bullet. “The ultimate question becomes, is reproducibility the answer or is it one of the symptoms of a solution? And unfortunately, I would love it to be the answer,” Andrew confesses. It’s essential to consider supply chain attacks, which can occur even with reproducible builds. Compromised build systems and the ingestion of untrusted third-party code are two primary concerns.
To mitigate these risks, organizations can implement duplicate pipelines, building the same artifact in a separate environment can detect compromised build infrastructure. They must also evaluate vulnerabilities critically: Instead of blindly halting production due to CVEs, assess the actual exploitability and apply necessary mitigations. Lastly, Andrew also recommends using VEX documents as security advisory to provide clear information about vulnerabilities and their mitigations to address concerns from security scanners.
Integrating Gen AI into software supply chains
The presence of AI in software development is inevitable. “All of our clients have had CEO-driven initiatives to consume some form of AI system,” Andrew says, and comments on the very specific security issues AI comes with. AI models, especially LLMs, often incorporate stochastic elements that make their behavior unpredictable. Also, AI models can inadvertently leak sensitive data or be compromised by malicious actors. However, it is challenging to verify the source and quality of data used to train AI models. As such, Andrew calls to be wary when introducing AI into existing systems and carefully consider governance frameworks and compliance regulations.
To securely implement AI, Andrew calls for combining software and policy strategies:
- Implementing firewalls, prompt filtering, and other software security controls.
- Developing methods to assess the behavior and potential biases of AI models.
- Establishing processes to track the origin and quality of data used to train models.
- Creating guidelines and policies to ensure responsible AI development and use.
The bottom line
Follow Andrew on X at @sublimino and on Linkedin.
Visit control-plane.io to learn more about its approach to cybersecurity and check out its blog for the latest news and strategies.
Originally published at https://semaphoreci.com on August 27, 2024.