AWS

The Security Benefits of Infrastructure as Code
We have developed and delivered new ways to deliver infrastructure quickly and without these misconfigurations. Prevention is the only cure; we’ll talk about how you can implement this today.

OIDC for GitHub Actions
At Cloud Security Partners, we perform a lot of code reviews and Cloud Security Assessments. During these engagements, we see many different CI/CD patterns that cause us to raise our eyebrows. One situation in particular that we encounter relatively often is the unsafe use of AWS credentials. The CIS Benchmark for AWS indicates that Access Keys must be rotated every 90 days. And generally, IAM users should be avoided, instead roles should be utilized. OpenID Connect is an authentication standard
Show More >
IAM

Our Support For Cloudsplaining
We’re proud to announce that Cloud Security Partners will be forking and maintaining Cloudsplaining, the popular cloud IAM tool. Open source and giving back to the community are very important to us and something we try to do often via contributions and free training! The cloud security community has built some amazing tools from Prowler to Parliment and obviously, Cloudsplaining. Cloudsplaining plays an important role in that it gives security teams insight into their IAM policies and possible

Finding Strings Everywhere with Roles Anywhere
While scrolling Twitter, I came across this tweet talking about the new AWS feature Roles Anywhere. I was messing around with the aws_signing_helper and got this panic. The trace path doesn't make me feel super confident about the security of their build process. Not that I was happy about the "download this from a random S3 bucket" distribution method either. pic.twitter.com/B58g8fOk49 — David Adams (@daveadams) July 13, 2022 Roles Anywhere is a new way to use IAM roles on systems that aren
Show More >
CLOUD

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

The Security Benefits of Infrastructure as Code
We have developed and delivered new ways to deliver infrastructure quickly and without these misconfigurations. Prevention is the only cure; we’ll talk about how you can implement this today.
Show More >
NEWS

Upcoming Events at CSP!
We're starting off the year with a few big events we're speaking and training at. Get ready for a deep dive into the latest in cloud computing and cybersecurity with our very own experts, Mike McCabe and John Poulin. Mike McCabe at Cloud Connect - DeveloperWeek First up, Mike McCabe is speaking at Cloud Connect, part of DeveloperWeek, in February. He's going to cover some critical aspects of cloud computing around Terraform and IAC security. He'll cover how you can use Terraform to gain acces

Infrastructure as Code Security
I was excited to have the opportunity to speak recently at Kernelcon and BSidesNYC about one of my favorite topics, infrastructure as code (IAC). Having helped multiple companies build IAC security programs, talking about what we've learned is always enjoyable. Companies moving to centralized and well-managed infrastructure as code pipelines with built-in security controls is a massive security win. However, utilizing these tools comes with certain risks that we must manage. As I outlined in m
Show More >
IAC

The Security Benefits of Infrastructure as Code
We have developed and delivered new ways to deliver infrastructure quickly and without these misconfigurations. Prevention is the only cure; we’ll talk about how you can implement this today.

LASCON Recap - Infrastructure as Code
Recently, we had the privilege of participating in and sponsoring the Lonestar Application Security Conference (LASCON). Our CEO, Michael McCabe, and Ken Toler delivered a training session and a talk on exploiting Terraform for remote code execution; both received a fantastic turnout. In between operating our booth, we had the opportunity to attend some insightful talks. During the event, one presentation that stood out was delivered by Bug Bounty and focused on how to manage a bug bounty progr
Show More >
TERRAFORM

The Hidden Dangers of Using Terraform's Remote-Exec Provisioner
Terraform is a powerful infrastructure as code tool that can support multi-cloud deployments. Terraform provides consistent and reliable deployments for cloud infrastructure. But as with every tool there are hidden dangers built-in we need to check for! The remote-exec provisioner in Terraform can be a valuable tool, providing the ability to execute scripts and commands on remote resources. However, it can pose significant security risks to your infrastructure without proper control and awarene

The Security Benefits of Infrastructure as Code
We have developed and delivered new ways to deliver infrastructure quickly and without these misconfigurations. Prevention is the only cure; we’ll talk about how you can implement this today.
Show More >
INCIDENT RESPONSE

Don't let your containers escape! Update runc & Docker Now!
TL;DR: Update runc and associated software (such as Docker) to the latest version to address several container breakout vulnerabilities. The security research team at Snyk recently disclosed vulnerabilities in runc <= 1.11.11, which can result in container escapes. Container escaping allows for access to the host operating system, reducing the security boundary of the container runtime. These vulnerabilities could be exploited through the execution of a malicious image or by building an image w

Exploring Amazon Athena in Incident Response: A Practical Approach
Recently, our team was pulled into an incident response engagement. As part of the breach investigation, we needed to review months of extensive nginx log files stored on Amazon S3 to determine an application issue causing data leakage. Complicating matters, we had no access to our traditional SIEM tools, prompting us to explore alternative solutions. We explored leveraging Amazon Athena to directly query the logs stored in S3. The post will showcase Amazon Athena's relevance in Incident Respon
Show More >
SECURITY

The Security Absolutist
All security practitioners know the Security Absolutist. It’s the practitioner who has a plan before the context, is unapologetic in their approach to security, and is unwaveringly confident in their solution. Seemingly always frustrated with the current state of security in business and consistently angry at why “people can’t just…” the Security Absolutist is a pained and frustrated individual, but we can help. Security Absolutism is a dangerous game, constantly creating conflict and boundarie

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th
Show More >
INFOSEC

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

Ignore Previous Instruction: The Persistent Challenge of Prompt Injection in Language Models
Prompt injections are an interesting class of emergent vulnerability in LLM systems. It arises because LLMs are unable to differentiate between system prompts, which are created by engineers to configure the LLM’s behavior, and user prompts, which are created by the user to query the LLM. Unfortunately, at the time of this writing, there are no total mitigations (though some guardrails) for Prompt Injection, and this issue must be architected around rather than fixed. In this blog post, we will
Show More >
LLM

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

Ignore Previous Instruction: The Persistent Challenge of Prompt Injection in Language Models
Prompt injections are an interesting class of emergent vulnerability in LLM systems. It arises because LLMs are unable to differentiate between system prompts, which are created by engineers to configure the LLM’s behavior, and user prompts, which are created by the user to query the LLM. Unfortunately, at the time of this writing, there are no total mitigations (though some guardrails) for Prompt Injection, and this issue must be architected around rather than fixed. In this blog post, we will
Show More >
AI

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

Ignore Previous Instruction: The Persistent Challenge of Prompt Injection in Language Models
Prompt injections are an interesting class of emergent vulnerability in LLM systems. It arises because LLMs are unable to differentiate between system prompts, which are created by engineers to configure the LLM’s behavior, and user prompts, which are created by the user to query the LLM. Unfortunately, at the time of this writing, there are no total mitigations (though some guardrails) for Prompt Injection, and this issue must be architected around rather than fixed. In this blog post, we will
Show More >
GENAI

Introduction to LLM Security
In the dynamic world of AI today, Large Language Models (LLMs) stand out as one of the most interesting and capable technologies. The ability to answer arbitrary prompts has numerous business use cases. As such, they are rapidly being integrated into a variety of different applications. Unfortunately, there are many security challenges that come with LLMs that may not be well understood by engineers. Here at Cloud Security Partners, we’ve performed several engagements on applications that integ

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th
Show More >
CHATGPT

Preventing Overreliance: Proper Ways to Use LLMs
LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

Introduction to LLM Security
In the dynamic world of AI today, Large Language Models (LLMs) stand out as one of the most interesting and capable technologies. The ability to answer arbitrary prompts has numerous business use cases. As such, they are rapidly being integrated into a variety of different applications. Unfortunately, there are many security challenges that come with LLMs that may not be well understood by engineers. Here at Cloud Security Partners, we’ve performed several engagements on applications that integ
Show More >