SECURITY

Screenshot-2023-09-29-at-10.05.03-AM.png

The Security Absolutist

All security practitioners know the Security Absolutist. It’s the practitioner who has a plan before the context, is unapologetic in their approach to security, and is unwaveringly confident in their solution. Seemingly always frustrated with the current state of security in business and consistently angry at why “people can’t just…” the Security Absolutist is a pained and frustrated individual, but we can help. Security Absolutism is a dangerous game, constantly creating conflict and boundarie

Screenshot-2024-08-08-at-4.15.31-PM.png

Preventing Overreliance: Proper Ways to Use LLMs

LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

Picture1.png

Ignore Previous Instruction: The Persistent Challenge of Prompt Injection in Language Models

Prompt injections are an interesting class of emergent vulnerability in LLM systems. It arises because LLMs are unable to differentiate between system prompts, which are created by engineers to configure the LLM’s behavior, and user prompts, which are created by the user to query the LLM. Unfortunately, at the time of this writing, there are no total mitigations (though some guardrails) for Prompt Injection, and this issue must be architected around rather than fixed. In this blog post, we will

connor-mollison-3rkosR_Dgfg-unsplash.jpg

Introduction to LLM Security

In the dynamic world of AI today, Large Language Models (LLMs) stand out as one of the most interesting and capable technologies. The ability to answer arbitrary prompts has numerous business use cases. As such, they are rapidly being integrated into a variety of different applications. Unfortunately, there are many security challenges that come with LLMs that may not be well understood by engineers. Here at Cloud Security Partners, we’ve performed several engagements on applications that integ

background.png

The Security Benefits of Infrastructure as Code

We have developed and delivered new ways to deliver infrastructure quickly and without these misconfigurations. Prevention is the only cure; we’ll talk about how you can implement this today.

nils_public_rds_security_open_door_castle_cloud_4k_future_71d20f54-e378-40e2-9c5f-95455aff475e.png

RDS Revealed? Time to Give It Some Shade!

By: John Poulin At Cloud Security Partners, we have audited thousands of customer AWS accounts as part of our security reviews. Across our customers, roughly 5% of the AWS Relational Database Service (RDS) instances we analyze are publicly accessible. A general rule of thumb across the security industry is that resources generally should not be directly accessible on the Internet, especially databases. More often than not, resources can be deployed behind controls, such as Load Balancers, Priva

Subscribe



Subscribe to Cloud Security Partners Blog

Don't miss out on the latest news. Sign up now to get access to the library of members-only articles.

Subscribe