AI

Screenshot-2024-08-08-at-4.15.31-PM.png

Preventing Overreliance: Proper Ways to Use LLMs

LLMs have a very uncanny ability of being able to solve problems in a wide variety of domains. Unfortunately, they also have a tendency to fail catastrophically. While an LLM may be able to provide accurate responses 90% of the time, due to nondeterministic behavior, one must be prepared for cases when it gives blatantly wrong or malicious responses. Depending on the use case, this could result in hilarity or, in very bad cases, security compromises. In this blog post, we’ll talk about #9 on th

Picture1.png

Ignore Previous Instruction: The Persistent Challenge of Prompt Injection in Language Models

Prompt injections are an interesting class of emergent vulnerability in LLM systems. It arises because LLMs are unable to differentiate between system prompts, which are created by engineers to configure the LLM’s behavior, and user prompts, which are created by the user to query the LLM. Unfortunately, at the time of this writing, there are no total mitigations (though some guardrails) for Prompt Injection, and this issue must be architected around rather than fixed. In this blog post, we will

connor-mollison-3rkosR_Dgfg-unsplash.jpg

Introduction to LLM Security

In the dynamic world of AI today, Large Language Models (LLMs) stand out as one of the most interesting and capable technologies. The ability to answer arbitrary prompts has numerous business use cases. As such, they are rapidly being integrated into a variety of different applications. Unfortunately, there are many security challenges that come with LLMs that may not be well understood by engineers. Here at Cloud Security Partners, we’ve performed several engagements on applications that integ

-4dd1-4b9f-b55d-07c3b7ffbfa4.png

Gen AI Security: An Introduction and Resource Guide

Like many industries, Artificial Intelligence has taken the security industry by storm. Security practitioners now are faced with the challenge of understanding new classifications of threats and new techniques of attack. Threat Actors are utilizing AI to improve their attacks, while also exploiting AI services. AI and Generative AI utilize many types of new technologies to build services that are used to improve efficiency and offer new solutions to problems of the past. Of course, along with t

Subscribe



Subscribe to Cloud Security Partners Blog

Don't miss out on the latest news. Sign up now to get access to the library of members-only articles.

Subscribe