This post is co-written with Maciej Mensfeld from Mend.io.
In the ever-evolving landscape of cybersecurity, the ability to effectively analyze and categorize Common Vulnerabilities and Exposures (CVEs) is crucial. This post explores how Mend.io, a cybersecurity firm, used Anthropic Claude on Amazon Bedrock to classify and identify CVEs containing specific attack requirement details. By using the power of large language models (LLMs), Mend.io streamlined the analysis of over 70,000 vulnerabilities, automating a process that would have been nearly impossible to accomplish manually. This capability saved an estimated 200 days of human experts’ work and allows Mend.io to deliver higher-quality verdicts to its customers, helping them prioritize vulnerabilities more effectively and giving Mend.io a competitive advantage. This initiative not only underscores the transformative potential of AI in cybersecurity, but also provides valuable insights into the challenges and best practices for integrating LLMs into real-world applications.
The post delves into the challenges faced, such as managing quota limitations, estimating costs, and handling unexpected model responses. We also provide insights into the model selection process, results analysis, conclusions, recommendations, and Mend.io’s future outlook on integrating artificial intelligence (AI) in cybersecurity.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
Mend.io is a cybersecurity company dedicated to safeguarding digital ecosystems through innovative solutions. With a deep commitment to using cutting-edge technologies, Mend.io has been at the forefront of integrating AI and machine learning (ML) capabilities into its operations. By continuously pushing the boundaries of what’s possible, Mend.io empowers organizations to stay ahead of evolving cyber threats and maintain a proactive, intelligent approach to security.
Uncovering attack requirements in CVE data
In the cybersecurity domain, the constant influx of CVEs presents a significant challenge. Each year, thousands of new vulnerabilities are reported, with descriptions varying in clarity, completeness, and structure. These reports, often contributed by a diverse global community, can be concise, ambiguous, or lack crucial details, burying critical information such as attack requirements, potential impact, and suggested mitigation steps. The unstructured nature of CVE reports poses a significant obstacle in extracting actionable insights. Automated systems struggle to accurately parse and comprehend the inconsistent and complex narratives, increasing the risk of overlooking or misinterpreting vital details—a scenario with severe implications for security postures.
For cybersecurity professionals, one of the most daunting tasks is identifying the attack requirements—the specific conditions and prerequisites needed for a vulnerability to be successfully exploited—from these vast and highly variable natural language descriptions. Determining whether attack requirements are present or absent is equally crucial, as this information is vital for assessing and mitigating potential risks. With tens of thousands of CVE reports to analyze, manually sifting through each description to extract this nuanced information is impractical, given the sheer volume of data involved.
The decision to use Anthropic Claude on Amazon Bedrock and the advantages it offered
In the face of this daunting challenge, the power of LLMs offered a promising solution. These advanced generative AI models excel at understanding and analyzing vast amounts of text, making them well suited to sifting through the flood of CVE reports to pinpoint those containing attack requirement details.
The decision to use Anthropic Claude on Amazon Bedrock was a strategic one. During evaluations, Mend.io found that although other LLMs like GPT-4 also showed strong performance in analyzing CVE descriptions, Mend.io’s specific requirements were better aligned with Anthropic Claude’s capabilities. Mend.io structured its prompts with XML tags such as <example-attack-requirement>. When Mend.io evaluated models with both structured and unstructured prompts, Anthropic Claude’s ability to follow the structured prompts precisely and include the expected tags made it the better fit for Mend.io’s use case during their testing.
Anthropic Claude’s ability to recognize XML tags within prompts gave it a distinct advantage. This capability enabled Mend.io to structure the prompts in a way that improved precision and value, ensuring that Anthropic Claude’s analysis was tailored to Mend.io’s specific needs. Furthermore, the seamless integration with Amazon Bedrock provided a robust and secure platform for handling sensitive data. The proven security infrastructure of AWS gave Mend.io the confidence to process and analyze CVE information without compromising data privacy and security—a critical consideration in the world of cybersecurity.
Crafting the prompt
Crafting the perfect prompt for Anthropic Claude was both an art and a science. It required a deep understanding of the model’s capabilities and a thorough process to make sure Anthropic Claude’s analysis was precise and grounded in practical applications. Mend.io composed the prompt with rich context, provided examples, and clearly defined the differences between attack complexity and attack requirements as defined in the Common Vulnerability Scoring System (CVSS) v4.0. This level of detail was crucial to make sure Anthropic Claude could accurately identify the nuanced details within CVE descriptions.
The use of XML tags was a game-changer in structuring the prompt. These tags allowed them to isolate different sections, guiding Anthropic Claude’s focus and improving the accuracy of its responses. With this unique capability, Mend.io could direct the model’s attention to specific aspects of the CVE data, streamlining the analysis process and increasing the value of the insights derived.
With a well-crafted prompt and the power of XML tags, Mend.io equipped Anthropic Claude with the context and structure necessary to navigate the intricate world of CVE descriptions. This enabled the model to pinpoint the critical attack requirement details that arm security teams with invaluable insights for prioritizing vulnerabilities and fortifying defenses.
The following example illustrates how to craft a prompt effectively using tags with the goal of identifying phishing emails:
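Mend.io’s production prompt is not reproduced here, but the following minimal sketch shows the pattern: XML tags separate the instructions, a labeled example, and the input to classify, and the prompt is sent to Claude through the Amazon Bedrock runtime API. The tag names, email text, and model ID are illustrative assumptions rather than Mend.io’s actual configuration.

import json
import boto3

# XML tags separate instructions, a labeled example, and the email to classify.
# The tag names below are illustrative; any consistent, descriptive names work.
prompt = """<instructions>
You are a security analyst. Decide whether the email below is a phishing attempt.
Answer with YES or NO only.
</instructions>

<example-phishing-email>
Subject: Urgent: verify your account
Your account will be suspended unless you confirm your password at hxxp://login-example.test.
</example-phishing-email>

<email-to-classify>
Subject: Team lunch on Friday
Hi all, we are meeting at noon at the usual place. See you there!
</email-to-classify>"""

# Assumes this Claude model ID is enabled in your account and Region.
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

client = boto3.client("bedrock-runtime")
response = client.invoke_model(
    modelId=model_id,
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 10,
        "messages": [{"role": "user", "content": prompt}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])  # Expected: NO

The same structure carries over to the CVE use case: the instructions define attack requirements, the examples are tagged CVE excerpts, and the input section holds the CVE description to classify.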
The challenges
While using Anthropic Claude, Mend.io experienced the flexibility and scalability of the service firsthand. As the analysis workload grew to encompass 70,000 CVEs, they had to optimize how they used the service’s quota and cost management capabilities. When using the on-demand model deployment of Amazon Bedrock across AWS Regions, Mend.io proactively managed the API requests per minute (RPM) and tokens per minute (TPM) quotas by parallelizing model requests and adjusting the degree of parallelization to stay within the quota limits. They also took advantage of the built-in retry logic in the Boto3 Python library to handle occasional throttling seamlessly. For workloads requiring even higher quotas, the Amazon Bedrock Provisioned Throughput option offers a straightforward solution, though it didn’t align with Mend.io’s specific usage pattern in this case.
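As a rough sketch of this pattern (the concurrency level, retry settings, prompt template, and model ID below are placeholder assumptions to tune against your own quotas, not Mend.io’s implementation):

import json
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

# Adaptive retry mode backs off automatically when Amazon Bedrock throttles requests.
client = boto3.client(
    "bedrock-runtime",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # assumed model ID
MAX_WORKERS = 4  # degree of parallelization; tune to stay within RPM/TPM quotas

PROMPT_TEMPLATE = (
    "Answer YES or NO: does the following CVE description contain "
    "attack requirement details?\n<cve-description>\n{cve}\n</cve-description>"
)

def classify_cve(description: str) -> str:
    """Send one CVE description to the model and return its raw text answer."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": PROMPT_TEMPLATE.format(cve=description)}],
    })
    response = client.invoke_model(modelId=MODEL_ID, body=body)
    return json.loads(response["body"].read())["content"][0]["text"]

cve_descriptions = ["A buffer overflow in ... allows remote attackers to ..."]  # your own data
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    answers = list(pool.map(classify_cve, cve_descriptions))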
Although the initial estimate for classifying all 70,000 CVEs was lower, the final cost came in higher because the more complex input data produced longer input and output sequences. This highlighted the importance of comprehensive testing and benchmarking. The flexible pricing models in Amazon Bedrock allow organizations to optimize costs by considering alternative model options or data partitioning strategies, where simpler cases are processed by more cost-effective models while higher-capacity models are reserved for the most challenging instances.
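One way to pressure-test such an estimate before a full run is simple token arithmetic on a benchmark sample. In the sketch below, the per-token prices and output-token count are placeholders, not current Amazon Bedrock pricing; only the prompt token count and request total come from Mend.io’s run.

# Placeholder prices per 1,000 tokens; substitute the current on-demand prices
# for your chosen model and Region from the Amazon Bedrock pricing page.
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_OUTPUT = 0.015

requests = 68_378
input_tokens_per_request = 2_733 + 500   # shared prompt plus an assumed CVE description length
output_tokens_per_request = 100          # assumed; verbose explanations raise this quickly

estimated_cost = requests * (
    input_tokens_per_request / 1000 * PRICE_PER_1K_INPUT
    + output_tokens_per_request / 1000 * PRICE_PER_1K_OUTPUT
)
print(f"Estimated cost: ${estimated_cost:,.2f}")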
When working with advanced language models like those provided by AWS, it’s crucial to craft prompts that align precisely with the desired output format. In Mend.io’s case, their expectation was to receive straightforward YES/NO answers to their prompts, which would streamline subsequent data curation steps. However, the model often provided additional context, justifications, or explanations beyond the anticipated succinct responses. Although these expanded responses offered valuable insights, they introduced unanticipated complexity into Mend.io’s data processing workflow. This experience highlighted the importance of prompt refinement to make sure the model’s output aligns closely with the specific requirements of the use case. By iterating on prompt formulation and fine-tuning the prompts, organizations can optimize their model’s responses to better match their desired response format, ultimately enhancing the efficiency and effectiveness of their data processing pipelines.
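A small post-processing step can absorb this variability. The sketch below simply extracts a YES or NO token from the model’s reply and flags anything else for manual review; the regex and fallback behavior are illustrative choices rather than Mend.io’s pipeline.

import re

def extract_verdict(model_reply: str) -> str | None:
    """Return "YES" or "NO" if the reply contains a clear verdict, otherwise None."""
    match = re.search(r"\b(YES|NO)\b", model_reply.upper())
    return match.group(1) if match else None

verdict = extract_verdict("YES. The description states that local access is required.")
if verdict is None:
    print("Needs manual review")  # route ambiguous replies to a human instead of guessing
else:
    print(verdict)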
Results
Despite the challenges Mend.io faced, their diligent efforts paid off. They successfully identified CVEs with attack requirement details, arming security teams with precious insights for prioritizing vulnerabilities and fortifying defenses. This outcome was a significant achievement, because understanding the specific prerequisites for a vulnerability to be exploited is crucial in assessing risk and developing effective mitigation strategies. By using the power of Anthropic Claude, Mend.io was able to sift through tens of thousands of CVE reports, extracting the nuanced information about attack requirements that would have been nearly impossible to obtain through manual analysis. This feat not only saved valuable time and resources but also provided cybersecurity teams with a comprehensive view of the threat landscape, enabling them to make informed decisions and prioritize their efforts effectively.
Mend.io conducted an extensive evaluation of Anthropic Claude, issuing 68,378 requests (the figures below exclude quota-related throttling). Building on their initial experiment of analyzing a sample of 100 vulnerabilities to understand attack vectors, they could determine the accuracy of Claude’s direct YES or NO answers. As shown in the following table, Anthropic Claude demonstrated exceptional performance, providing direct YES or NO answers for 99.9883% of the requests. In the few instances where a straightforward answer was not given, Anthropic Claude still provided sufficient information to determine the appropriate response. This evaluation highlights Anthropic Claude’s robust capabilities in handling a wide range of queries with high accuracy and reliability.
Character count of the prompt (without CVE-specific details): 13,935
Number of tokens for the prompt (without CVE-specific details): 2,733
Total requests: 68,378
Unexpected answers: 8
Failures (quota limitations excluded): 0
Answer quality success rate: 99.9883%
Future plans
The successful application of Anthropic Claude in identifying attack requirement details from CVE data is just the beginning of the vast potential that generative AI holds for the cybersecurity domain. As these advanced models continue to evolve and mature, their capabilities will expand, opening up new frontiers in automating vulnerability analysis, threat detection, and incident response. One promising avenue is the use of generative AI for automating vulnerability categorization and prioritization. By using these models’ ability to analyze and comprehend technical descriptions, organizations can streamline the process of identifying and addressing the most critical vulnerabilities, making sure limited resources are allocated effectively. Furthermore, generative AI models can be trained to detect and flag potential malicious code signatures within software repositories or network traffic. This proactive approach can help cybersecurity teams stay ahead of emerging threats, enabling them to respond swiftly and mitigate risks before they can be exploited.
Beyond vulnerability management and threat detection, generative AI also holds promise in incident response and forensic analysis. These models can assist in parsing and making sense of vast amounts of log data, network traffic records, and other security-related information, accelerating the identification of root causes and enabling more effective remediation efforts. As generative AI continues to advance, its integration with other cutting-edge technologies, such as ML and data analytics, will unlock even more powerful applications in the cybersecurity domain. The ability to process and understand natural language data at scale, combined with the predictive power of ML algorithms, could revolutionize threat intelligence gathering, enabling organizations to anticipate and proactively defend against emerging cyber threats.
Conclusion
The field of cybersecurity is continually advancing, and the integration of generative AI models like Anthropic Claude, powered by the robust infrastructure of Amazon Bedrock, represents a significant step forward in digital defense. Mend.io’s successful application of this technology in extracting attack requirement details from CVE data is a testament to the transformative potential of language AI in the vulnerability management and threat analysis domains. By using the power of these advanced models, Mend.io has demonstrated that the complex task of sifting through vast amounts of unstructured data can be tackled with precision and efficiency. This initiative not only empowers security teams with crucial insights for prioritizing vulnerabilities, but also paves the way for future innovations in automating vulnerability analysis, threat detection, and incident response. Anthropic and AWS have played a pivotal role in enabling organizations like Mend.io to take advantage of these cutting-edge technologies.
Looking ahead, the possibilities are truly exciting. As language models continue to evolve and integrate with other emerging technologies, such as ML and data analytics, the potential for revolutionizing threat intelligence gathering and proactive defense becomes increasingly tangible.
If you’re a cybersecurity professional looking to unlock the full potential of language AI in your organization, we encourage you to explore the capabilities of Amazon Bedrock and the Anthropic Claude models. By integrating these cutting-edge technologies into your security operations, you can streamline your vulnerability management processes, enhance threat detection, and bolster your overall cybersecurity posture. Take the first step today and discover how Mend.io’s success can inspire your own journey towards a more secure digital future.
About the Authors
Hemmy Yona is a Solutions Architect at Amazon Web Services based in Israel. With 20 years of experience in software development and group management, Hemmy is passionate about helping customers build innovative, scalable, and cost-effective solutions. Outside of work, you’ll find Hemmy enjoying sports and traveling with family.
Tzahi Mizrahi is a Solutions Architect at Amazon Web Services, specializing in container solutions with over 10 years of experience in development and DevOps lifecycle processes. His expertise includes designing scalable, container-based architectures and optimizing deployment workflows. In his free time, he enjoys music and plays the guitar.
Gili Nachum is a Principal Solutions Architect at AWS, specializing in generative AI and machine learning. Gili helps AWS customers build new foundation models and leverage LLMs to innovate in their business. In his spare time, Gili enjoys family time and calisthenics.
Maciej Mensfeld is a principal product architect at Mend, focusing on data acquisition, aggregation, and AI/LLM security research. He’s the creator of diffend.io (acquired by Mend) and Karafka. As a Software Architect, Security Researcher, and conference speaker, he teaches Ruby, Rails, and Kafka. Passionate about OSS, Maciej actively contributes to various projects, including Karafka, and is a member of the RubyGems security team.