OpenSSL SSL_get_shared_ciphers Function Buffer Overflow
Publish date: July 21, 2015
Severity: CRITICAL
CVE Identifier: CVE-2006-3738
DESCRIPTION
Buffer overflow in the SSL_get_shared_ciphers function in OpenSSL has unspecified impact and remote attack vectors involving a long list of ciphers.
Note: The detection logic for the DPI rule is the same for both CVE-2006-3738 and CVE-2007-5135 because they share the same vulnerable condition. As a result, this vulnerability may be over-recommended when no patch is detected for CVE-2007-5135.
TREND MICRO PROTECTION INFORMATION
Failed exploit attempts may crash applications, denying service to legitimate users.
SOLUTION
Trend Micro Deep Security DPI Rule Number: 1000826
Trend Micro Deep Security DPI Rule Name: 1000826 - OpenSSL SSL_get_shared_ciphers Function Buffer Overflow
AFFECTED SOFTWARE AND VERSION
- OpenSSL Project OpenSSL 0.9.7
- OpenSSL Project OpenSSL 0.9.7a
- OpenSSL Project OpenSSL 0.9.7b
- OpenSSL Project OpenSSL 0.9.7c
- OpenSSL Project OpenSSL 0.9.7d
- OpenSSL Project OpenSSL 0.9.7e
- OpenSSL Project OpenSSL 0.9.7f
- OpenSSL Project OpenSSL 0.9.7g
- OpenSSL Project OpenSSL 0.9.7h
- OpenSSL Project OpenSSL 0.9.7i
- OpenSSL Project OpenSSL 0.9.7j
- OpenSSL Project OpenSSL 0.9.7k
- OpenSSL Project OpenSSL 0.9.8
- OpenSSL Project OpenSSL 0.9.8a
- OpenSSL Project OpenSSL 0.9.8b
- OpenSSL Project OpenSSL 0.9.8c