MIT Kerberos 5 KAdminD Server SVCAuth_GSS_Validate Stack Buffer Overflow Vulnerability
Publish date: February 15, 2011
Severity: CRITICAL
CVE Identifier: CVE-2007-4743
Advisory Date: February 15, 2011
DESCRIPTION
The original patch for CVE-2007-3999 in svc_auth_gss.c, part of the RPCSEC_GSS RPC library in MIT Kerberos 5 (krb5) 1.4 through 1.6.2, does not correctly check the buffer length in some environments and architectures. Because this library is used by the Kerberos administration daemon (kadmind) and other applications built on krb5, remote attackers might be able to trigger a stack-based buffer overflow.
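The flaw described above is a bounds-checking error: the credential carried in an RPCSEC_GSS request has an attacker-controlled length, and the length check added by the CVE-2007-3999 patch does not, on every platform, reflect the amount of data actually written into a fixed-size stack buffer. The C sketch below is illustrative only and is not the MIT krb5 source; the buffer sizes, the rpc_cred_t type, the oa_length/oa_base field names, and the validate_cred_*() helpers are assumptions chosen to show the general pattern of an insufficient length check next to a corrected one.

/*
 * Illustrative sketch only -- NOT the actual MIT krb5 source code.
 * It demonstrates the class of flaw described in this advisory: data of
 * attacker-controlled length is copied into a fixed-size stack buffer,
 * and the guarding length check does not match what is actually written.
 * rpc_cred_t, MAX_AUTH_BYTES, RPCHDR_SIZE, HEADER_FIXED, and the
 * validate_cred_*() functions are hypothetical names for this sketch.
 */
#include <stdio.h>
#include <string.h>

#define MAX_AUTH_BYTES 400   /* protocol-level limit used by the flawed check */
#define RPCHDR_SIZE    128   /* size of the fixed stack buffer actually used  */
#define HEADER_FIXED    64   /* header bytes placed in the buffer before the credential */

typedef struct {
    unsigned int  oa_length;  /* attacker-controlled credential length */
    const char   *oa_base;    /* credential bytes from the RPC message */
} rpc_cred_t;

/* Flawed pattern: oa_length is bounded against MAX_AUTH_BYTES (400), but the
 * destination buffer holds only RPCHDR_SIZE (128) bytes and already contains
 * HEADER_FIXED bytes of header, so the check does not cover what is copied. */
static int validate_cred_flawed(const rpc_cred_t *cred)
{
    unsigned char rpchdr[RPCHDR_SIZE];

    if (cred->oa_length > MAX_AUTH_BYTES)   /* insufficient bound */
        return 0;

    memset(rpchdr, 0, HEADER_FIXED);        /* header portion */
    /* The real code would copy the credential here; with oa_length up to
     * 400 the write would run past the end of rpchdr[]:
     * memcpy(rpchdr + HEADER_FIXED, cred->oa_base, cred->oa_length);
     */
    return 1;
}

/* Hardened pattern: bound the length against the space actually available
 * in the destination buffer rather than against a generic protocol limit. */
static int validate_cred_fixed(const rpc_cred_t *cred)
{
    unsigned char rpchdr[RPCHDR_SIZE];

    if (cred->oa_length > (size_t)RPCHDR_SIZE - HEADER_FIXED)
        return 0;

    memset(rpchdr, 0, HEADER_FIXED);
    memcpy(rpchdr + HEADER_FIXED, cred->oa_base, cred->oa_length);
    return 1;
}

int main(void)
{
    static const char cred_bytes[MAX_AUTH_BYTES] = { 0 };
    rpc_cred_t cred = { .oa_length = 300, .oa_base = cred_bytes };

    printf("flawed check accepts 300-byte credential: %d\n",
           validate_cred_flawed(&cred));   /* prints 1: overflow would follow */
    printf("fixed check accepts 300-byte credential:  %d\n",
           validate_cred_fixed(&cred));    /* prints 0: rejected safely */
    return 0;
}

Compiling and running the sketch shows the flawed check accepting a 300-byte credential that the corrected check rejects, which is the hardening approach generally applied to this class of bug: size the validation against the destination buffer, not a protocol constant.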
TREND MICRO PROTECTION INFORMATION
Trend Micro Deep Security shields networks through Deep Packet Inspection (DPI) rules. Trend Micro customers using OfficeScan with the Intrusion Defense Firewall (IDF) plugin are also protected from attacks exploiting this vulnerability. Please refer to the filter number and filter name below when applying the appropriate DPI and/or IDF rules.
SOLUTION
Trend Micro Deep Security DPI Rule Number: 1001089
Trend Micro Deep Security DPI Rule Name: 1001089 - MIT Kerberos kadmind RPC Library RPCSEC_GSS Authentication Buffer Overflow
AFFECTED SOFTWARE AND VERSIONS
- MIT Kerberos 5 1.4
- MIT Kerberos 5 1.4.1
- MIT Kerberos 5 1.4.2
- MIT Kerberos 5 1.4.3
- MIT Kerberos 5 1.4.4
- MIT Kerberos 5 1.5
- MIT Kerberos 5 1.5.1
- MIT Kerberos 5 1.5.2
- MIT Kerberos 5 1.5.3
- MIT Kerberos 5 1.6
- MIT Kerberos 5 1.6.1
- MIT Kerberos 5 1.6.2