IBM Lotus Notes Lotus 1-2-3 Work Sheet File Viewer Buffer Overflows
Publish date: February 15, 2011
Severity: CRITICAL
CVE Identifier: CVE-2007-5909
DESCRIPTION
Multiple stack-based buffer overflows in Autonomy (formerly Verity) KeyView Viewer, Filter, and Export SDK before 9.2.0.12, as used by ActivePDF DocConverter, IBM Lotus Notes before 7.0.3, Symantec Mail Security, and other products, allow remote attackers to execute arbitrary code via a crafted (1) AG file to kpagrdr.dll, (2) AW file to awsr.dll, (3) DLL or (4) EXE file to exesr.dll, (5) DOC file to mwsr.dll, (6) MIF file to mifsr.dll, (7) SAM file to lasr.dll, or (8) RTF file to rtfsr.dll. NOTE: the WPD (wp6sr.dll) vector is covered by CVE-2007-5910.
TREND MICRO PROTECTION INFORMATION
Trend Micro Deep Security shields networks through Deep Packet Inspection (DPI) rules. Trend Micro customers using OfficeScan with the Intrusion Defense Firewall (IDF) plug-in are also protected from attacks that exploit these vulnerabilities. Refer to the filter number and filter name below when applying the appropriate DPI and/or IDF rules.
SOLUTION
Trend Micro Deep Security DPI Rule Number: 1001206
Trend Micro Deep Security DPI Rule Name: 1001206 - IBM Lotus Notes Lotus 1-2-3 Work Sheet File Viewer Buffer Overflows
AFFECTED SOFTWARE AND VERSION
- Autonomy KeyView Export SDK 9.2.0
- Autonomy KeyView Filter SDK 9.2.0
- Autonomy KeyView Viewer SDK 9.2.0
- IBM Lotus Notes 7.0.2
- Symantec Mail Security 5.0
- Symantec Mail Security 5.0.0
- Symantec Mail Security 5.0.0.24
- Symantec Mail Security 5.0.1
- Symantec Mail Security 7.5
- ActivePDF DocConverter 3.8.2.5