Rule Update 24-006 (January 30, 2024)
Publish date: January 30, 2024
DESCRIPTION
* indicates a new version of an existing rule
Deep Packet Inspection Rules:
Trend Micro Mobile Security Server
1011957 - Trend Micro Mobile Security Server Cross-Site Scripting Vulnerability (CVE-2023-41176)
VoIP Smart
1009953* - Digium Asterisk PJSIP In-Dialog MESSAGE Request Denial-of-Service (CVE-2019-12827)
Web Application Tomcat
1011958 - Fortra GoAnywhere MFT Authentication Bypass Vulnerability (CVE-2024-0204)
Web Server HTTPS
1011959 - Trend Micro Apex Central Cross-Site Scripting Vulnerability (CVE-2023-52329)
Web Server Miscellaneous
1011956 - GitLab Privilege Escalation Vulnerability (CVE-2023-7028)
1011948 - Ivanti Avalanche Remote Code Execution Vulnerability (CVE-2023-46263)
Integrity Monitoring Rules:
There are no new or updated Integrity Monitoring Rules in this Security Update.
Log Inspection Rules:
There are no new or updated Log Inspection Rules in this Security Update.
Featured Stories
- Unveiling AI Agent Vulnerabilities Part V: Securing LLM Services. To conclude our series on agentic AI, this article examines emerging vulnerabilities that threaten AI agents, offering proactive security recommendations in areas such as code execution, data exfiltration, and database access.
- Unveiling AI Agent Vulnerabilities Part IV: Database Access Vulnerabilities. How can attackers exploit weaknesses in database-enabled AI agents? This research explores how SQL generation vulnerabilities, stored prompt injection, and vector store poisoning can be weaponized for fraudulent activities.
- The Mirage of AI Programming: Hallucinations and Code Integrity. The adoption of large language models (LLMs) and Generative Pre-trained Transformers (GPTs), such as ChatGPT, by leading firms like Microsoft, Nuance, Mix, and Google CCAI Insights drives the industry toward a series of transformative changes. As these technologies become prevalent, it is important to understand their key behaviors, advantages, and the risks they present.
- Open RAN: Attack of the xApps. This article discusses two O-RAN vulnerabilities that attackers can exploit: one stems from insufficient access control, and the other arises from faulty message handling.