Rule Update
24-009 (February 20, 2024)
Publish date: February 20, 2024
DESCRIPTION
* indicates a new version of an existing rule
Deep Packet Inspection Rules:
Ivanti Avalanche
1011863* - Ivanti Avalanche Authentication Bypass Vulnerability (CVE-2021-22962 & CVE-2023-32566)
Jenkins Remoting
1011976 - Jenkins Arbitrary File Read Vulnerability Over WebSocket (CVE-2024-23897)
Web Application PHP Based
1011974 - GLPI SQL Injection Vulnerability (CVE-2023-46727)
Web Server HTTPS
1011917* - Adobe RoboHelp Server Information Disclosure Vulnerability (CVE-2023-22272)
Web Server Miscellaneous
1011971 - Paessler PRTG Network Monitor Remote Code Execution Vulnerability (CVE-2023-32781)
Integrity Monitoring Rules:
There are no new or updated Integrity Monitoring Rules in this Security Update.
Log Inspection Rules:
There are no new or updated Log Inspection Rules in this Security Update.
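Rule 1011976 provides network-level coverage for CVE-2024-23897 while affected Jenkins controllers are patched; the upstream fix shipped in Jenkins 2.442 and LTS 2.426.3. The sketch below is a supplementary, minimal example (not part of this update) of checking whether a controller already reports a fixed version via the X-Jenkins response header on its unauthenticated login page. The URL is a placeholder, and deployments behind proxies that strip headers will need a different check.

```python
import urllib.request

# Placeholder URL -- point this at your own Jenkins controller.
JENKINS_URL = "https://jenkins.example.com/login"

# Versions that first shipped the fix for CVE-2024-23897
# (Jenkins security advisory of January 24, 2024).
FIXED_WEEKLY = (2, 442)
FIXED_LTS = (2, 426, 3)


def parse_version(value: str) -> tuple:
    """Turn a Jenkins version string such as '2.426.2' into a comparable tuple of ints."""
    return tuple(int(part) for part in value.split(".") if part.isdigit())


def check_jenkins(url: str) -> None:
    # Jenkins reports its version in the X-Jenkins response header on most pages,
    # including the login page, which does not require authentication.
    # Note: a self-signed TLS certificate will make urlopen raise an error.
    with urllib.request.urlopen(url, timeout=10) as response:
        reported = response.headers.get("X-Jenkins")
    if not reported:
        print("No X-Jenkins header returned; version could not be determined.")
        return
    version = parse_version(reported)
    # Heuristic: LTS releases use three version components (e.g. 2.426.3),
    # weekly releases use two (e.g. 2.442).
    fixed = FIXED_LTS if len(version) >= 3 else FIXED_WEEKLY
    status = "patched" if version >= fixed else "vulnerable to CVE-2024-23897"
    print(f"Jenkins {reported}: {status}")


if __name__ == "__main__":
    check_jenkins(JENKINS_URL)
```

A header check like this is only indicative; it does not replace patching the controller or applying the intrusion prevention rule above.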
Featured Stories
- Unveiling AI Agent Vulnerabilities Part V: Securing LLM Services. To conclude our series on agentic AI, this article examines emerging vulnerabilities that threaten AI agents, focusing on proactive security recommendations in areas such as code execution, data exfiltration, and database access.
- Unveiling AI Agent Vulnerabilities Part IV: Database Access Vulnerabilities. How can attackers exploit weaknesses in database-enabled AI agents? This research explores how SQL generation vulnerabilities, stored prompt injection, and vector store poisoning can be weaponized for fraudulent activities.
- The Mirage of AI Programming: Hallucinations and Code Integrity. The adoption of large language models (LLMs) and Generative Pre-trained Transformers (GPTs), such as ChatGPT, by leading firms like Microsoft, Nuance, Mix, and Google CCAI Insights is driving a series of transformative changes across the industry. As these technologies become prevalent, it is important to understand their key behaviors, advantages, and the risks they present.
- Open RAN: Attack of the xApps. This article discusses two O-RAN vulnerabilities that attackers can exploit: one stems from insufficient access control, and the other arises from faulty message handling.