When SecureSky onboards a new customer, one of the first things our team does is a thorough review of the Sentinel environment - what’s ingesting, what’s alerting, and how it’s all configured.
We see a lot of environments. And while every organization is different, certain patterns repeat themselves with uncomfortable consistency. These aren’t edge cases. They’re the default state of most Sentinel deployments that haven’t had dedicated, experienced engineering attention.
Here are the three we see most often - and why each one matters.
We frequently walk into environments where log sources are ingesting at full volume, with no tuning to the organization’s actual auditing needs. Every event, every action, every system - streaming into Sentinel because someone followed the documentation and enabled everything that was available.
The inverse problem is just as common and harder to spot: blindly following default audit settings that leave critical detail out of the logs entirely. Windows Event ID 4688 (process creation) is a useful example. The default audit policy logs the event, but does not capture the command line used to launch the process. That command line is often the most important piece of forensic context available during an investigation or a threat hunt. Without specifically enabling command-line logging in the audit policy, you have a log entry that tells you a process ran, but not what it was told to do. You’re ingesting the event and missing the evidence.
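If you want to check your own coverage, a quick query can show what fraction of recent 4688 events actually carry a command line. This is a minimal sketch in KQL; it assumes Windows Security Events are landing in the standard SecurityEvent table.

```kusto
// Rough coverage check: of the 4688 (process creation) events ingested in
// the last 7 days, how many actually include the command line?
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4688
| summarize TotalEvents = count(),
            WithCommandLine = countif(isnotempty(CommandLine)) by Computer
| extend PctWithCommandLine = round(100.0 * WithCommandLine / TotalEvents, 1)
| order by PctWithCommandLine asc
```

Hosts near the top of that list are logging process creation without capturing what was actually run - exactly the gap described above.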
Too much data and blind spots in data are two sides of the same problem: log ingestion that wasn’t designed - it was just turned on.
The result is a bloated data footprint paired with invisible gaps: high ingestion costs, slow query performance, a signal-to-noise ratio that makes meaningful analysis almost impossible, and specific forensic details that simply aren’t there when you need them.
Good log ingestion isn’t about volume. It’s about curation. Each data source should be scoped to the events your organization actually needs to retrace activity accurately - and nothing more. That requires a deliberate review of what each source produces, what’s meaningful in your environment, and what’s just noise that will inflate your Azure bill without improving your security posture.
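As a starting point for that review, here’s a sketch of a first pass at which tables drive billable ingestion, using the built-in Usage table (its Quantity column is reported in MB). Which of those tables are worth their cost is, of course, an environment-specific judgment.

```kusto
// First-pass curation view: billable ingestion by table over the last 30 days.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| order by IngestedGB desc
```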
More data doesn’t mean more security. Often it means less.
We see environments where the full Microsoft rule library has been enabled wholesale. The thinking is intuitive: more rules means more coverage. In practice, it means something different.
Every rule that fires generates an alert - and, by default, an incident for analysts to triage. If that rule isn’t calibrated to the environment - if it triggers on activity that is completely normal for that organization - the result isn’t better security. It’s alert fatigue. Analysts get buried. Response slows. Real incidents get lost in the noise.
Every analytics rule needs to be evaluated against the specific environment it’s protecting. When our engineers review a new deployment, we’re asking the same question for every rule: should this ever fire in this environment? If the honest answer is yes but it fires too broadly, the rule needs tuning. If the answer is no, the rule should be disabled until conditions justify it.
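One way to ground that question in data is to look at how incidents actually closed. The sketch below assumes analysts are setting classifications when they close incidents; if they aren’t, it will understate the noise.

```kusto
// Which rules generate incidents that mostly close as false or benign positives?
// arg_max keeps only each incident's most recent state.
SecurityIncident
| where TimeGenerated > ago(90d)
| summarize arg_max(TimeGenerated, Title, Status, Classification) by IncidentNumber
| where Status == "Closed"
| summarize Incidents = count(),
            FalseOrBenign = countif(Classification in ("FalsePositive", "BenignPositive")) by Title
| extend NoisePct = round(100.0 * FalseOrBenign / Incidents, 1)
| order by NoisePct desc, Incidents desc
```

Rules sitting at the top of that list with a high incident count are prime candidates for tuning - or for disabling until conditions justify them.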
A well-tuned Sentinel environment with 40 precisely calibrated rules will outperform an out-of-the-box deployment with 400 rules running blind every time.
A SIEM is a Security Information and Event Management platform. That name is descriptive on purpose. Sentinel’s job is to aggregate security data, correlate it across sources, generate incidents for analysts to triage, and provide the workflow and automation infrastructure to act on those incidents. It does that job extraordinarily well.
What it is not designed to do is serve as a primary detection engine for every threat type in your environment. And yet we regularly see organizations trying to use it that way, asking Sentinel to do work that belongs in purpose-built tooling.
The examples are consistent. We see organizations ingesting raw web server logs and writing SQL injection detection rules directly in Sentinel, when that work belongs in a Web Application Firewall that was specifically designed to parse and detect application-layer attacks. We see database audit logs being fed into Sentinel for attack detection, when a database activity monitoring solution sitting closer to the data source would catch those patterns with far greater accuracy and context. We see teams trying to identify malware through raw OS event logs, when an endpoint detection and response solution is purpose-built to identify suspicious process behavior, file activity, and lateral movement at the host level.
In each case, the logic is the same: the organization either doesn’t have the right upstream tool or doesn’t trust it, so Sentinel becomes the catch-all. We understand the impulse. But it creates compounding problems - the detection logic is harder to write, more expensive to run, and fundamentally less accurate than tooling designed for that specific job. Beyond the initial build, there’s a maintenance reality that rarely gets factored in: attack techniques evolve, and detection logic that worked last year may miss a variant this year. Keeping custom SIEM rules current against an ever-shifting threat landscape is a near-impossible ongoing commitment. Purpose-built tools like EDRs and WAFs have dedicated engineering teams doing exactly that work on your behalf, continuously, as part of what you’re already paying for.
The concept we apply here is simple: aces in their places. Your EDR solution detects endpoint-level threats. Your WAF detects application-layer attacks. Your database monitoring catches data-tier anomalies. Your SIEM correlates across all of them, generates actionable incidents, and drives the response workflow. Each tool does what it was designed to do.
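To make that division of labor concrete, here’s a minimal sketch of the SIEM doing its actual job: taking a high-severity alert an upstream tool has already produced and enriching it with identity context from another source. Treating CompromisedEntity as a user principal name is an assumption - how that field is populated varies by connector.

```kusto
// Correlation, not detection: enrich an existing alert with sign-in context
// instead of re-deriving the detection inside the SIEM.
// Assumes CompromisedEntity holds a UPN, which varies by connector.
SecurityAlert
| where TimeGenerated > ago(1d)
| where AlertSeverity == "High"
| extend Upn = tolower(CompromisedEntity)
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(1d)
    | project Upn = tolower(UserPrincipalName), SigninTime = TimeGenerated,
              IPAddress, Location, ResultType
) on Upn
| project AlertName, ProviderName, Upn, SigninTime, IPAddress, Location, ResultType
```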
When organizations collapse those boundaries (usually because of budget constraints or a reluctance to add another tool), they end up with a SIEM that’s doing a poor job of something it wasn’t built for, while introducing processing inefficiencies, poor-quality analytics, unreliable incidents, and coverage gaps that are hard to see because the SIEM appears to be “covering” that layer.
When we encounter this pattern and explain it, customer reactions tend to fall into two camps: those who accept the explanation and work toward the right architecture, and those who say “I hear you, but we don’t have a WAF” (or a proper EDR solution, or a database monitoring tool, or [fill in the blank]). Both are honest responses. The difference is in understanding the risk you’re accepting when you ask a tool to work outside its design envelope, and being clear-eyed about the gaps that introduces.
These three problems share something important: none of them are signs of negligence. They’re the predictable result of standing up a sophisticated platform without the engineering depth to configure it correctly from day one.
Microsoft’s documentation is thorough. But documentation tells you how to enable a feature, not whether you should, or how it applies to your specific organization.
That’s the gap. And it’s the gap SecureSky fills - not just at onboarding, but on an ongoing basis as your environment evolves.
In Part 2 of this series, we go deeper: what a Sentinel environment looks like when it’s only operating at half its potential capacity, why that happens, and what the path forward looks like.
SecureSky is a Microsoft-recognized MXDR provider specializing in managed detection and response and Microsoft Sentinel. Contact us to talk through your current deployment.