
Intercept X MITRE ATT&CK Evaluation Performance?

After sitting out MITRE ATT&CK Evaluation rounds 1 and 2, Sophos did participate in round 3, but its results appear to place near the bottom of the participants.

I do not purport to be an expert on the MITRE ATT&CK Evaluation process or its relevance to any specific customer base, but I am curious how Sophos would respond to customers or prospective clients who suggested the results were indicative of product quality.

I'm sure the question comes up. How does Sophos assess its performance in the evaluation?



  • I would have thought the signal-to-noise ratio is everything here.

    I could create a product that recorded and alerted on every event. I could then point to every event as having been witnessed. Would that make a product that is useful and provides actionable alerts? Probably not (see the sketch below).

    Just my 2 cents.
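
    To make that concrete, here's a minimal Python sketch of the tradeoff. The numbers are entirely invented and the `alert_stats` helper is hypothetical; the point is just that a tool alerting on everything it records scores perfect visibility but near-zero precision, so its alerts aren't actionable.

    ```python
    # Hypothetical illustration: "visibility" (recall) vs. signal-to-noise
    # (precision) for two imaginary endpoint tools. All numbers are invented.

    def alert_stats(attack_events: int, alerted_attack: int, alerted_benign: int):
        """Recall = share of attack events alerted on;
        precision = share of alerts that were actually attack events."""
        recall = alerted_attack / attack_events
        precision = alerted_attack / (alerted_attack + alerted_benign)
        return recall, precision

    # Tool that alerts on literally everything it records:
    noisy = alert_stats(attack_events=50, alerted_attack=50, alerted_benign=100_000)

    # Tool that filters aggressively before alerting:
    selective = alert_stats(attack_events=50, alerted_attack=45, alerted_benign=30)

    print(f"noisy:     visibility={noisy[0]:.0%}, precision={noisy[1]:.3%}")
    print(f"selective: visibility={selective[0]:.0%}, precision={selective[1]:.0%}")
    # noisy:     visibility=100%, precision=0.050%
    # selective: visibility=90%, precision=60%
    ```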

  • I am not suggesting that the MITRE ATT&CK Evaluation results correlate directly with product effectiveness. I am suggesting that they appear to be becoming a de facto standard for assessing visibility into documented attack vectors. I would also suggest that, in the long run, alignment with organizations like MITRE might prove a better benchmark than the assessments of "independent" organizations like Gartner, whose weaknesses are well known.

  • Just so, Patrick. This is why we decided to participate in the most recent round, even though we knew we couldn't configure the product for optimal "success" in the evaluation. As User930 points out, success is a bit hard to gauge anyway, since alerting on everything isn't necessarily a good thing. MITRE Engenuity itself recommends digging into the results to better understand what information was presented and how, rather than comparing based on percentage visibility/telemetry rates.
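
    As a toy illustration of that last point (invented data, not actual evaluation results): two tools can post the same headline visibility percentage while producing very different detection quality, which is why reading the per-substep detection categories matters more than the headline rate.

    ```python
    # Invented data: per-substep detection categories for two imaginary tools.
    # MITRE ATT&CK Evaluations label each substep with a detection category
    # (e.g., Technique or Telemetry, or no detection); the labels below borrow
    # that idea, but the counts are made up.
    from collections import Counter

    tool_a = ["Technique"] * 20 + ["Telemetry"] * 70 + ["None"] * 10
    tool_b = ["Technique"] * 70 + ["Telemetry"] * 20 + ["None"] * 10

    for name, results in (("Tool A", tool_a), ("Tool B", tool_b)):
        visibility = sum(r != "None" for r in results) / len(results)
        print(name, f"visibility={visibility:.0%}", dict(Counter(results)))

    # Both tools report 90% visibility, but Tool B surfaced most substeps as
    # analytic (Technique) detections rather than raw telemetry.
    ```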
