
Intercept X MITRE ATT&CK Evaluation Performance?

After not participating in MITRE ATT&CK Evaluation rounds 1 and 2, Sophos did participate in round 3, but its results appear to be near the bottom of the participants.

I do not purport to be an expert on the MITRE ATT&CK Evaluation process or its relevance to any specific customer base, but I am curious what Sophos's response would be to customers or prospective clients if they were to suggest the results were indicative of the product quality.

I'm sure the question comes up. How does Sophos assess its performance in the evaluation?



This thread was automatically locked due to age.
  • Hi,

    Allow us to check on this, and we'll get back to you.

    Glenn ArchieSeñas (GlennSen)
    Global Community Support Engineer

    The New Home of Sophos Support Videos!  Visit Sophos Techvids
  • Do you have any links to this? I would like to take a look at it.


  • If a post solves your question use the 'Verify Answer' button.

    Ryzen 5600U + I226-V (KVM) v21 GA @ Home

    Sophos ZTNA (KVM) @ Home

  • Hi Patrick,

    As you say, this was our first time participating in the ATT&CK Evaluation. Even though our product wasn't really optimized for this form of testing, we still demonstrated an ability to disrupt, detect, and provide visibility into large portions of the attack chain. In other words, a Sophos Intercept X w/EDR customer in a real-world situation would have been protected and would have been able to use the product to investigate what was happening.

    We learned a lot from this process, including areas to improve the product's real-world capabilities (many of which are already implemented) and things we need to do to make the product work better for future rounds of the evaluation. We're proud to have participated, and we look forward to doing so again in the future.

    Regards,
    Maxim

  • Well said, Patrick. I'd much rather see "most" items protected "well" than "all items" protected "somewhat". Knowing which areas the product(s) are weaker in shows where the opportunity to improve lies.

  • Maxim,

    Thanks for the background. As a customer, I have not had any complaints about our Sophos protection, but during annual renewal our CTO requests that we conduct a vendor review to make sure we remain up to date with current market offerings. One of the latest criteria is MITRE ATT&CK Evaluation results, which honestly seems like a perfectly valid baseline given MITRE's independent position.

    At the end of the day it is about protection results, but as a customer (or prospective customer) I need Sophos to track closely with these emerging de facto baselines, as other competitors/leaders in the space already do.

    It seems you agree about the reasonableness of these expectations and are taking steps to meet them.

  • I would have thought the signal-to-noise ratio is everything here.

    I could create a product that recorded and alerted on every event, and then point to every event as having been witnessed. Does this create a product that is useful and provides actionable events? Probably not.

    Just my 2 cents.
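
    The trade-off described above can be made concrete with a toy precision/recall calculation. Everything here is hypothetical (the event counts, the two detectors, and the helper function are invented for illustration, not taken from any MITRE evaluation data):

    ```python
    # Sketch of the signal-to-noise point: a detector that alerts on every
    # event achieves perfect "visibility" (recall) but terrible precision,
    # so its alerts are not actionable. All numbers are made up.

    def precision_recall(alerts, malicious):
        """Precision and recall for a set of alerted event IDs."""
        true_positives = len(alerts & malicious)
        precision = true_positives / len(alerts) if alerts else 0.0
        recall = true_positives / len(malicious) if malicious else 0.0
        return precision, recall

    # Hypothetical workload: 10,000 events, 20 of which are malicious.
    malicious = set(range(20))
    all_events = set(range(10_000))

    # Detector A alerts on everything; Detector B alerts selectively,
    # catching 15 of the 20 malicious events with one false alarm.
    noisy = all_events
    selective = set(range(15)) | {9_001}

    p_a, r_a = precision_recall(noisy, malicious)
    p_b, r_b = precision_recall(selective, malicious)
    print(f"alert-on-everything: precision={p_a:.3f}, recall={r_a:.2f}")
    print(f"selective detector:  precision={p_b:.3f}, recall={r_b:.2f}")
    ```

    The noisy detector "witnesses" every malicious event, yet an analyst would have to wade through thousands of alerts to find the 20 that matter, which is the sense in which percentage-visibility comparisons can mislead.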

  • I am not suggesting that the MITRE ATT&CK Evaluation results correlate directly with product effectiveness. I am suggesting that they appear to be becoming a de facto standard for assessing visibility into documented attack vectors. I would also suggest that, in the long run, alignment with organizations like MITRE might wind up being a better benchmark than the assessments of "independent" organizations like Gartner, whose weaknesses are well known.

  • Just so, Patrick. This is why we decided to participate in the most recent round, even though we knew we couldn't configure the product for optimal "success" in the evaluation. As User930 points out, success is a bit hard to gauge anyway, since alerting on everything isn't necessarily a good thing. MITRE Engenuity itself recommends digging into the results to better understand what information was presented and how, rather than comparing based on percentage visibility/telemetry rates.