
How to download logs for Logging & Reporting > Network Usage?

Hello

I have noticed an anomalous spike in network usage on one particular day last week in the Network Usage graph (under Logging & Reporting > Network Usage). I am interested in examining this further, but I am unsure where I can find the actual log files.

I have checked Logging & Reporting > View Log Files > Archived Log Files, but I am not sure which log file corresponds to the Network Usage graph.

What should I be looking for if I want to see the logs behind these graphs?

Thank you




This thread was automatically locked due to age.
  • Hi Ian,

    I don't think there are actual log files for traffic, as these flows are not logged as events.

    However, you can get more information about bandwidth usage under Logging & Reporting > Network Usage > Bandwidth Usage.

    There you have several options to filter by clients, services, applications, etc. within the desired time frame.

    Hope this helps.

    Regards,

    Karl-Heinz

  • Hi Ian and a belated welcome to the UTM Community!

As Karl-Heinz said, there are no log files.  The suggestions he made are ways to get information from the PostgreSQL databases that are used to store this information.  If you're handy with PostgreSQL, you can query these databases yourself.
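
    If you want to poke around yourself, here is a rough Python sketch of what such a query might look like.  The database, table and column names below (reporting, accounting, log_time, raw_in, raw_out) are placeholders rather than the actual UTM schema - list the real databases and tables with psql (\l and \dt) on your box and adjust accordingly.

        # Sketch only: sum daily traffic from a PostgreSQL reporting database.
        # All database/table/column names here are PLACEHOLDERS -- check the
        # real schema on the UTM first (psql: \l, \dt, \d <table>).
        import psycopg2

        conn = psycopg2.connect(dbname="reporting", user="postgres",
                                host="/var/run/postgresql")
        cur = conn.cursor()

        # Hypothetical accounting table with a timestamp and byte counters.
        cur.execute("""
            SELECT date_trunc('day', log_time) AS day,
                   sum(raw_in + raw_out)       AS bytes
            FROM   accounting
            WHERE  log_time >= now() - interval '14 days'
            GROUP  BY day
            ORDER  BY day;
        """)

        # Print one line per day, converted to GiB, to spot the spike.
        for day, total_bytes in cur.fetchall():
            print(day.date(), f"{total_bytes / 1024**3:.2f} GiB")

        cur.close()
        conn.close()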

    Cheers - Bob

    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
For one-time traffic spikes, I would start by looking at the automatic update processes - Windows Update, Apple Software Update, Adobe updates, Java updates, antivirus updates, etc.  These will almost always operate over HTTP or HTTPS.  We will hope that the spike was caused by one of these and not by bad guys exfiltrating your data.

    If you have a virtual desktop environment in which PCs revert to a reference image daily, these automated updates need to be completely suppressed to prevent them from being reapplied every day.  We have been burned more than once by these processes.

Nearly all of your Internet traffic will use HTTPS, so the web filtering log is the place to start looking.  Fortunately, the built-in reporting is good enough to give you some valuable clues, or possibly a complete answer.  However, some of your traffic exceptions will also disable logging, which creates a hole in your data.  You may want to review the exception definitions to see whether you want to enable more extensive logging for next time.

Web filtering also has the benefit of tracking data usage by user.  Your biggest user may be the guy who listens to a video feed of his favorite news channel all day.  You would need a lot of additional news-followers to create a spike, so you are more likely to see a user-created blip if there is a big news story that has everyone glued to a news feed.

    To get a complete view of network usage, you would have to sum the bandwidth data from all of the module logs, since each packet flows through one and only one module.   I think all of the log files contain UTM-standard log entries which include a size field indicating the number of bytes handled by that log entry.   If you have the right parsing tools, you can extract the source, destination, and size, then sum across any desired time period.  HTTPS without inspection writes one log entry at the end of a session, which summarizes the entire session.   Because users may have long-running HTTPS sessions, you will lose some precision if you try to compute aggregate traffic over a short time period.   Start by looking at traffic volumes over full days or longer, then narrow it down once you know what you need to find.

Several of the log files contain multiple record formats, with some entries appearing in the UTM-standard format and other entries in whatever format is generated by the embedded open-source product.  Additionally, any log entry can be split onto a continuation line if it is too long.  Consequently, you need log parsing logic that can separate the UTM-standard entries from the ones that do not matter for this purpose.  I don't think the size field ever flows onto a continuation line, so you may be able to ignore that complexity for purposes of this problem.
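
    As a rough illustration of that kind of parsing, here is a small Python sketch.  It assumes the UTM-standard entries are key="value" pairs that include srcip, dstip and size fields (verify those names against a few lines of your own logs before relying on it), skips anything without a size field, and sums bytes per source/destination pair for the file you feed it.

        # Sketch: sum the size field from UTM-style key="value" log entries,
        # grouped by source/destination.  The field names (srcip, dstip, size)
        # are assumptions -- check them against your own log files first.
        import gzip
        import re
        from collections import defaultdict

        KV = re.compile(r'(\w+)="([^"]*)"')

        def sum_traffic(path):
            totals = defaultdict(int)
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt", errors="replace") as fh:
                for line in fh:
                    fields = dict(KV.findall(line))
                    # Continuation lines and entries written by embedded
                    # open-source tools won't carry a size field, so they
                    # are skipped here.
                    if "size" not in fields:
                        continue
                    key = (fields.get("srcip", "?"), fields.get("dstip", "?"))
                    try:
                        totals[key] += int(fields["size"])
                    except ValueError:
                        pass
            return totals

        if __name__ == "__main__":
            # Path is illustrative: point it at an archived log file for the
            # day of the spike.
            totals = sum_traffic("http.log.gz")
            top = sorted(totals.items(), key=lambda kv: -kv[1])[:20]
            for (src, dst), nbytes in top:
                print(f"{src:>15} -> {dst:<15} {nbytes / 1024**2:10.1f} MiB")

    Run it once per module log that matters for the day in question and add up the per-file totals if you want the complete picture described above.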

    Hope this helps.