
Shadow IT

Hi,

I am conducting a Shadow IT audit within my organisation. One of the data sources I am going to use is logs (which logs is not yet determined) from our deployed UTM proxy server. The network guys have given me an example of a log to assess whilst they prepare a log dump for me. What I want to do, although I lack the knowledge to ask pertinent and precise questions, is to review the proxy logs and determine two things:

  1. the URL accessed by the staff member (this is captured in the example provided to me by my network guys)
  2. whether the staff member logged into / authenticated to the website when they accessed it. Basically, I would like to understand and define the criteria to filter these logs to show only instances where a staff member logged into a website, and to disregard all instances where staff members only browsed a website.

I am unsure whether that activity could be caught by the proxy logs at all and, if it could, which log or logs would capture it.

I realise this is very vague and I will provide more information if required, but as a hypothetical question: could such an action - a user accessing a website and then logging into that website - be captured by the logs created by a UTM proxy server?

Thanks in advance.



  • Hi and welcome to the UTM Community!

    I can't think of an easy way to do this. You could certainly list all of the log lines that have the word "login" in them somewhere, but in an organization of any size, that will yield thousands of lines. In many cases, there won't be any FQDN listed, just the path, although the referrer field might show the domain being accessed. Even limiting the lines to those containing url="https://login would leave an insurmountable task of identification. Even then, I don't see how you could identify the logoff without parsing the log manually. Using the logs alone, it would only be practical if you were targeting a specific individual or a specific FQDN that you knew to require a manual login.
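
    Purely as an illustration of that kind of keyword filter, a minimal Python sketch is below. It assumes the web filter log has been exported as plain text and that each request line carries a url="..." field; the file name and keyword list are hypothetical.

    ```python
    import re

    # Hypothetical exported UTM web filter log; the url="..." field format is an assumption.
    LOG_FILE = "webfilter.log"
    KEYWORDS = ("login", "logon", "signin", "auth")

    url_pattern = re.compile(r'url="([^"]+)"')

    with open(LOG_FILE, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = url_pattern.search(line)
            if not match:
                continue
            url = match.group(1)
            # Keep only lines whose URL hints at a login/authentication page.
            if any(keyword in url.lower() for keyword in KEYWORDS):
                print(url)
    ```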

    You would have more luck with Reporting, but that would require you to have access to WebAdmin for the UTM as a "Web Protection Auditor" and some facility with using Web Protection Reporting. If you were also a Network Protection Auditor, you could learn from that database as well.

    If you have someone gifted in PostgreSQL, you could have that person learn the structure of the databases and create reports to gather the information you seek.
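
    If someone does go down the PostgreSQL route, the report might end up looking something like the sketch below. This is purely hypothetical: the connection details and the table and column names are placeholders, since the actual reporting schema would have to be learned from the UTM itself.

    ```python
    import psycopg2

    # All connection details and table/column names are hypothetical placeholders;
    # the real UTM reporting schema has to be discovered on the appliance itself.
    connection = psycopg2.connect(host="utm.example.local", dbname="reporting",
                                  user="auditor", password="secret")

    query = """
        SELECT url, COUNT(*) AS hits
        FROM web_requests              -- placeholder table name
        WHERE url ILIKE '%login%'      -- crude keyword match, as above
        GROUP BY url
        ORDER BY hits DESC
        LIMIT 100;
    """

    with connection, connection.cursor() as cursor:
        cursor.execute(query)
        for url, hits in cursor.fetchall():
            print(f"{hits:>6}  {url}")
    ```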

    Net Net - you've imagined an expensive undertaking.

    Cheers - Bob

     
    Sophos UTM Community Moderator
    Sophos Certified Architect - UTM
    Sophos Certified Engineer - XG
    Gold Solution Partner since 2005
    MediaSoft, Inc. USA
  • The vast majority of business websites now use HTTPS, and essentially 100% of business service sites will use HTTPS for a login session. With decrypt-and-scan disabled, only the FQDN is visible. So as a first step, you will need to deploy HTTPS inspection. Then the logs might have the detail you would require to have any hope of distinguishing "login sessions" from "browsing sessions".

  • Hi Balfson, 

     

    I just wanted to say thanks for your response. It was pretty useful, and we have started to go down that route using keywords that may indicate that the URL involved login activity.

    We created scripts in ACL / AX Server to import the log files, extract URLs, create timestamps and then filter using keywords (such as login, logon, app, auth, etc.). Whilst there is a huge amount of source data, the output file, once we run the script, is pretty manageable, and even more so when we collate based on instance occurrence for unique URLs.
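
    For anyone who wants to try the same idea without ACL / AX Server, a hypothetical Python sketch of that kind of pipeline is below. The file names, the url="..." and time="..." field formats, and the keyword list are assumptions for illustration rather than our actual script.

    ```python
    import csv
    import re
    from collections import Counter

    # Hypothetical exported proxy log and field formats; adjust to the real log layout.
    LOG_FILE = "proxy_log_dump.log"
    OUTPUT_FILE = "login_candidates.csv"
    KEYWORDS = ("login", "logon", "signin", "auth", "app")

    url_pattern = re.compile(r'url="([^"]+)"')
    time_pattern = re.compile(r'time="([^"]+)"')   # assumed timestamp field

    counts = Counter()
    first_seen = {}

    with open(LOG_FILE, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            url_match = url_pattern.search(line)
            if not url_match:
                continue
            url = url_match.group(1)
            if not any(keyword in url.lower() for keyword in KEYWORDS):
                continue
            counts[url] += 1
            time_match = time_pattern.search(line)
            if time_match and url not in first_seen:
                first_seen[url] = time_match.group(1)

    # Collate by instance occurrence for each unique URL, most frequent first.
    with open(OUTPUT_FILE, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["url", "occurrences", "first_seen"])
        for url, occurrences in counts.most_common():
            writer.writerow([url, occurrences, first_seen.get(url, "")])
    ```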

    It has allowed us to take millions of log entries and process them in an hour or two to create an output that can then be further filtered to identify items of interest. A key element in gaining traction for an implemented and proactive Shadow IT initiative within an organisation is the ability to quantify the usage and then risk assess it. Our goal is not to name and shame users that have used such services, but rather to identify the need to create a process that can take those useful systems (we are doing surveys with business users as well) and fold them into the IT security software review process so they can become whitelisted and potentially included in an inventory of acceptable services. Increased instances of shadow IT often indicate that IT is not providing services that the business uses / needs, and once those services have been assessed from both a security and a GDPR perspective, they are likely to be managed centrally and securely (fingers crossed).

    We are also running activity reports against specific items of interest to quantify trends of access over periods of time - again, this then allows us to map that to other external data such as projects or events captured by other security tools.
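
    As a hypothetical illustration of that kind of trend report, daily hit counts for a single domain of interest could be derived along these lines (again, the log file name, the target domain and the timestamp format are assumptions):

    ```python
    import re
    from collections import Counter

    # Hypothetical: count daily hits for one domain of interest in an exported proxy log.
    LOG_FILE = "proxy_log_dump.log"
    TARGET = "example-saas-service.com"                              # placeholder domain
    time_pattern = re.compile(r'time="(\d{4}-\d{2}-\d{2}) [^"]*"')   # assumed timestamp format

    daily_hits = Counter()
    with open(LOG_FILE, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            if TARGET not in line:
                continue
            match = time_pattern.search(line)
            if match:
                daily_hits[match.group(1)] += 1

    for day in sorted(daily_hits):
        print(day, daily_hits[day])
    ```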

    Anyway, I just wanted to reply in case any other IT auditors stumble upon this post in a similar situation to the one I found myself in.

     

    G