
--args-file question

Hello

I would like to know whether there are any limits when using --args-file. For example, can I use a file with 1 million rows of /home locations?

Thank you



  • Hi  

    Could you please clarify what you mean by 1 million rows of /home locations? Are you referring to 1 million files, or to a single file that contains 1 million rows of text?

    Regards,

    Jasmin
    Community Support Engineer | Sophos Support


  • Hello

    In the /home/user/public_html directories I have about five hundred users. Instead of executing

    savscan /home/*/public_html

    I wrote a PHP script that creates a file (e.g. /tmp/myfile) with the list of files modified in the last 24 hours in /home/*/public_html,
    and I want to scan that list using --args-file=myfile with the -f and -archive options for each file in the list:

    savscan --args-file=/tmp/myfile
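
    For illustration only, an equivalent shell sketch of building that list with find (the list is actually produced by the PHP script described above; the paths and the 24-hour window are taken from that description):

    # List files under /home/*/public_html modified in the last 24 hours,
    # one path per line, into the args file used by savscan above
    find /home/*/public_html -type f -mtime -1 > /tmp/myfile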

    Since various users run webmail or other dynamic scripts that constantly add new files, the
    generated file /tmp/myfile is huge.

    I tested this approach with only a subset of these users, and I noticed that as the number of files listed in the args file grows,
    savscan execution also seems to get slower. Is that expected? Does savscan --args-file=myfile run more slowly the larger the file is?
    It seems to become very slow above 25000 files, so I am considering using the Linux split command, as in the sketch below.
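
    For illustration only, a minimal shell sketch of that split approach (the 5000-line chunk size and the part-file names are assumptions; the savscan invocation mirrors the one above):

    # Split the large args file into smaller chunks and scan each chunk separately
    split -l 5000 /tmp/myfile /tmp/myfile.part.
    for part in /tmp/myfile.part.*; do
        savscan --args-file="$part"
    done
    # Remove the temporary chunks afterwards
    rm -f /tmp/myfile.part.*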


    Thank you
