talpa-deny errors preventing backups on ubuntu 20.04

I run the free Sophos Anti-Virus (SAV 9.16.2, Engine 3.79.0, Data 5.77) with Talpa. Sophos Anti-Virus is active and on-access scanning is running.

I use 7z to back up my files on Ubuntu 20.04. Some of these files belong to Wine, and they trigger talpa-deny errors in the system log, causing the backup to report timeouts, lock up, and have to be stopped. However, I get no virus alerts for these talpa-deny events.

An example from the syslog (with my directory replaced by ........) is:

kernel: [ 5994.994356] talpa-deny: Timeout occurred while opening ......../.wine/drive_c/windows/syswow64/mmdevldr.vxd on behalf of process 7z[7429/7429] owned by 1000(1000)/1000(1000) <62>

How can I
a) detect this better than just getting a syslog error, and
b) prevent it from happening?
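For reference, the only "detection" I have today amounts to scanning kernel-log lines for the talpa-deny timeout message and pulling out the affected path. A minimal sketch (the sample line and path below are illustrative, not my real log):

```python
import re

# Pattern for the talpa-deny timeout message seen in the syslog excerpt below.
TALPA_TIMEOUT = re.compile(r"talpa-deny: Timeout occurred while opening (\S+)")

def denied_paths(lines):
    """Return the file paths named in talpa-deny timeout messages."""
    return [m.group(1) for line in lines if (m := TALPA_TIMEOUT.search(line))]

# Illustrative line; in practice the lines would come from /var/log/syslog
# or `journalctl -k`.
sample = [
    "kernel: [ 5994.994356] talpa-deny: Timeout occurred while opening "
    "/home/user/.wine/drive_c/windows/syswow64/mmdevldr.vxd "
    "on behalf of process 7z[7429/7429] owned by 1000(1000)/1000(1000) <62>",
]
print(denied_paths(sample))
# -> ['/home/user/.wine/drive_c/windows/syswow64/mmdevldr.vxd']
```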

  • Hello pastim,

it seems this happens neither deliberately nor negligently, even though it is reproducible.

    The actual culprit is complexity: as said, perfectly legitimate actions can nevertheless result in conflicts that you can only handle with "graceful errors". I'll try to give a (hopefully simple, and simplified) example of how things can inherently go wrong.

    Sparse files: you write an application that maintains a table of 1k blocks and request, say, 1k of these blocks. The file system has enough space available, so you get it. For whatever reason you start with the last block. The underlying routines write one 1k block, and when you close the file only that one block is physically allocated. If you then open the file and read it sequentially with "high-level" calls, nothing is actually read from disk until you get to the last block; for all the others you get a buffer filled with whatever is considered "empty". To your application the file appears to be 1M in size. Now you start writing more blocks within that 1M and suddenly you get an out-of-space error. Huh? Your application might not be prepared for this - you already had 1M and you only used space from within it. But the physical file system no longer had that space. This is an inherent risk in the concept of sparse files. Then why use sparse files at all? Tables indexed with hashes are mostly empty; if you really allocated them you'd waste a lot of space. You could instead add logic that keeps the table dense, but that adds extra processing, because the file system must perform a similar task to find physical space. It is a compromise that works most of the time, but not always.
    What actually happens in a file system is much more complex, and there is more than one way to do certain things. Under certain circumstances conflicts arise that are not really anyone's fault. And I haven't even mentioned the complexity of process scheduling, asynchronous operation and all the rest. There are always situations you can only work around - combinations of legitimate actions that conflict.
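The sparse-file scenario above can be sketched in a few lines of Python (the sizes are illustrative; note that POSIX `st_blocks` counts 512-byte units):

```python
import os
import tempfile

BLOCK = 1024     # the application's 1k block size
NBLOCKS = 1024   # 1k blocks -> apparent size of 1M

# Write only the LAST block; everything before it becomes a "hole".
fd, path = tempfile.mkstemp()
os.lseek(fd, (NBLOCKS - 1) * BLOCK, os.SEEK_SET)
os.write(fd, b"\xff" * BLOCK)
os.close(fd)

st = os.stat(path)
apparent = st.st_size            # what the application sees: 1M
allocated = st.st_blocks * 512   # what the file system actually reserved

# Reading a hole returns "empty" (zero-filled) data without touching the disk.
with open(path, "rb") as f:
    first = f.read(BLOCK)

# On a sparse-capable file system, `allocated` is far smaller than `apparent`.
print(apparent, allocated, first == b"\x00" * BLOCK)

os.unlink(path)
```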


  • Hmm. Well, I find all that a little hard to believe. In all my 47 years of using and programming computers I've not encountered similar issues except with Sophos and Talpa. All I am doing is copying a file from one place to another.

    The only files that seem to have problems are .wine files. I wonder whether Sophos is getting tangled up with what appear to be Windows files but are actually on a Linux system.

    You may, of course, be perfectly correct, but it's pretty weird nonetheless.

  • Hello pastim,

    you must have been lucky [;)]. A little less than 47 years for me, and it depends on what you call similar, but I can't say I haven't encountered "unfortunate interactions".

    All I am doing is copying a file [...] problems [with] .wine files
    Using the Linux port of 7z? Or also when using the coreutils cp?


  • With 7z.

    Given that Sophos in their wisdom (?) may stop support before 2023 (it's not 100% clear), I'm now contemplating whether to go back to ClamAV (which I used several years ago) or pay for something like ESET.