
High httpproxy memory usage with 9.713 and 9.714

Hello,

I have a pair of virtual UTMs that have run for years with about 4 GB of RAM allocated to them. After the upgrade from 9.712 to 9.713 in November, I noticed my swap usage climbing beyond its normal 10-15% level. The culprit was the httpproxy process, so I added about 0.5 GB to the VMs, which brought swap usage back to about 15%. This past week I updated to 9.714 and saw the httpproxy process growing much larger, driving swap usage into the 60% range.

The two systems run with almost identical configurations that change very little over time, and our usage patterns have not changed much either. I have not noticed anything in the release notes suggesting a change that should require significantly more memory, so my suspicion at this point is that the httpproxy process has a memory leak.

The graph below shows memory and swap usage under 9.714 since a restart last week. Here is the current httpproxy memory/swap usage from top:

  PID USER      PR  NI  VIRT  RES  SHR S   %CPU %MEM    TIME+  SWAP COMMAND                                                                                                      
 4776 httpprox  20   0 6202m 1.6g 3996 S      1 38.4  46:09.01 4.4g httpproxy
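
To put numbers on the growth rather than eyeballing the graph, something like the sketch below could log the proxy's figures over time. This assumes a Python 3 interpreter is available on the appliance (which may not be the case), and the PID is the one from the top output above, so it changes after every restart.

    import time

    PID = 4776                                # httpproxy PID from the top output above
    FIELDS = ("VmSize", "VmRSS", "VmSwap")    # /proc/<pid>/status reports these in kB

    def snapshot(pid):
        values = {}
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                key = line.split(":")[0]
                if key in FIELDS:
                    values[key] = int(line.split()[1]) // 1024   # kB -> MiB
        return values

    while True:
        v = snapshot(PID)
        print(time.strftime("%Y-%m-%d %H:%M"),
              f"VIRT={v.get('VmSize', 0)}M RES={v.get('VmRSS', 0)}M SWAP={v.get('VmSwap', 0)}M",
              flush=True)
        time.sleep(600)                       # sample every ten minutes

Left running for a day or two, steadily climbing RES and SWAP figures with no corresponding change in traffic would back up the leak theory.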
                                                                                                     

--Larry



  • I'm still on 9.710. After ~13.5 days, here are my stats.

    The UTM VM has 8 GB of RAM assigned. That seems about in line with your 4.5 GB assignment; 64-bit mode is enabled as well.

    I bumped the RAM up to 16 GB (from 8 GB), so we'll see what it looks like after a week or two of usage.

  • As Amodin notes, that's 9.710 vs. 9.713 or 9.714. Mine had been running with 4 GB allocated for years, but once I went from 9.712 to 9.713 the httpproxy process began to grow, driving swap usage up. I didn't mind the small (10-15%) but consistent swap usage, since throughput was adequate; the problem was that swap usage kept growing. Since I didn't see anyone else mentioning this, I added 0.5 GB of memory, and memory and swap usage seemed to level out.

    Once I updated from 9.713 to 9.714 and saw swap climbing to 60%, I figured it was time to raise a red flag, as this is not workable when the stated system requirement is 2 GB. Either the minimum requirement needs to be raised or the leak plugged. ;-}

    Jay Jay, it would be interesting to see the SWAP column added to your output (f/F, then P, in top)... note that my httpproxy process is up to 4.4 GB of swap used.

    --Larry

  • I'll keep that in mind. Interestingly, looking at it right now, top is showing swap for some processes, but the summary at the top shows 0M of swap used. Something's not adding up. This is with 16 GB allocated to the VM; I'd expect it to use no swap at all.
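
    If it helps narrow that down, a quick sketch like the one below (assuming a Python 3 interpreter is on the box) would compare the kernel's per-process swap accounting against the system-wide figure the summary line is based on:

      import glob

      def vm_swap_kb(status_path):
          try:
              with open(status_path) as f:
                  for line in f:
                      if line.startswith("VmSwap:"):
                          return int(line.split()[1])       # kB
          except OSError:
              pass                                          # process exited mid-scan
          return 0

      # Sum the kernel's per-process swap accounting...
      per_process = sum(vm_swap_kb(p) for p in glob.glob("/proc/[0-9]*/status"))

      # ...and compare it with the system-wide figure the summary line uses.
      meminfo = {}
      with open("/proc/meminfo") as f:
          for line in f:
              parts = line.split()
              meminfo[parts[0].rstrip(":")] = int(parts[1])   # kB

      used = meminfo["SwapTotal"] - meminfo["SwapFree"]
      print(f"Sum of per-process VmSwap  : {per_process // 1024} MiB")
      print(f"Swap used per /proc/meminfo: {used // 1024} MiB")

    If the per-process total comes back near zero while top still shows SWAP values for individual processes, that column is probably being derived rather than read from the kernel.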

    Upgrading isn't a big deal. I've been meaning to do it for some time now, just haven't gotten around to it. I need to generate a backup image, then create a snapshot, then upgrade. If it fubars, I can roll back the snapshot.

  • The "SWAP" column in top is pretty meaningless. It is actually just the difference between the total memory space requested and the total physical memory in use for the process; it does not tell you how much of that memory is, or has ever been, in use, much less how much actual swap space is taken up or how much swap activity is being caused.

    It's quite possible for a program to tell the operating system it needs 2.1 GB of RAM but never use most of it. The operating system only allocates real memory (RAM or swap) when it is used, not when it is mapped or requested. That's why, on your system with 16 GB of physical RAM, it has never actually had to consume any swap space.
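
    You can see this behaviour directly with a small sketch like the one below (Python 3 assumed; the 2 GiB figure is just an example). It reserves a large anonymous mapping and then writes to only part of it, so VIRT jumps immediately while RES grows only for the pages that are actually touched:

      import mmap

      def vm_mib(field, pid="self"):
          # Read a Vm* line (reported in kB) from /proc/<pid>/status
          with open(f"/proc/{pid}/status") as f:
              for line in f:
                  if line.startswith(field + ":"):
                      return int(line.split()[1]) // 1024
          return 0

      def report(label):
          print(f"{label:<12} VIRT={vm_mib('VmSize')}M RES={vm_mib('VmRSS')}M")

      report("before map")

      # Reserve 2 GiB of anonymous memory: the address space is granted at once...
      buf = mmap.mmap(-1, 2 * 1024**3)
      report("after map")

      # ...but physical pages (RAM or swap) are only committed once they are written.
      buf[:100 * 1024**2] = b"\x00" * (100 * 1024**2)
      report("after touch")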

    The answer to this question on Server Fault explains this more completely. You can see it most clearly in the last two lines of your top output:

    • httpd: VIRT = 450m, RES = 19m, so SWAP = 450m - 19m ≈ 430m
    • httpproxy: VIRT = 1526m, RES = 1.2g, so SWAP = 1526m - 1.2g ≈ 334m

    Where

    • VIRT is the 'virtual memory space size', in other words 'how much memory the program has asked for'
    • RES is the 'resident memory consumption for this process alone', i.e. 'how much physical memory this process is really using right now'
    • SHR is 'memory in use that is or could be shared with other processes', i.e. 'how much of this process's memory is also common to another process'
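
    To make the difference concrete, a short sketch along these lines (Python 3 assumed) prints both top's derived figure and the kernel's own VmSwap counter for a process. Pointed at the httpproxy PID from the first post, it would show how much of that 4.4g is memory that has really been swapped out:

      def vm_kb(pid, field):
          # Return a Vm* value in kB from /proc/<pid>/status (0 if the field is absent).
          with open(f"/proc/{pid}/status") as f:
              for line in f:
                  if line.startswith(field + ":"):
                      return int(line.split()[1])
          return 0

      def compare(pid):
          virt = vm_kb(pid, "VmSize")    # roughly top's VIRT
          res = vm_kb(pid, "VmRSS")      # roughly top's RES
          swap = vm_kb(pid, "VmSwap")    # what the kernel says is actually swapped out
          print(f"PID {pid}: VIRT-RES = {(virt - res) // 1024} MiB (top-style 'SWAP'), "
                f"VmSwap = {swap // 1024} MiB (real swap)")

      compare(4776)    # the httpproxy PID from the original post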