[Unison-hackers] Memory exhaustion issue (#1068)

Michael von Glasow michael at vonglasow.com
Sat Nov 23 16:26:43 EST 2024


On 23/11/2024 23:01, Greg Troxel wrote:
> Michael von Glasow<michael at vonglasow.com> writes:
>
>> Switching profiles should be sufficient – this created a new unison
>> process on the server, while the old one gradually freed up its memory
>> (but kept running).
> Sorry, I guess I was unclear.  I am not really looking for just
> sufficient to make your case, but the simplest possible way to reproduce
> the problem, expressed programmatically so that others can run it (after
> reading the code to feel it is safe).  So that's no GUI, no persistent
> server, and everything in the profile to be synced created by the
> script.
I'll look into it, but reproducing would also involve creating a huge
set of files to sync.
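Not a full reproduction yet, but a minimal sketch of the shape such a script could take (paths, sizes, and the payload below are assumptions; a real run would scale the file toward the ~16 GB range where the problem appeared):

```shell
#!/bin/sh
# Sketch of a self-contained reproduction: local roots only (no GUI,
# no persistent server), all synced content created by the script.
set -e
ROOT_A=$(mktemp -d)
ROOT_B=$(mktemp -d)
# Stand-in payload (64 MB); a real reproduction would use a much larger
# count, e.g. bs=1M count=16000 for the ~16 GB case discussed above.
dd if=/dev/zero of="$ROOT_A/big.bin" bs=1M count=64 2>/dev/null
# Text UI, non-interactive; run only if a unison binary is on PATH.
if command -v unison >/dev/null 2>&1; then
    unison -batch -auto "$ROOT_A" "$ROOT_B"
fi
ls -l "$ROOT_A" "$ROOT_B"
```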
> For comparing with other people, I think it will be more useful to talk
> about KB or MB of memory usage vs %.  Lots of people will have different
> amounts of RAM and other loads, although 1 GB such as RPI3 is pretty
> common.
In principle, +1. However, the tools I have report percentages. On this
machine one percent corresponds to roughly 10 MB, so the conversion is
easy.
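For reference, a one-liner that reports resident set size in MB rather than percent (using the current shell's PID as a stand-in; on GNU/Linux something like `pgrep -o unison` would find the real server PID):

```shell
# Read a process's resident set size from ps (reported in kB) and
# convert it to MB; substitute the unison server PID for $$.
rss_kb=$(ps -o rss= -p $$ | tr -d ' ')
echo "$rss_kb" | awk '{printf "%.1f MB\n", $1/1024}'
```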
> I personally am not interested in debugging anything older than 2.53.7
> (or 6, but there is no reason to use 6 if you are compiling).
You do you, but the trouble is that distro repositories are a bit slow
to catch up. Part of the whole update exercise was so I could finally
return to Unison from the distro repos and not have to build my own any
more. Especially since building for non-Intel platforms is quite a
hassle – I tried to get that on CI some time ago, but my attempts
stalled as setting up a cross-compilation environment proved challenging.
> So it sort of sounds like memory is allocated proportional to the size
> of the transferred file, and not freed.
>
> And, that other memory used for scan/etc. is reused.
Memory use is roughly proportional to the size of the transferred file,
though slightly more than proportional: a 16000 MB file used roughly
55 MB versus 34 MB for a 12800 MB file. The size of the archive file
also seems to matter.
>> Looking at the docs, what comes to mind is:
>>
>> - copyprog, copyprogthreshold (use external program <copyprog> for
>> copying files larger than <copyprogthreshold> kB)
> That is about to be deleted.
That would be too bad, because it seems to be a valid workaround. I
added `copythreshold = 163840` (in kB; 1% of 16 GB, roughly where the
issues started) to my profile and was able to sync.
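For anyone wanting to replicate the workaround, the relevant profile lines look roughly like this (roots are illustrative; the value is in kB, and as I understand the docs, files above it are handed to the external copy utility set by `copyprog`, an external rsync by default, instead of Unison's internal transfer):

```
# Excerpt from ~/.unison/<profile>.prf -- roots are illustrative.
root = /home/user/data
root = ssh://server//srv/data
# Copy whole files above ~160 MB (value in kB) with the external
# copyprog rather than the internal transfer, which is where the
# memory growth appeared.
copythreshold = 163840
```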
>> Or is there a way to tell Unison to stop being smart and just copy the
>> damn thing (which is presumably less memory-hungry) if a file is larger
>> than a certain size?
> I don't think so, but really that should not be necessary.  If there is
> code that uses memory when it shouldn't, we should find that and fix it.
For “cut the smartness above file size X”, `copythreshold` (not
`copyprogthreshold`, that was a typo) seems to do the trick. Please keep
that feature until memory efficiency is improved, unless there is an
alternative.