Server replication fails with "Out of memory!" on 2GB server

Hi. I have an issue with replication of a site. It worked fine a few times, but now it fails with "Out of memory!". Initially the server had 1GB of RAM and replication worked OK. When it started failing with "Out of memory!" I upgraded the server to 2GB, but it still fails.

I cannot believe 2GB of RAM is not enough, so is there some other issue? It's only one site, less than 1GB in size, and it worked on a 1GB server back when the site was over 2GB, before I deleted some files.


Starting replication from of virtual servers
Finding source and destination systems …
… found source and destination
Refreshing domains on source system …
… done

Creating temporary directories …
… done

Backing up 1 virtual servers on source system …
… created backup of 777.99 MB

Transferring backups to destination systems …
… done

Restoring backups on destination systems …
… 0 restores succeeded, 1 failed

Failed to restore on :
Checking for missing features …
… all features in backup are supported
Checking for errors in backup …
… no errors found
Starting restore …
Extracting backup archive files …
… done
Restoring backup for virtual server …
Restoring virtual server password, quota and other details …
… done
Updating administration password and quotas …
… done
Restoring Cron jobs …
… done
Extracting TAR file of home directory …
… done
Setting ownership of home directory …
… done
Out of memory!

Replication failed - see the output above for the reason why.

For info, Webmin says 22% of real memory used on the target server, and 45% of local disk space used.


Hmm, what is the output of this command:

free -m

That will show what the Linux kernel says the RAM situation there looks like.

Also, how large is the Virtual Server that you’re replicating?


The virtual server I am replicating is only 936M in total (using the du command as root).
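For reference, that figure came from a `du` summary of the directory tree. A minimal demo of the same measurement (on the server itself you would point `du` at the domain's home directory, e.g. something like `/home/yourdomain` — that path is an assumption):

```shell
# demo: measure a directory's total human-readable size with du,
# the same way the 936M figure was obtained; /tmp/du_demo is a
# throwaway directory just for illustration
mkdir -p /tmp/du_demo
dd if=/dev/zero of=/tmp/du_demo/file bs=1M count=3 2>/dev/null
du -sh /tmp/du_demo
```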

The source server memory:

[root@source]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3950        1266         531         484        2151        2136
Swap:           255         199          56

The target server:
[root@target ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           2001         351        1037          38         611        1461
Swap:             0           0           0

The Cloudmin server

[root@cloudmin ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          3950       3248        702          0        236       1528
-/+ buffers/cache:       1483       2467
Swap:          255        254          1

Which system do you think is out of memory? Could it be Cloudmin, or the target?

If I replicate the same site to a 1GB server I have, it works OK. So this “Out of memory” report is a bit strange, I think.

The source and Cloudmin servers are on Linode, and the target is on DigitalOcean.

Does anyone have any idea about this?

So this happens every night. The initial replication that worked serves fine when our failover DNS kicks in, but each night's new replication is just not working.

I really need to get to the bottom of this. I can't see it being a memory issue; it would be great if I could access some more descriptive logs than just "Out of memory!"


It looks like the issue is on your target server.

That particular server has 2GB of RAM, and no swap.

Would it be possible to add some swap space to that particular server? Or even some additional RAM? That would give it some more breathing room during the restore process.
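If you go the swap route, a swap file is usually the quickest option on a cloud VM. A minimal sketch, assuming `/swapfile` is an unused path and you are running as root (adjust the size to taste):

```shell
# create and enable a 1 GiB swap file; /swapfile is an assumed,
# currently-unused path — run these as root on the target server
fallocate -l 1G /swapfile     # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile           # swap files must not be readable by other users
mkswap /swapfile              # format the file as swap space
swapon /swapfile              # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
```

After that, `free -m` on the target should show a non-zero Swap line instead of `0 0 0`.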