I manage many highly “volatile” (frequently changing) websites that require manual editing with Virtualmin. Creating local users with the same UIDs as on the server, just so I can mount public_html over NFS and edit the files, has become unmanageable.
Is there a way for me to develop locally under my Ubuntu (10.04.1 LTS) user account and publish remotely under each virtual server’s own user credentials? (The server runs Virtualmin Pro on CentOS 5.4.)
I am doing web development on many, many sites every day, and all of them need manual editing for one reason or another.
The fact is that, under normal circumstances on Linux, you can only connect to an NFS share if you have the same user/group on both systems.
This forces me to logout/log back in as many users in a day.
Add to that the fact that UIDs and GIDs are difficult to keep the same on both my dedicated server and my work box (different OSs, different setups). This has become unmanageable.
I need a solution to develop locally on my computer, whatever the website, and to publish to my Virtualmin Pro web server, all from my local account.
What about using SCP or FTP, or maybe Samba, to upload the changed files to the live site? With those protocols, you can either specify or configure which live-server-local user (ID) is used for which remote-access username.
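With scp, for example, the uploaded files end up owned by whichever account you authenticate as, so no local/remote UID matching is needed. A sketch (the host name and paths here are placeholders, not taken from your setup):

```shell
# push a locally edited docroot to the live server, authenticating
# directly as the domain's own Virtualmin user; the files arrive
# owned by that user, regardless of your local UID
scp -r /home/me/sites/domain.tld/public_html user@domain.tld:/home/domain/
```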
Well, there’s the “Cluster Copy Files” function; I suppose that’s as close as you can get to a Virtualmin-built-in file synchronization. But I have my doubts that it will do exactly what you’re looking for. It seems, for example, that the function does not preserve file dates, owners, and access modes. (It’s meant for different things than file sync.)
That rsync invocation will recursively copy changed files from the source to the destination, under username “remoteuser”, preserving file dates, executability, and permissions. I suppose it can’t get any easier than that.
@Duke: Yepp that’s right, it is. Locutus of Borg, assimilation in progress.
Bringing the files down at 500 KB/s took about a minute for 25 MB.
Changing nothing and resyncing right away in the same direction took a few seconds, which is normal since nothing had changed. The other way took longer, but that is just my slow upload speed.
I changed 1 small file somewhere…
Resynced with:
rsync -vzatpe ssh /home/local_path/domain.tld user@domain.tld:/home/remote_path/public_html/
The rsync the other way around took a long time at 80 KB/s. And it re-uploaded all the files. AND NOT EVEN THE ONE I MODIFIED!!!
Not wanting to waste my time again, I tested this with a dry run:
rsync -nvuzatpe ssh /home/local_path/domain.tld user@domain.tld:/home/remote_path/public_html/
It showed smaller send/receive totals, so I tested again (without the dry run):
rsync -vuzatpe ssh /home/local_path/domain.tld user@domain.tld:/home/remote_path/public_html/
Only to end up with my modified file still present locally but not on the remote.
Tried again with:
rsync --size-only -vuzatpe ssh /home/local_path/domain.tld user@domain.tld:/home/remote_path/public_html/
to no avail… My modified file is still not on the remote server.
Tried multiple switches and ended up with this:
rsync --delete --size-only -vuzr -e ssh /home/local_path/domain.tld user@domain.tld:/home/remote_path/public_html/
It still doesn’t work.
The only file that I modified is not re-uploaded.
I am able to re-upload everything, but with my 80 KB/s upload, I would prefer that only the modified files be uploaded.
I’ve got it to sync only modified files. But it also updates some other files I haven’t updated locally.
Is it possible this is because Apache touches the files on the server? rsync would then think they are modified, because they are no longer identical to the local copies, and would send those again?
Took about 5 seconds. (Both machines are VMs on the same host, and it was about 72 MB to copy). Then I modified a file in /test/html and added two new ones. Then:
Yes… but with one problem. On the sync back to where the files were originally, I had to remove the last dir in the command; otherwise all the files were sent into a subdir bearing the same name as that dir.
And the -a option is a nuisance when the system times (file timestamps) differ for some reason.
Here are the commands that worked:
fetch the original files from the prod server:
rsync -e ssh -avz user@domain.tld:/home/remote_path/public_html/catalog /home/local_path/domain_catalog/
put the modified files back:
rsync -e ssh -avz --exclude "catalog/system/cache" /home/local_path/domain_catalog/catalog user@domain.tld:/home/remote_path/public_html/
See how I had to remove the “catalog” dir in the “put back” command? If I don’t, it creates another catalog folder inside the original catalog folder and re-uploads all the files.
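That is rsync’s trailing-slash rule: a source path without a trailing slash copies the directory itself into the destination, while a trailing slash copies only its contents. A quick local demonstration with scratch directories:

```shell
src=$(mktemp -d); dst1=$(mktemp -d); dst2=$(mktemp -d)
mkdir "$src/catalog"
echo hi > "$src/catalog/index.php"

rsync -a "$src/catalog"  "$dst1/"   # no slash: creates dst1/catalog/index.php
rsync -a "$src/catalog/" "$dst2/"   # slash: contents only, dst2/index.php

ls "$dst1/catalog/index.php" "$dst2/index.php"
```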