How can I reduce resource usage when copying a large file?


I need to move a large file (a corrupt MySQL table, ~40GB) onto a separate server in order to repair it. (When I tried to repair it on my production server, it quickly killed the server.)

In order to do this, I want to rsync the .frm, .MYI and .MYD files from my production server to a cloud server.

I am copying the files from /var/lib/mysql/{database}/ to /home/{myuser} so that I don’t need to enable root access for the rsync command, and so I can be 100% sure the database files aren’t in use (they shouldn’t be written to or read from, but obviously I don’t want to shut down my production database to make sure).

The first file I tried to copy was around 10GB. I am transferring from one part of my production server to another, i.e. within the same array of disks.

Unfortunately the copy command “cp filename newfilename” consumed so many resources that it brought the server to a standstill.

How can I use less resources when copying the file to a different directory? (It doesn’t really matter how long it takes).

Assuming I manage to do this, what resource usage can I then expect when rsyncing the file to the cloud?

Can anyone suggest a better way to do this? I am quickly running out of disk space so need to get this table repaired and archived ASAP.

Solution:

Have you tried prefixing the command with nice -n10?

10 is the default adjustment applied when nice is run without an argument. The range goes from -20 (highest priority) to 19 (lowest).
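As a minimal sketch (the database name and file paths here are placeholders for the questioner’s actual files):

```shell
# Run the copy at the lowest CPU priority (19 is the "nicest" value).
# Paths are placeholders -- substitute your own table files.
nice -n 19 cp /var/lib/mysql/mydb/bigtable.MYD /home/myuser/bigtable.MYD
```

Note that nice only affects CPU scheduling; a large cp is usually I/O-bound, which is why the I/O-level options below matter more.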

Two choices besides the rsync bandwidth limitation:

  • ionice -c 3 cp foo bar
  • buffer -u 150 -m 16m -s 100m -p 75 -i foo -o bar

ionice talks to the kernel’s I/O scheduler. buffer is a circular-buffer program meant to help character devices run more efficiently, but the -u 150 option pauses 150 microseconds between writes, which, according to the manual, may be enough to give the disk room to breathe.
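The two priority mechanisms can be combined; a hedged example (the paths are again placeholders):

```shell
# Idle I/O class (-c 3): the copy only gets disk time when no other
# process wants it. nice additionally lowers its CPU priority.
# Paths are placeholders for the table files in the question.
ionice -c 3 nice -n 19 cp /var/lib/mysql/mydb/bigtable.MYD /home/myuser/
```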

Both ionice and buffer are available in a stock Ubuntu install. iotop is handy if your kernel was built with CONFIG_TASK_DELAY_ACCT, but mine was not, which severely limits the usability of the command. I already know which command is drowning my hard drive; I just want to give it some breathing room.

Additionally, while the copy is in progress, watch the output of iostat -x 1 (usually in the sysstat package) and check that the %util field for your device stays at 90% or less. If it sits at 99-100%, you are starving other processes of I/O.
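For example, to watch just the utilization column for one device (sda is a placeholder device name; column layout varies slightly between sysstat versions, but %util is the last field):

```shell
# Print the %util (last) column for device sda once per second.
# "sda" is a placeholder; use the device backing your filesystem.
iostat -x 1 | awk '$1 == "sda" { print "util:", $NF }'
```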

Use rsync with the --bwlimit=KBPS switch (limits I/O bandwidth; KBytes per second). Play around with a smaller file first to find the optimal mix of transfer speed and system load. Monitor in a second shell with “vmstat 1”.
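For example, to cap the transfer at roughly 5 MB/s (the host name and paths are placeholders):

```shell
# --bwlimit takes KBytes per second, so 5000 is about 5 MB/s.
# "user@cloudserver" and the paths are placeholders.
rsync -av --bwlimit=5000 /home/myuser/bigtable.MYD user@cloudserver:/backup/
```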

One alternative is:

scp -l ${KBPS} ${src} ${dest}

(note that scp’s -l limit is specified in Kbit/s, not KBytes/s)

But I don’t think it will work if ${src} is a growing file. Can you suggest a way to copy a file while waiting for the source to be closed?
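For the growing-file case, one possible sketch (the path and the 5-second polling interval are arbitrary assumptions) is to poll with fuser until no process holds the file open, then copy it:

```shell
# Sketch: wait until no process has SRC open, then copy it.
# fuser exits non-zero when the file is not in use by any process.
SRC=/home/myuser/growing.MYD     # placeholder path
while fuser "$SRC" >/dev/null 2>&1; do
    sleep 5                      # arbitrary polling interval
done
cp "$SRC" /home/myuser/archive/  # placeholder destination
```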
