(This would ideally be a reply to joshtronic's comment)

--bwlimit=XX in fact has the opposite problem; the transfer rate is indeed a moving average, as Rsync Basics helpfully explains:

Due to the nature of rsync transfers, blocks of data are sent, then if rsync determines the transfer was too fast, it will wait before sending the next data block. The result is an average transfer rate equaling the specified limit.

It's not clear whether the average is taken across files, but in any case it is not true that

the first file is sent full blast and subsequent files are throttled to attempt to get down to the specified bandwidth value

In fact the first file will be throttled as long as it is large enough for the averaging to kick in (which means all but the smallest files).
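
To make the mechanism concrete, here is a rough Python sketch of the kind of loop the rsync documentation describes. It's illustrative only (rsync itself is C, and the function and names here are mine, not its code):

```python
import time

def throttled_send(chunks, limit_bps, send):
    """Keep the cumulative average rate at limit_bps by sleeping
    after each block, per the quoted description of rsync's
    behaviour. `send` is a stand-in for the real socket write."""
    start = time.monotonic()
    total = 0
    for chunk in chunks:
        send(chunk)
        total += len(chunk)
        # How long `total` bytes *should* have taken at the limit.
        expected = total / limit_bps
        elapsed = time.monotonic() - start
        if elapsed < expected:  # i.e. "the transfer was too fast"
            time.sleep(expected - elapsed)
```

Because the check runs per block rather than per file, the very first file starts being delayed as soon as a few blocks have gone out, which is why only the smallest files escape the averaging.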

You're right that trickle would be a better solution, but what I understand from the explanatory paper ("Trickle: A Userland Bandwidth Shaper for Unix-like Systems") is that trickle also works by delaying I/O based on a moving transfer average. I guess the hope in recommending it is that it uses a higher-frequency measurement to apply the average. I haven't been able to find any data online that confirms this to be the case (although the paper above does refer to rsync's code as "simple", suggesting the authors of trickle think theirs does a better job).
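
For comparison, a higher-frequency scheme would average over a short sliding window instead of the whole transfer. This is only a guess at the sort of approach the trickle paper implies, with a made-up window size; it is not trickle's actual code or API:

```python
import collections
import time

def windowed_send(chunks, limit_bps, send, window_s=0.5):
    """Throttle against a short sliding window rather than the
    whole transfer, so the limiter reacts within window_s seconds
    instead of drifting with the long-run average."""
    recent = collections.deque()  # (timestamp, nbytes) samples
    for chunk in chunks:
        send(chunk)
        now = time.monotonic()
        recent.append((now, len(chunk)))
        # Forget samples older than the window.
        while recent and recent[0][0] < now - window_s:
            recent.popleft()
        sent = sum(n for _, n in recent)
        if sent > limit_bps * window_s:  # over budget this window
            time.sleep(sent / limit_bps - window_s)
```

How often the average is sampled is the whole game here: a short window clamps bursts within a fraction of a second, while a whole-transfer average can take much longer to correct.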
