Please see: What is the data compression ratio of Firestreamer?
In short, it depends heavily on the actual data. In the demo video, the compression ratio reaches 28:1 for DPM's own database.
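For clarity, a 28:1 ratio simply means the file stored on disk is 1/28 the size of the logical backup data. The figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical figures to illustrate what a 28:1 ratio means;
# the actual ratio depends entirely on the data being backed up.
original_mb = 28_672              # logical data written by DPM (assumed)
stored_mb = original_mb / 28      # bytes actually stored on disk at 28:1
print(f"{original_mb} MB of backup data -> {stored_mb:.0f} MB on disk")
```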
Data Compression - Firestreamer vs. LTO-drives
Hi John
Yes, of course it is. JPEGs compress less than, say, a TXT file that contains 1 GB of AAAAAAAs :-)
In your software, is there a delay/slow-down when using compression vs. no compression? 5%? 15%? 25%? (Not that you will be held accountable for it.)
Would a large server with lots of CPUs and memory maybe help? :-p
Best regards
Lars Kot
Firestreamer uses the NTFS compression that is built into Windows. In our observation, incompressible data causes higher CPU usage.
A faster CPU will reduce the time needed to compress data. The amount of system memory has no effect on the compression. Multiple CPUs will improve the performance of multiple simultaneously working virtual tape drives (Firestreamer creates a separate thread for each tape drive, plus one additional thread per library).
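The threading model described above can be sketched in toy form. This is only an illustration of "one thread per drive, plus one per library", not Firestreamer's actual code:

```python
import threading

def drive_worker(drive_id):
    # Placeholder for per-drive work (reads, writes, compression).
    pass

def library_worker(library_id):
    # Placeholder for per-library work (media changer commands).
    pass

# One library with three virtual drives -> 3 + 1 threads, mirroring
# the "thread per drive, plus one per library" description above.
threads = [threading.Thread(target=library_worker, args=("lib0",))]
threads += [threading.Thread(target=drive_worker, args=(f"drive{i}",))
            for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(threads))  # 4
```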
Best regards,
John Smith
Cristalink Support
Hi John
Our short-term disk is a LUN on 45 × 2 TB SATA disks (in 5 × 8+1 (data+parity) RAID5).
The long-term disk is another LUN on another 45 × 2 TB SATA disks (in 5 × 8+1 (data+parity) RAID5), on another disk shelf (but the same SAN).
What throughput in MB/sec should we expect overnight when copying/transferring data to FSRM files / "tapes"?
Currently we are getting approx. 21 MB/sec with Firestreamer/DPM compression enabled - but I do not think this is very much?
What is the preferred NTFS block size for your Firestreamer software, for the LUN holding the FSRM files? 8K? 64K?
Best regards,
Lars
In our experience, RAID5 is slow. Try copying a file directly to your drive with Windows Explorer. Firestreamer's performance without compression should be approximately the same. If it is, and you have enabled compression, then what you have is likely the best you can get.
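As a rough sanity check on the 21 MB/sec figure, here is the arithmetic for how much data that rate moves overnight. The 10-hour backup window is an assumption:

```python
# Rough arithmetic: total data moved at a sustained rate,
# assuming a 10-hour overnight backup window.
rate_mb_s = 21
window_hours = 10
total_gb = rate_mb_s * 3600 * window_hours / 1024
print(f"{total_gb:.0f} GB")  # ~738 GB in 10 hours
```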
The bigger the block size, the better. Smaller block sizes are for small files.
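To illustrate why small clusters only matter for small files, here is a quick sketch of the slack (wasted space in a file's final, partially used cluster) for a tiny file versus a large FSRM-style file. The file sizes are just examples:

```python
def slack(file_size, cluster):
    # Bytes lost in the file's final, partially used cluster.
    return (-file_size) % cluster

# A 1 KB file on a 64K-cluster volume wastes 63 KB; a multi-GB FSRM
# file wastes at most one cluster, so large clusters cost nothing there.
print(slack(1024, 64 * 1024))               # small file: 64512 bytes of slack
print(slack(20 * 1024**3 + 1, 64 * 1024))   # large file: at most 65535 bytes
```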
Best regards,
John Smith
Cristalink Support
This is an old thread, but our experience may help others :)
We are using FS to replace an LTO-3 SCSI loader which is just plain unstable under Windows 2008 (everyone outside of MS blames the MS SCSI driver architecture in 2008. . .)
As compression will always be a relatively individual issue, I am not promising anything, just reporting our experiences.
We use on-wire compression from DPM and have tape compression turned on. Tape compression seems to result in very heavy loading of CPU0/core0 and frequent 100% CPU usage "spikes" on all other cores during tape backups. However, the results are definitely worth it, though it would be nice to offload this somehow. . . (see my other post)
Here are examples of how much data DPM has written to our "Tapes" (20GB filesize) with various backup contents:
86,369 MB: Directory containing MS-SQL DB files
76,072 MB: SQL DB object
57,884 MB: SQL DB object
52,682 MB: 2 VHDs + server system Protection
50,598 MB: Sharepoint Child Partition snapshot
48,651 MB: 50% Server file system + 50% VHDs
34,306 MB: 60% server file system + 40% VHDs
- - - The following "tapes" are not full - - -
26,698 MB: Server File system (drive with SAP kernel directories) "Tape" filesize: 12,988 MB
21,749 MB: Exchange Server filesystem - "Tape" filesize: 12,715 MB
16,077 MB: Server File system - "Tape" filesize 10,917 MB
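For reference, the full tapes above imply these effective compression ratios (data written vs. the file size on disk), assuming the 20 GB tape file is 20,480 MB:

```python
# Effective compression ratios implied by the figures above:
# data DPM wrote vs. the 20 GB "tape" file size (assumed 20,480 MB).
tape_mb = 20 * 1024
for written_mb in (86_369, 76_072, 34_306):
    print(f"{written_mb} MB -> {written_mb / tape_mb:.1f}:1")
```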
FYI, the backup "tasks" (the complete set of individual jobs belonging to a protection group "task") also run around 30% faster owing to "parallel" "tape" backups using 2-3 "drives". We tried using 4 "drives", but this resulted in us flooding the poor old DPM server and its NICs, the QNAP, etc., and DPM getting a little confused, reporting that tape jobs failed owing to consistent RPs not being available. When repeated manually after a manual RP synch, the failed jobs all ran without issues. Reduced to 2 "drives", that protection group "task" also repeated without issues.
Hope that this is of interest and help.
John. :)