Sizing a NetWorker Server and Storage Nodes
Several factors affect NetWorker server performance, including CPU, memory, internal paths, I/O bandwidth, and network bandwidth. Of these, I/O is the most important, because a backup server is mainly concerned with ingesting data from clients and writing it out to backup devices. A fast CPU is good, but scalability considerations around I/O mean it is generally less expensive to add NICs or HBAs to increase I/O capacity than to upgrade the CPU; smaller form-factor servers offer less room for such upgrades.
Sun Microsystems determined that, as a rule of thumb, roughly 5 MHz of CPU is needed per 1 MB/second of data moved in or out of a NetWorker storage node. Because backup data both enters and leaves the storage node, moving 100 MB/sec requires about 1 GHz (500 MHz to ingest, 500 MHz to move the data from the storage node to the device). The choice of operating system typically does not play a major part in performance tuning.
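The rule of thumb above can be sketched as a simple calculation; the function name and constant are illustrative, not an official sizing tool:

```python
# Rule-of-thumb CPU sizing for a NetWorker storage node.
# Assumes ~5 MHz of CPU per 1 MB/s of data movement, counted
# once for ingest and once for writing to the backup device.

MHZ_PER_MBS = 5  # Sun rule of thumb

def cpu_mhz_required(throughput_mb_s: float) -> float:
    """CPU (MHz) needed to sustain the given backup throughput."""
    ingest = throughput_mb_s * MHZ_PER_MBS   # client -> storage node
    output = throughput_mb_s * MHZ_PER_MBS   # storage node -> device
    return ingest + output

print(cpu_mhz_required(100))  # 100 MB/s -> 1000.0 MHz (1 GHz)
```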
Memory requirements are determined by the number of clients: fewer than 50 clients require only 4GB of RAM, 51-150 clients require 8GB, and more than 150 clients require 16GB. Remember to account for other applications and configuration on the OS. Continually monitor the page file and minimize swapping wherever possible.
The nsrd, nsrexecd, and save processes each take about 16MB of memory; savegrp requires 6.5 MB per running process.
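Combining the client-count guideline with the per-process figures gives a rough memory estimate. The helper names and the simple additive model are assumptions for illustration, not an official EMC sizing method:

```python
# Rough RAM estimate for a NetWorker server from the guidelines above.

DAEMON_MB = 16    # nsrd, nsrexecd, and each save process
SAVEGRP_MB = 6.5  # per running savegrp

def base_ram_gb(clients: int) -> int:
    """Baseline RAM (GB) from the client-count guideline."""
    if clients < 50:
        return 4
    if clients <= 150:
        return 8
    return 16

def process_overhead_mb(save_sessions: int, savegrps: int) -> float:
    """Extra memory (MB) for nsrd, nsrexecd, save and savegrp processes."""
    # nsrd + nsrexecd + one save process per concurrent session
    return (2 + save_sessions) * DAEMON_MB + savegrps * SAVEGRP_MB

print(base_ram_gb(120))            # 8
print(process_overhead_mb(10, 4))  # 218.0
```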
The internal bus is the single most important component. It ties all internal components together, and the bus type, data width, clock rate, and motherboard all play a part in bus speed. When calculating bus speed, make sure you have sufficient bus bandwidth for all devices. Look for separate buses where possible; the more buses, the better. A faster bus does not guarantee faster performance, but the bus is normally the bottleneck, so look for PCI Express for both the server and storage nodes.
NetWorker catalog sizing: Catalog Size = (n + (i*d))*c*160*1.5
- n = number of files to back up
- d = days between full backups (i.e., the number of incrementals per cycle)
- i = incremental data change rate: i = n*(% data changed, as a decimal)
- c = number of backup cycles
- 160 = average estimated size (bytes) of each catalog entry
- 1.5 = multiplier for growth
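The catalog sizing formula can be sketched as follows; the sample numbers in the example call are made up for illustration:

```python
# Catalog Size = (n + i*d) * c * 160 * 1.5

ENTRY_BYTES = 160  # average catalog entry size
GROWTH = 1.5       # growth multiplier

def catalog_size_bytes(n: int, change_rate: float, d: int, c: int) -> float:
    """Estimated catalog size in bytes.

    n           -- number of files to back up
    change_rate -- fraction of data changed per incremental
    d           -- days between full backups (incrementals per cycle)
    c           -- number of backup cycles
    """
    i = n * change_rate  # files changed per incremental
    return (n + i * d) * c * ENTRY_BYTES * GROWTH

# 1 million files, 2% daily change, weekly fulls (6 incrementals), 4 cycles:
size = catalog_size_bytes(1_000_000, 0.02, 6, 4)
print(f"{size / 2**30:.2f} GiB")  # about 1.00 GiB
```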
The catalog location can be changed easily. It can reside on direct-attached storage, though this complicates growth, or on SAN-attached storage to leverage DR and growth capability.
Storage performance is measured in I/O operations per second (IOPS), and if there are not sufficient IOPS, performance will degrade. Generally, problems will occur if available IOPS fall below 50% of what is required. To reduce the IOPS required, spread backup times out, minimize external reporting, and run index and bootstrap backups outside the backup window, because they originate from the NetWorker server itself.
The minimum IOPS requirement for a NetWorker server is 30, increasing with load. Divide the number of concurrent sessions by 3 and add the result to the base to accommodate parallelism, then add 50, 150, or 400 for a small, medium, or large server when bootstrap and index backups take place concurrently. Other factors will also contribute: NDMP requires a large number of IOPS due to extra processing, as do large volume counts or many clients starting backups at the same time.
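The IOPS estimate above can be sketched as a small calculation; the size labels and function name are assumptions for illustration:

```python
# Estimated IOPS requirement for a NetWorker server:
# base 30, plus sessions/3 for parallelism, plus a bootstrap/index
# surcharge by server size when those backups run concurrently.

BASE_IOPS = 30
BOOTSTRAP_IOPS = {"small": 50, "medium": 150, "large": 400}

def required_iops(concurrent_sessions: int, server_size: str = "") -> float:
    """Estimated IOPS needed by the NetWorker server."""
    iops = BASE_IOPS + concurrent_sessions / 3  # parallelism overhead
    if server_size:                             # bootstrap/index backups
        iops += BOOTSTRAP_IOPS[server_size]     # running concurrently
    return iops

print(required_iops(90, "medium"))  # 30 + 30 + 150 = 210.0
```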
Monitor I/O, disk, CPU, and network performance using tools such as DPA, Perfmon, iostat, vmstat, and netstat. Look for unusual activity before, during, and after backups, and monitor over time to maintain a baseline for identifying trends.
Use FTP to isolate network performance from the tape devices. Also, compare passive versus active FTP to determine whether packet filtering is in play.
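The same idea of timing a transfer independent of tape can be sketched with a raw TCP transfer. Here both ends run on loopback so the script is self-contained; replacing the listener with a remote host would test a real link. None of this is NetWorker tooling, just an illustrative measurement:

```python
# Time a TCP transfer to a local sink to measure raw network-path
# throughput, independent of any tape or disk device.
import socket
import threading
import time

def _sink(server: socket.socket) -> None:
    """Accept one connection and discard everything received."""
    conn, _ = server.accept()
    with conn:
        while conn.recv(1 << 16):
            pass

def measure_throughput_mb_s(total_mib: int = 64) -> float:
    """Send total_mib MiB of RAM-generated data and return MB/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # ephemeral loopback port
    server.listen(1)
    threading.Thread(target=_sink, args=(server,), daemon=True).start()
    chunk = b"\0" * (1 << 20)      # 1 MiB chunk, generated in RAM
    start = time.perf_counter()
    with socket.create_connection(server.getsockname()) as s:
        for _ in range(total_mib):
            s.sendall(chunk)
    server.close()
    return total_mib / (time.perf_counter() - start)

print(f"{measure_throughput_mb_s():.1f} MB/s")
```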
NetWorker comes with two test tools: bigasm and uasm. Bigasm tests the performance of the client, network, and tape, removing disk delay by generating a large amount of data in RAM and transferring it over the network. Uasm measures disk speed by writing the data it reads to a null device.