How-To: Symantec NetBackup – Increase Number of Days in Activity Monitor Job History

By default (in NetBackup 6.5.x), NetBackup keeps only 78 hours (3.25 days) of job history in the Activity Monitor. For some environments this is not enough, so this lesson shows how to increase the setting so that more days of jobs are retained in the Activity Monitor.

Query the Current Setting

[Screenshot: media_1263671092837.png]

To see what the current setting is, you can use the command:
bpgetconfig -M masterserver KEEP_JOBS_SUCCESSFUL_HOURS
This command is located under: <InstallDir>\bin\admincmd
As you can see in the screenshot, the default setting is 78 hours (3.25 days).

Changing/Increasing the Setting

[Screenshot: media_1263671519337.png]

Make a simple text file on the NetBackup master server and put the following two lines in it:
KEEP_JOBS_SUCCESSFUL_HOURS = new-number
KEEP_JOBS_HOURS = new-number
Where "new-number" appears, enter the number of hours of job history that you want to keep in the Activity Monitor.
In my example, I’ll use 128, which equals a little over 5 days.

Update Setting on Master Server

[Screenshot: media_1263671989945.png]

Now, to update the setting on the Master server, run the command:
bpsetconfig -h masterserver mytxtfilename.txt
Replace masterserver with your master server name.
Replace mytxtfilename.txt with the location and the name of the text file that you created in the previous step.
In my example, the command is:
bpsetconfig -h masterserver d:\myfile.txt
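
As a side note, if you find yourself making this change on more than one master, the two steps above are easy to script. Below is a minimal Python sketch of the same procedure, assuming a Windows master named "masterserver", a NetBackup install under C:\Program Files\Veritas\NetBackup, and a new value of 128 hours; it simply wraps the bpsetconfig and bpgetconfig commands shown above.

import os
import subprocess
import tempfile

# Assumptions (adjust for your environment):
MASTER = "masterserver"                              # your master server name
INSTALL_DIR = r"C:\Program Files\Veritas\NetBackup"  # assumed install location
NEW_HOURS = 128                                      # a little over 5 days

BPGETCONFIG = os.path.join(INSTALL_DIR, "bin", "admincmd", "bpgetconfig")
BPSETCONFIG = os.path.join(INSTALL_DIR, "bin", "admincmd", "bpsetconfig")

# Write the two settings to a text file, exactly as in the manual step above.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(f"KEEP_JOBS_SUCCESSFUL_HOURS = {NEW_HOURS}\n")
    f.write(f"KEEP_JOBS_HOURS = {NEW_HOURS}\n")
    conf_file = f.name

# Push the settings to the master (bpsetconfig -h masterserver <file>).
subprocess.run([BPSETCONFIG, "-h", MASTER, conf_file], check=True)

# Verify the new value (bpgetconfig -M masterserver KEEP_JOBS_SUCCESSFUL_HOURS).
result = subprocess.run([BPGETCONFIG, "-M", MASTER, "KEEP_JOBS_SUCCESSFUL_HOURS"],
                        capture_output=True, text=True)
print(result.stdout)

os.remove(conf_file)

Even when scripted, the NetBackup services still need to be restarted (next step) before the change takes effect.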

Restart NetBackup Services

[Screenshot: media_1263672410011.png]

In order to apply the changes, you'll need to restart the NetBackup services. Make sure no backup or restore jobs are running before you restart the services.

On a Windows master, you can use the following commands:
bpdown
bpup
These scripts are located under the <installpath>\bin directory.

On a Linux master, you can use the following commands:
service netbackup stop
service netbackup start

On a Solaris master, you can use the following commands:
netbackup stop
netbackup start

After the services are restarted, you can start the NetBackup Administration Console, and you should now see additional days' worth of jobs kept in the Activity Monitor.

NetBackup 7 First Availability (FA) Released

Symantec released the NetBackup 7 First Availability (FA) code yesterday!

More info about the FA code (which is the same as the General Availability or GA code) can be found here:

http://seer.entsupport.symantec.com/docs/336166.htm

There is a link at the bottom to sign up for the FA program, although now that it's already released, I don't know how they are handling new signups. We should see the GA release announcement soon!

Here's some information regarding NetBackup 7:

http://seer.entsupport.symantec.com/docs/334273.htm – Back-Level Media Server and Client Versions that are Supported

New Features of NetBackup 7 – Taken from the FA Release Notes

Symantec NetBackup 7.0 increases the scalability and functionality of NetBackup for its large enterprise customers. The following list highlights some of the major features that comprise NetBackup 7.0:

Integrated deduplication:

■ Client deduplication

■ Media server deduplication 

OpsCenter:

■ Convergence of NOM and VBR

■ Improved ease-of-use

■ Additional base reporting functionality

■ Direct upgrade path

■ Expanded and simplified deployment

OpsCenter Analytics:

■ Turnkey operation to enable advanced, business reporting

■ Improved reporting features

NetBackup for VMware:

■ Fully integrated with VMware vStorage APIs

■ New VM and file recovery from incremental backups

■ New recovery wizard

■ Tighter vCenter integration

Guided application recovery for Oracle databases

New database agents and enhancements to existing agents:

■ Exchange 2010 support

■ Windows 7/2008R2 client support

■ Support for all Enterprise Vault 8.0 databases using the Enterprise Vault Agent, including the Fingerprint database, FSA Reporting database, and Auditing database

Enterprise Vault 8.0 support for granular quiescence

Enterprise Vault Migrator is now included with NetBackup

Enhancements to the NetBackup Enterprise Vault agent

Enhancements to the NetBackup for Hyper-V agent:

■ New file-level incremental backups

■ Two types of off-host backup

■ Hyper-V R2 and Cluster shared volumes (CSV)

■ Flexible VM identification

Improved SharePoint Granular Recovery

This release adds the following proliferation support:

■ HP-UX 11.31 IA BMR master server support for NetBackup 7.0

■ BMR WINPE 2.1 support

■ Java user interface support on RHEL 5 systems

■ NetBackup Support Utility (nbsu) enhancements

A Little About NetBackup Deduplication (From the NetBackup 7 FA Release Notes):

NetBackup now provides integrated data deduplication within NetBackup. You do not have to use the separate PureDisk interface to deduplicate data. You use NetBackup interfaces to install, configure, and manage storage, servers, backups, and deduplication. Symantec packaged PureDisk into a modular engine, and NetBackup uses the PureDisk Deduplication Engine (PDDE) to deduplicate backup data.

You can use NetBackup deduplication at two points in the backup process:

■ On a NetBackup client. The NetBackup client deduplicates its own data and then sends it directly to a storage server, bypassing media server processing. Because only unique data segments are transferred, this option reduces network load.

■ On a media server. NetBackup clients send their backups to a NetBackup media server, which deduplicates the backup data. The media server writes the data to the storage and manages the deduplicated data.

VMware Consolidated Backup and User Account Control

Working with a customer recently, I got to spend some quality time troubleshooting the VMware Consolidated Backup (VCB) framework.  Generally VCB is a very straightforward install and it pretty much "just works" – which made my recent experience very atypical (in my experience, anyway).

Here's the setup – we have a group of ESX servers and a Windows Server 2008 Standard 64-bit system, all attached to the same Fibre Channel SAN, with everything zoned properly.  VCB is installed on the 2K8 system.  The 2K8 OS sees all of the VMFS LUNs which are presented to it.  We are using Win2K8's native MPIO stack.

Running VCB Mounter in SAN mode returns an error that there is no path to the device where the VM is stored.  Running it in NBD mode works great…except that it passes all of the traffic over the network which is not desirable.

Again, diskpart and the Disk Management MMC see all of the LUNs with no issues.

VCB's vcbSanDbg.exe utility, however, sees no storage.  None at all.

We tried various options – newer and older versions of the VCB framework (btw, only the latest 1.5 U1 version of VCB is supported on Win2K8).  We tried various ways of presenting the storage.  We even tried presenting some iSCSI storage, thinking maybe it was an issue with the system's HBAs.

OK, if you've read the subject of this post then you already know the answer.  In case you didn't, here it is – the system has User Account Control (UAC) enabled.   The user we're running the framework as is a local administrator on the proxy, but that's not enough to allow it to properly enumerate the disk devices.   In order for the VCB framework to work, you either have to run it in a command window opened with the "Run as administrator" option, or turn off UAC on the server.  The former can be a little tricky to accomplish if you're wanting to run the framework from inside a backup application, while the latter seems to be the most common approach.

That’s it.  Turn off UAC and reboot the computer.  Now VCB works great.
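
If you prefer to keep UAC enabled, the trick is making sure whatever launches the VCB framework is actually elevated. As a rough illustration only (assuming Python is available on the proxy, and leaving the actual vcbMounter command line as a placeholder), the following sketch uses the Windows IsUserAnAdmin check to bail out early when the process is not elevated:

import ctypes
import sys

def running_elevated():
    """Return True if this process has an elevated (administrator) token."""
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False  # not running on Windows

if not running_elevated():
    # With UAC on, a plain "local administrator" shell is not elevated, which is
    # exactly what keeps vcbMounter/vcbSanDbg.exe from enumerating the SAN LUNs.
    sys.exit("Not elevated: re-run from a 'Run as administrator' prompt, or disable UAC.")

# Placeholder: launch the VCB framework here (e.g. vcbMounter in SAN mode)
# once you know the process is elevated.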

Why is my backup running slow?

Backup systems, while a necessary part of any well-managed IT system, are often a large source of headaches for IT staff. One of the biggest issues with any backup system is poor performance. It is often assumed that performance is related to the efficiency of the backup software or the performance capabilities of the backup hardware. There are, however, many places within the entire backup infrastructure that could create a bottleneck.
Weekly and nightly backups tend to place a much higher load on systems than normal daily activities. For example, a standard file server may access around 5% of its files during the course of a day, but a full backup reads every file on the system. Backups put strain on all components of a system, from the storage through the internal buses to the network. A weakness in any component along the path can cause performance problems. Starting with the backup client itself, let's look at some of the issues which could impact backup performance.

  • File size and file system tuning
  • Small Files

A file system with many small files is generally slower to back up than one with the same amount of data in fewer large files. Generally, systems with home directories and other shares which house user files will take longer to back up than database servers and systems with fewer, larger files. The primary reason for this is the overhead involved in opening and closing a file.
In order to read a file, the operating system must first acquire the proper locks and then access the directory information to ascertain where the data is located on the physical disk. After the data is read, additional processing is required to release those locks and close the file. If the amount of time required to read one block of data is x, then it takes a minimum of 2-3x to perform the open operations and x to perform the close. The best-case scenario, therefore, would require 4x to open, read and close a 1-block file. A 100-block file would require 103x. A file system with four 100-block files will therefore require around 412x to back up. The same amount of data stored in 400 1-block files would require 1600x, or about 4 times as much time.
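
To make that arithmetic concrete, here is a small Python sketch of the same cost model (2x for the open, x per block read, x for the close), comparing the same 400 blocks of data stored as four 100-block files versus 400 one-block files:

def backup_cost(num_files, blocks_per_file, open_cost=2.0, close_cost=1.0):
    """Total backup time, in units of x (the time to read one block of data)."""
    return num_files * (open_cost + blocks_per_file + close_cost)

few_large = backup_cost(num_files=4, blocks_per_file=100)   # 4 * 103x = 412x
many_small = backup_cost(num_files=400, blocks_per_file=1)  # 400 * 4x = 1600x

print(f"4 x 100-block files : {few_large:.0f}x")
print(f"400 x 1-block files : {many_small:.0f}x")
print(f"Penalty for small files: about {many_small / few_large:.1f} times slower")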

So, what is the solution? Multiple strategies exist which can help alleviate the situation.
Synthetic full backups copy only the changed files from the client to the backup server (as with an incremental backup), and a new full is then generated on the backup server from the previous full backup and the subsequent incrementals. A synthetic full strategy requires, at a minimum, multiple tape drives, and disk-based backup is recommended. Adequate server I/O performance is a must as well, since the creation of the synthetic full requires a large number of read and write operations.
Another strategy can be to use storage-level snapshots to present the data to the backup server. The snapshot will relieve the load on the client but will not speed up the overall backup, as the open/close overhead still exists; it has just been moved to a different system. Snapshots can also be problematic if the snapshot is not properly synchronized with the original server. Backup data can be corrupted if open files are included in the snapshot.
Some backup tools allow for block level backups of file systems. This removes the performance hit due to small files but requires a full file system recovery to another server in order to extract a single file.
Continuous Data Protection (CDP) is a method of writing the changes within a file system to another location either in real time or at regular, short intervals. CDP overcomes the small file issue by only copying the changed blocks but requires reasonable bandwidth and may put an additional load on the server.
Moving older, seldom accessed files to a different server via file system archiving tools will speed up the backup process while also reducing required investment in expensive infrastructure for unused data.

  • Fragmentation

A system with a lot of fragmentation can take longer to back up as well. If large files are broken up into small pieces, a read of that file will require multiple seek operations, as opposed to a sequential read if the file has no fragmentation.
File systems with a large amount of fragmentation, which can impact both system and backup performance, should regularly be run through some sort of de-fragmentation process.

  • Client throughput

In some cases a client system may be perfectly suited for the application but not have adequate internal bandwidth for good backup performance. A backup operation requires a large amount of disk read operations which are passed along a system’s internal bus to the network interface card (NIC). Any slow device along the path from the storage itself, through the host bus adapter, the system’s backplane and the NIC can cause a bottleneck.
Short of replacing the client hardware, the solution to this issue is to minimize the effect on the remainder of the backup infrastructure. Strategies such as backup to disk before copying to tape (D2D2T) or multiplexing limit the adverse effects of a slow backup on tape performance and life. In some cases a CDP strategy might be considered as well.

  • Network throughput

Network bandwidth and latency can also affect the performance of a backup system. A very common issue arises when either a client or media server has connected to the network but the automatic configuration has set the connection to a lower speed or incorrect duplex. Using 1Gb/sec hardware has no advantage when the port is incorrectly set to 10Mb/half duplex.
Remote sites can also cause problems, as those sites often utilize much slower links than local connections. Synthetic full backups can alleviate the problem, but may not be ideal if there is a high daily change rate. CDP is often a good fit, as long as the change rate does not exceed the available bandwidth. In many cases a remote media server with deduplicated disk, replicated to the main site, is the most efficient method for remote sites.
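
As a rough illustration of why the negotiated link speed matters so much, the sketch below estimates transfer times for a hypothetical 500GB full backup (a made-up figure for illustration) at a few link speeds, using ~10 bits per byte as a rule of thumb for protocol overhead:

# Rough transfer-time comparison for a hypothetical 500GB full backup.
DATA_GB = 500  # hypothetical backup size, for illustration only

link_speeds_mbps = {
    "1Gb/sec (correctly negotiated)": 1000,
    "100Mb/sec": 100,
    "10Mb/sec half duplex": 10,
}

for name, mbps in link_speeds_mbps.items():
    mb_per_sec = mbps / 10.0          # ~10 bits per byte rule of thumb
    hours = DATA_GB * 1024 / mb_per_sec / 3600
    print(f"{name:>30}: roughly {hours:,.1f} hours")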

  • Media server throughput

Like each client system, the media server can have internal bandwidth issues. When designing a backup solution, be certain that systems used for backup servers have adequate performance characteristics to meet requirements. Often a site will choose an out-of-production server to become the backup system. While such systems may meet the performance needs of a backup server, in many cases obsolete servers are not up to the task.
In some cases a single media server cannot provide adequate throughput to complete the backups within required windows. In these cases multiple media servers are recommended. Most enterprise class backup software allows for sharing of tape and disk media and can automatically load balance between media servers. In such cases multiple media servers allow for both performance and availability advantages.

  • Storage network

When designing the Storage Area Network (SAN), be certain that the link bandwidth matches the requirements of the attached devices. A single LTO-4 tape drive writes data at 120MB/sec. In network bandwidth terms this is equivalent to roughly 1.2Gb/sec. If this tape drive is connected to an older 1Gb SAN, the network will not be able to write at tape speed. In many cases multiple drives are connected to a single Fibre Channel link. This is not an issue if the link allows for at least the total bandwidth of the connected devices. The rule of thumb for modern LTO devices and 4Gb Fibre Channel is to put no more than 4 LTO-3 and no more than 2 LTO-4 drives on a single link.
For disk-based backup media, be certain that the underlying network infrastructure (LAN for network-attached or iSCSI disk, and SAN for Fibre Channel) can support the required bandwidth. If a network-attached disk system can handle 400MB/sec writes but is connected to a single 1Gb/sec LAN, it will only be able to write at up to the network speed, roughly 100MB/sec. In such a case, 4 separate 1Gb connections will be required to meet the disk system's capabilities.
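
The sketch below works through the same back-of-the-envelope numbers, using the rule of thumb above that 1Gb/sec of link speed carries roughly 100MB/sec of backup data:

import math

def links_needed(device_mb_per_sec, link_gbps):
    """Links needed to keep up with a device, at ~100MB/sec of payload per 1Gb/sec."""
    link_mb_per_sec = link_gbps * 100
    return math.ceil(device_mb_per_sec / link_mb_per_sec)

# An LTO-4 drive at 120MB/sec needs ~1.2Gb/sec, so a single 1Gb SAN link cannot keep up:
print("LTO-4 on 1Gb FC           :", links_needed(120, 1.0), "link(s) required")
# Two LTO-4 drives (240MB/sec) fit within one 4Gb Fibre Channel link:
print("2 x LTO-4 on 4Gb FC       :", links_needed(2 * 120, 4.0), "link(s) required")
# A 400MB/sec disk target behind 1Gb/sec LAN connections needs four of them:
print("400MB/sec disk on 1Gb LAN :", links_needed(400, 1.0), "link(s) required")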

  • Storage devices

The final stage of any backup is the write of data to the backup device. While these devices are usually not the source of performance problems, there may be some areas of concern. When analyzing a backup system for performance, be sure to take into account the capabilities of the target devices. A backup system with 1Gb/sec throughput throughout, but only a single LTO-1 target, will never exceed the 15MB/sec (150Mb/sec) bandwidth of that device.

  • Disk

For disk systems the biggest performance issues are the write capability of each individual disk and the number of disks (spindles) within the system. A single SATA disk can write between 75 and 100MB/sec. An array with 10 SATA drives can, therefore, be expected to write between 750MB/sec and 1GB/sec. RAID processing overhead and inline deduplication processing will limit the speed, so expect the real performance to be somewhat lower, as much as 50% less than the raw disk performance depending on the specific system involved. When deciding on a disk subsystem, be sure to evaluate the manufacturer's performance specifications.
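
The same sort of estimate applies to the disk target itself; the sketch below simply multiplies spindle count by per-disk write speed and then applies an assumed overhead factor for RAID and inline deduplication:

def array_write_estimate(spindles, per_disk_mb_sec, overhead_factor=0.5):
    """Return (raw, usable) aggregate write throughput in MB/sec for a disk array."""
    raw = spindles * per_disk_mb_sec
    return raw, raw * overhead_factor

# 10 SATA drives at 75-100MB/sec each, assuming up to 50% lost to RAID/dedup overhead.
for per_disk in (75, 100):
    raw, usable = array_write_estimate(spindles=10, per_disk_mb_sec=per_disk)
    print(f"{per_disk}MB/sec disks: raw {raw:.0f}MB/sec, about {usable:.0f}MB/sec after overhead")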

  • Tape

With modern high speed tape subsystems the biggest problem is not exceeding the device’s capability but not meeting the write speed. A tape device performs best when the tape is passing the heads at full speed. If data is not streamed to the tape device at a sufficient rate to continuously write, the tape will have to stop while the drive’s buffer is filled with enough data to perform the next write. In order to get up to speed, the tape must rewind a small amount and then restart. Such activity is referred to as “shoe shining” and drastically reduces the life of both the tape and the drive.
Techniques such as multiplexing (intermingling backup data from multiple clients) can alleviate the problem but be certain that the last, slow client is not still trickling data to the tape after all other backup jobs have completed. In most cases D2D2T is the best solution, provided that the disk can be read fast enough to meet the tape’s requirements.

  • Conclusion

In most backup systems there are multiple components which cause performance issues. Be certain to investigate each stage of the backup process and analyze all potential causes of poor performance.

Symantec Declares Deduplication Everywhere: Backup Exec, NetBackup, Enterprise Vault

A common question about Symantec's NetBackup and Backup Exec platforms is whether they support deduplication. NetBackup has NetBackup PureDisk functionality, but deduplication has been a little lacking in the Backup Exec world. Well, that will soon change.

According to this Symantec press release, the PureDisk deduplication technology will soon make its way into Backup Exec, Enterprise Vault and other Symantec products:
http://www.symantec.com/about/news/release/article.jsp?prid=20090707_01

Another post that I found regarding deduplication that might be useful to customers is at:
http://symantec.dciginc.com/2008/11/symantec-netbackup-takes-the-d.html

From the Press Release:
Symantec is moving deduplication closer to information sources by integrating the technology into its information management platforms: NetBackup, Backup Exec and Enterprise Vault, and centrally managing native deduplication as well as third-party deduplication appliances.

Symantec is delivering on its deduplication strategy with a multi-phased approach:

  • Integrated deduplication is available today in NetBackup and Enterprise Vault, offering deduplicated archiving, deduplicated backup storage and global deduplicated remote office backup.
  • NetBackup currently offers integrated, centralized management for third-party deduplicated storage from Data Domain, Quantum, Falconstor and EMC through the OpenStorage API.
  • NetBackup PureDisk 6.6, scheduled to be available later this year, will improve storage efficiency by adding enhanced deduplication for backups of virtual server images.
  • Backup Exec 2010, scheduled to be available later this year, will integrate deduplication (using NetBackup PureDisk technology) into both backup clients and Backup Exec media server.  Backup Exec will also add the OpenStorage API to manage third-party deduplication appliances.
  • NetBackup 7, scheduled to be available in 2010, will integrate deduplication into the backup client and NetBackup media server.