VMware Backups using NetBackup 7

Configuring NetBackup 7 for VMware backup (using vStorage API)

Configure the VMware backup host in NetBackup


Right-click the master server and select “Properties”.


Add VMware Backup Host


Configure Credentials for vCenter


Create the backup policy for Virtual Machine Backup

The parameters shown are not the defaults; they reflect a configuration that proved optimal in a test environment. Your mileage may vary.
The following parameters have been changed from the defaults:
Client Name Selection determines how virtual machines are identified to NetBackup. The VM Display Name option matches the VM name as it appears in vCenter.
Transfer type determines how VM data is transferred to the NetBackup backup host. The san option uses the Fibre Channel or iSCSI SAN (note: LUNs containing the VMware datastores must be presented to the backup host; a quick way to verify this is shown below). The nbd option falls back to a network copy should the san option fail.
Existing snapshot handling, when set to Remove NBU, removes stray NetBackup snapshots from VMs if encountered but ignores all other snapshots.
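Since the san transfer type depends on the backup host actually seeing the datastore LUNs, a quick sanity check on a Windows backup host is to list the visible disks and confirm the VMFS LUNs appear:

diskpart
list disk

If the VMFS LUNs do not show up in the list, san-mode backups will fail and jobs will fall back to nbd (when that fallback is configured).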

Configure the remaining backup policy options (backup windows, schedules, etc.).


If options need to be changed (’cuz mine didn’t work in your environment ;) ), change them in the policy’s Attributes window.


What’s New in Symantec NetBackup 7

NetBackup 7 is now available for all customers under support/maintenance. This lesson shares some useful information on what’s new in NetBackup 7, as presented in the Symantec webinar on February 2nd, 2010. The notes under each slide were taken from the webinar presentation, including questions and answers from that session.

Presented by Symantec, Posted by Lewan & Associates!


NetBackup 7.0 GA code is also now available on FileConnect.

Reminder: The FA and GA bits are exactly the same.

Upgrade Portal http://entsupport.symantec.com/docs/332137
7.0 Compatibility Lists http://entsupport.symantec.com/docs/303344
7.0 Documentation Links http://entsupport.symantec.com/docs/341274
7.0 Download Information http://entsupport.symantec.com/docs/341272
7.0 Late Breaking News http://entsupport.symantec.com/docs/341271

I encourage you to read “Top Seven Reasons to Move to Symantec NetBackup™ 7”
http://entsupport.symantec.com/docs/341275

NetBackup 7 Main Areas


Deduplication


PureDisk technology is now built into the media server. You can point it at commodity disk, which becomes the dedup target.
Client-level dedup is built into the NetBackup 7 client; push the upgrade out to clients in order to use it.
For customers that already use PureDisk, NBU 7 still works with it through backwards compatibility.
You can dedupe in NBU 7 and also send the data to a PureDisk storage pool if that’s needed.
New limits: 32TB of deduplicated data behind each media server (PureDisk pools were 16TB).
Deduplication costs an additional license, priced per front-end capacity (how much data you are protecting).


Virtual Machine Protection


Improved recovery process through guided VM recovery: recover a single file from a block-level backup, with no need to stage data for recovery. Integration with the vStorage API allows NBU to identify unused blocks, letting it store roughly 30% less data (an option that must be enabled on the backup).

Virtual Machine Backups


When using the vStorage API, no backup proxy server is needed. NetBackup can perform block-level incremental backups of VMs while still enabling file-level restores.
The vStorage API can also back up earlier ESX 3.5 versions, without a proxy server.

Virtual Machine Restores & Recovery


You can recover an entire VM from “Friday” in one step; there is no need to rebuild each incremental in order to restore.

Optimized Replication


Example of a Storage Lifecycle Policy: copying data (duplicating or replicating) with different retention times. You can also use an OST-aware device to store and replicate data.

NetBackup RealTime


Application-consistent recovery points, from multiple locations. New features in NBU 7.
Customers can also use NBU RealTime to protect/replicate just the NetBackup catalog to another location at no charge.

NetBackup OpsCenter


Optional software that replaces Veritas Backup Reporter (VBR) and NetBackup Operations Manager (NOM).
There is no additional charge for OpsCenter; it is included with the NetBackup code.
There is a licensed option for advanced analytics, business reporting, and third-party backup application reporting, but that licensed feature is optional.

OpsCenter and OpsCenter Analytics Comparison


Additional Enhancements


Simplified licensing for virtual environments: licensed per physical host, regardless of the operating systems running on that host.
Terabyte-based licensing now also includes clients. You license per front-end data; it doesn’t matter how many media servers, tape devices, clients, etc. you have.

Top 7 Reasons to Upgrade


NetBackup 7 is built on the NetBackup 6.5.4 code base.
The First Availability program had over 600 customers and partners; Lewan was one of them.

Lewan & Associates – Certified Symantec Platinum Partner!


Please post any questions that you might have regarding NetBackup 7 on this thread! We’ll do our best to answer them or find an answer for you!
If you’re interested in viewing a NetBackup 7 demo or would like to discuss your plans on moving to NetBackup 7 with our Professional Services Team, please contact your Lewan IT Solutions Sales Representative!

Symantec Backup Exec 2010 and NetBackup 7 Super Post

Here’s a great post which discusses pricing for Backup Exec 2010 and NetBackup 7.

http://www.infostor.com/index/articles/display/4701464753/articles/infostor/storage-management/data-de-duplication/2010/january-2010/symantec-integrates.html

Note: Backup Exec 2010 trialware is out now. The NetBackup 7 First Availability (FA) program is available now, and the FA code is the same as the General Availability (GA) code.
NetBackup 7 is now on the Symantec site, with the announcement that General Availability (GA) is February 1st!
http://www.symantec.com/business/products/family.jsp?familyid=netbackup&inid=us_ghp_promo_hero3_netbackup

Backup Exec 2010 is also available February 1st, and it’s on the Symantec site as well:
http://www.symantec.com/business/products/family.jsp?familyid=backupexec&inid=us_ghp_promo_hero1_backupexec

Update – Ready to start upgrading or testing Backup Exec 2010? Symantec has released trialware as of yesterday so you can start before the actual release on 2/1/10.
http://www.symantec.com/connect/blogs/are-you-ready-upgrade
Symantec’s blog post is copied below for your reference, as it has some additional links in it:

Good news! Backup Exec 2010 has launched today, with the trialware available for download from February 1, 2010. The new product encompasses great new features like deduplication and unified archiving. To read the complete list of new features and key benefits, and to download the trialware, please visit the Backup Exec website.

Have questions on how to upgrade?  Please refer to this document that walks you through the upgrade process.

Getting back to Connect, we have created some great information in regards to the new product launch. Here is a video that explains the product’s deduplication feature, and check this Group page to find all the BE 2010 and BESR 2010 information in one place. Don’t forget to add it to your favourites.

As you work through your upgrade, for any questions, comments, suggestions, and information you share with the community regarding the new product, please make sure to tag the content with the appropriate Version (2010), the Topics (Installation, Upgrade, etc.) and the public group “BE 2010 and BESR 2010 Launch”. This helps us track common issues and escalate them to support as needed. (end of Symantec blog article)

Here are some great screenshots of some slides discussing some of the new features and enhancements to Backup Exec 2010.

How-To: Symantec NetBackup – Increase the Number of Days in Activity Monitor Job History

By default (in NetBackup 6.5.x), NetBackup keeps only 78 hours (3.25 days) of job history in the Activity Monitor. For some environments this is not enough, so this lesson shows how to increase the setting so that more days are kept in the Activity Monitor.

Query the Current Setting


To see what the current setting is, you can use the command:
bpgetconfig -M masterserver KEEP_JOBS_SUCCESSFUL_HOURS
This command is located under: <InstallDir>\bin\admincmd
The default setting is 78 hours (3.25 days).
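For example, querying a master server named masterserver (substitute your own master server’s name) might return something like this; the exact output format can vary slightly between NetBackup versions:

bpgetconfig -M masterserver KEEP_JOBS_SUCCESSFUL_HOURS
KEEP_JOBS_SUCCESSFUL_HOURS = 78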

Changing/Increasing the Setting


Make a simple text file on the NetBackup master server and put the following 2 lines in it:
KEEP_JOBS_SUCCESSFUL_HOURS = new-number
KEEP_JOBS_HOURS = new-number
In place of "new-number", enter the number of hours that you want to keep in the Activity Monitor.
In my example, I’ll use 128, which equals a little over 5 days.
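Continuing that example, the text file would contain exactly these two lines:

KEEP_JOBS_SUCCESSFUL_HOURS = 128
KEEP_JOBS_HOURS = 128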

Update Setting on Master Server


Now, to update the setting on the Master server, run the command:
bpsetconfig -h masterserver mytxtfilename.txt
Replace masterserver with your master server name.
Replace mytxtfilename.txt with the location and the name of the text file that you created in the previous step.
In my example, the command is:
bpsetconfig -h masterserver d:\myfile.txt

Restart NetBackup Services


In order to apply the changes, you’ll need to restart the NetBackup services. Make sure no backup or restore jobs are running before you restart services.

To do this on a Windows master, you can use the following commands to achieve this:
bpdown
bpup
These scripts are located under the <installpath>\bin directory.

To do this on a Linux master, you can use the following commands to achieve this:
service netbackup stop
service netbackup start

To do this on a Solaris master, you can use the following commands to achieve this:
netbackup stop
netbackup start

After the services are restarted, you can start the NetBackup Administration Console and you should now see additional days’ worth of jobs kept in the Activity Monitor.
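To confirm the change took effect, you can re-run the query from the first step and check that it now returns your new value (128 in my example):

bpgetconfig -M masterserver KEEP_JOBS_SUCCESSFUL_HOURS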

NetBackup 7 First Availability (FA) Released

Symantec released the NetBackup 7 First Availability (FA) code yesterday!

More info about the FA code (which is the same as the General Availability or GA code) can be found here:

http://seer.entsupport.symantec.com/docs/336166.htm

There is a link at the bottom to sign up for the FA program, although now that it’s already released, I don’t know how they are handling new signups. We should see the GA release announcement soon!

Here’s some information regarding NetBackup 7:

http://seer.entsupport.symantec.com/docs/334273.htm = Back-Level Media Server and Client Versions that are Supported

New Features of NetBackup 7 – Taken from the FA Release Notes

Symantec NetBackup 7 increases the scalability and functionality of NetBackup for its large enterprise customers. The following list highlights some of the major features that comprise NetBackup 7.0:

Integrated deduplication:

■ Client deduplication

■ Media server deduplication 

OpsCenter:

■ Convergence of NOM and VBR

■ Improved ease-of-use

■ Additional base reporting functionality

■ Direct upgrade path

■ Expanded and simplified deployment

OpsCenter Analytics:

■ Turnkey operation to enable advanced, business reporting

■ Improved reporting features

NetBackup for VMware:

■ Fully integrated with VMware vStorage APIs

■ New VM and file recovery from incremental backups

■ New recovery wizard

■ Tighter vCenter integration

Guided application recovery for Oracle databases

New database agents and enhancements to existing agents:

■ Exchange 2010 support

■ Windows 7/2008R2 client support

■ Support for all Enterprise Vault 8.0 databases using the Enterprise Vault agent, including the Fingerprint database, FSA Reporting database, and Auditing database

Enterprise Vault 8.0 support for granular quiescence

Enterprise Vault Migrator is now included with NetBackup

Enhancements to the NetBackup Enterprise Vault agent

Enhancements to the NetBackup for Hyper-V agent:

■ New file-level incremental backups

■ Two types of off-host backup

■ Hyper-V R2 and Cluster shared volumes (CSV)

■ Flexible VM identification

Improved SharePoint Granular Recovery

This release adds the following proliferation support:

■ HP-UX 11.31 IA BMR master server support for NetBackup 7.0

■ BMR WINPE 2.1 support

■ Java user interface support on RHEL 5 systems

■ NetBackup Support Utility (nbsu) enhancements

A Little About NetBackup Deduplication (From the NetBackup 7 FA Release Notes):

NetBackup now provides integrated data deduplication within NetBackup. You do not have to use the separate PureDisk interface to deduplicate data. You use NetBackup interfaces to install, configure, and manage storage, servers, backups, and deduplication. Symantec packaged PureDisk into a modular engine, and NetBackup uses the PureDisk Deduplication Engine (PDDE) to deduplicate backup data.

You can use NetBackup deduplication at two points in the backup process:

■ On a NetBackup client. The NetBackup client deduplicates its own data and then sends it directly to a storage server, bypassing media server processing. Because only unique data segments are transferred, this option reduces network load.

■ On a media server. NetBackup clients send their backups to a NetBackup media server, which deduplicates the backup data. The media server writes the data to the storage and manages the deduplicated data.

VMware Consolidated Backup and User Account Control

Working with a customer recently, I got to spend some quality time troubleshooting the VMware Consolidated Backup (VCB) framework.  Generally VCB is a very straightforward install and it pretty much “just works” – which made my recent experience very atypical (in my experience, anyway).

Here’s the setup: we have a group of ESX servers and a Windows Server 2008 Standard 64-bit system, all attached to the same Fibre Channel SAN, with everything zoned properly.  VCB is installed on the 2K8 system.  The 2K8 OS sees all of the VMFS LUNs which are presented to it.  We are using Win2K8’s native MPIO stack.

Running vcbMounter in SAN mode returns an error that there is no path to the device where the VM is stored.  Running it in NBD mode works great…except that it passes all of the traffic over the network, which is not desirable.

Again, diskpart and the Disk Management MMC see all of the LUNs with no issues.

VCB’s vcbSanDbg.exe utility, however, sees no storage.  None at all.

We tried various options – newer and older versions of the VCB framework (by the way, only the latest 1.5 U1 version of VCB is supported on Win2K8).  We tried various ways of presenting the storage.  We even tried presenting up some iSCSI storage, thinking maybe it was an issue with the system’s HBAs.

Ok, if you’ve read the subject of this post then you already know the answer.  In case you didn’t, here it is – the system has User Account Control (UAC) enabled.   The user we’re running the framework as is a local administrator on the proxy, but that’s not enough to allow it to properly enumerate the disk devices.   In order for the VCB framework to work you either have to run it in a command window with the “run as administrator” option, or turn off UAC on the server.  The former can be a little tricky to accomplish if you’re wanting to run the framework from inside a backup application, while the latter seems to be the most common approach.
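For reference, here is roughly what the elevated-prompt workaround looks like (the vCenter name, credentials, VM name, and paths below are placeholders, and exact flags can vary between VCB versions, so treat this as a sketch rather than a recipe). From a command window opened with “run as administrator”:

cd "C:\Program Files\VMware\VMware Consolidated Backup Framework"
vcbSanDbg.exe
vcbMounter.exe -h vcenter.example.com -u backupuser -p secret -a ipaddr:myvm.example.com -t fullvm -m san -r D:\mnt\myvm

With the elevated prompt, vcbSanDbg should now enumerate the SAN storage, and vcbMounter should succeed in SAN mode.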

That’s it.  Turn off UAC and reboot the computer.  Now VCB works great.

Why is my backup running slow?

Backup systems, while a necessary part of any well-managed IT environment, are often a large source of headaches for IT staff. One of the biggest issues with any backup system is poor performance. It is often assumed that performance is related to the efficiency of the backup software or the performance capabilities of the backup hardware. There are, however, many places within the entire backup infrastructure that could create a bottleneck.
Weekly and nightly backups tend to place a much higher load on systems than normal daily activities. For example, a standard file server may access around 5% of its files during the course of a day, but a full backup reads every file on the system. Backups put strain on all components of a system, from the storage through the internal buses to the network. A weakness in any component along the path can cause performance problems. Starting with the backup client itself, let’s look at some of the issues which could impact backup performance.

  • File size and file system tuning
  • Small Files

A file system with many small files is generally slower to back up than one with the same amount of data in fewer large files. Generally, systems with home directories and other shares which house user files will take longer to back up than database servers and other systems with fewer, larger files. The primary reason for this is the overhead involved in opening and closing a file.
In order to read a file, the operating system must first acquire the proper locks, then access the directory information to ascertain where the data is located on the physical disk. After the data is read, additional processing is required to release those locks and close the file. If the amount of time required to read one block of data is x, then it takes a minimum of 2-3x to perform the open operations and x to perform the close. The best-case scenario, therefore, would require 4x to open, read, and close a one-block file. A 100-block file would require 103x. A file system with four 100-block files will therefore require around 412x to back up. The same amount of data stored in 400 one-block files would require 1600x, or about four times as much time.
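As a quick sanity check on that arithmetic (a toy model only: 2x for the open, x per block read, and x for the close), the shell agrees:

echo $(( 4 * (2 + 100 + 1) ))     # four 100-block files: 412
echo $(( 400 * (2 + 1 + 1) ))     # four hundred one-block files: 1600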

So, what is the solution? Multiple strategies exist which can help alleviate the situation.
The use of synthetic full backups copies only the changed files from the client to the backup server (as with an incremental backup); a new full is then generated on the backup server from the previous full backup and the subsequent incrementals. A synthetic full strategy requires multiple tape drives at a minimum, and disk-based backup is recommended. Adequate server I/O performance is a must as well, since the creation of the synthetic full requires a large number of read and write operations.
Another strategy is to use storage-level snapshots to present the data to the backup server. The snapshot will relieve the load on the client but will not speed up the overall backup, as the open/close overhead still exists; it has just been moved to a different system. Snapshots can also be problematic if the snapshot is not properly synchronized with the original server. Backup data can be corrupted if open files are included in the snapshot.
Some backup tools allow for block-level backups of file systems. This removes the performance hit due to small files, but requires a full file system recovery to another server in order to extract a single file.
Continuous Data Protection (CDP) is a method of writing the changes within a file system to another location, either in real time or at regular, short intervals. CDP overcomes the small-file issue by copying only the changed blocks, but requires reasonable bandwidth and may put an additional load on the server.
Moving older, seldom-accessed files to a different server via file system archiving tools will speed up the backup process while also reducing the required investment in expensive infrastructure for unused data.

  • Fragmentation

A system with a lot of fragmentation can take longer to back up as well. If large files are broken up into small pieces, a read of that file will require multiple seek operations, as opposed to a single sequential operation if the file has no fragmentation.
File systems with a large amount of fragmentation should be defragmented regularly, since fragmentation can impact both system and backup performance.

  • Client throughput

In some cases a client system may be perfectly suited for its application but not have adequate internal bandwidth for good backup performance. A backup operation generates a large number of disk read operations, which are passed along the system’s internal bus to the network interface card (NIC). Any slow device along the path, from the storage itself through the host bus adapter and the system’s backplane to the NIC, can cause a bottleneck.
Short of replacing the client hardware the solution to this issue is to minimize the effect on the remainder of the backup infrastructure. Strategies such as backup to disk before copying to tape (D2D2T) or multiplexing limit the adverse effects of a slow backup on tape performance and life. In some cases a CDP strategy might be considered as well.

  • Network throughput

Network bandwidth and latency can also affect the performance of a backup system. A very common issue arises when either a client or media server has connected to the network but auto-negotiation has set the connection to a lower speed or incorrect duplex. Using 1Gb/sec hardware has no advantage when the port is incorrectly set to 10Mb/half duplex (a quick way to check for this is shown below).
Remote sites can also cause problems, as those sites often utilize much slower links than local connections. Synthetic full backups can alleviate the problem but may not be ideal if there is a high daily change rate. CDP is often a good fit, as long as the change rate does not exceed the available bandwidth. In many cases a remote media server with deduplicated disk replicated to the main site is the most efficient method for remote sites.
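On a Linux client or media server, a speed/duplex mismatch like the one described above is quick to spot with the standard ethtool utility (eth0 below is a placeholder for your interface name):

ethtool eth0 | grep -E 'Speed|Duplex'
ethtool -s eth0 speed 1000 duplex full autoneg off

The first command shows what the port actually negotiated; the second forces 1Gb/full duplex if auto-negotiation keeps getting it wrong. If you force a setting, make the matching change on the switch port as well.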

  • Media server throughput

Like each client system, the media server can have internal bandwidth issues. When designing a backup solution, be certain that systems used for backup servers have adequate performance characteristics to meet requirements. Often a site will choose an out-of-production server to become the backup system. While such a system may meet the performance needs of a backup server, in many cases obsolete servers are not up to the task.
In some cases a single media server cannot provide adequate throughput to complete the backups within required windows. In these cases multiple media servers are recommended. Most enterprise class backup software allows for sharing of tape and disk media and can automatically load balance between media servers. In such cases multiple media servers allow for both performance and availability advantages.

  • Storage network

When designing the Storage Area Network (SAN), be certain that the link bandwidth matches the requirements of attached devices. A single LTO-4 tape drive writes data at 120MB/sec. In network bandwidth terms this is equivalent to roughly 1.2Gb/sec (Fibre Channel’s 8b/10b encoding puts roughly 10 bits on the wire per byte of data). If this tape drive is connected to an older 1Gb SAN, the network will not be able to write at tape speeds. In many cases multiple drives are connected to a single Fibre Channel link. This is not an issue if the link allows for at least the total bandwidth of the connected devices. The rule of thumb for modern LTO devices and 4Gb Fibre Channel is to put no more than 4 LTO-3 or 2 LTO-4 drives on a single link.
For disk-based backup media, be certain that the underlying network infrastructure (LAN for network-attached or iSCSI disk, SAN for Fibre Channel) can support the required bandwidth. If a network-attached disk system can handle 400MB/sec writes but is connected to a single 1Gb/sec LAN, it will only be able to write at up to the network speed, roughly 100MB/sec. In such a case, 4 separate 1Gb connections will be required to meet the disk system’s capabilities.

  • Storage devices

The final stage of any backup is the write of data to the backup device. While these devices are usually not the source of performance problems, there may be some areas of concern. When analyzing a backup system for performance, be sure to take into account the capabilities of the target devices. A backup system with 1Gb throughput throughout, but with a single LTO-1 target, will never exceed the 15MB/sec (150Mb/sec) bandwidth of that device.

  • Disk

For disk systems, the biggest performance issues are the write capability of each individual disk and the number of disks (spindles) within the system. A single SATA disk can write between 75 and 100MB/sec. An array with 10 SATA drives can therefore be expected to write between 750MB/sec and 1GB/sec. RAID processing overhead and inline deduplication processing will limit the speed, so expect the real performance to be somewhat lower, as much as 50% less than the raw disk performance depending on the specific system involved (see the quick arithmetic below). When deciding on a disk subsystem, be sure to evaluate the manufacturer’s performance specifications.
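To make that concrete, using the same back-of-the-envelope style as the small-files discussion above:

echo $(( 10 * 75 ))           # 10 SATA spindles, low end: 750 MB/sec raw
echo $(( 10 * 100 ))          # 10 SATA spindles, high end: 1000 MB/sec raw
echo $(( (10 * 75) / 2 ))     # after up to ~50% RAID/dedup overhead: ~375 MB/sec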

  • Tape

With modern high-speed tape subsystems the biggest problem is not exceeding the device’s capability but failing to meet its minimum write speed. A tape device performs best when the tape is passing the heads at full speed. If data is not streamed to the tape device at a sufficient rate to write continuously, the tape will have to stop while the drive’s buffer is filled with enough data to perform the next write. In order to get back up to speed, the tape must rewind a small amount and then restart. Such activity is referred to as “shoe shining” and drastically reduces the life of both the tape and the drive.
Techniques such as multiplexing (intermingling backup data from multiple clients) can alleviate the problem but be certain that the last, slow client is not still trickling data to the tape after all other backup jobs have completed. In most cases D2D2T is the best solution, provided that the disk can be read fast enough to meet the tape’s requirements.

  • Conclusion

In most backup systems there are multiple components which cause performance issues. Be certain to investigate each stage of the backup process and analyze all potential causes of poor performance.