Media Agent Networking

I get a lot of questions about the best way to configure networking for backup media agents (or media servers) to maximize throughput.  I thought a discussion of how the networking (and link aggregation) works would help shed some light.

Client to Media Agent:
In general we consider the media agents to be the ‘sink’ for data flows during backup from clients.  This data flow typically originates from many clients and is destined for a single media agent.  Environments with multiple media agents can be thought of as multiple single-agent configs.

The nature of this is that we have many flows from many sources destined for a single sink.  If we want to utilize multiple network interfaces on the sink (media agent), it is important that the switch to which it is attached be able to distribute the data across those interfaces.  By definition, then, we must be in a switch-assisted link aggregation scenario, meaning that the switch must be configured to use LACP or a similar protocol.  The server must also be configured to use the same method of teaming.

Why can’t we use adaptive load balancing (ALB) or other non-switch-assisted methods?  The issue is that the decision of which member of a link aggregation group a packet is transmitted over is made by the device transmitting the packet.  In the scenario above the bulk of the data is being transmitted from the switch to the media agent, therefore the switch must be configured to support spreading the traffic across multiple physical ports.  ALB and other non-switch-assisted aggregation methods will not allow the switch to do this, and will therefore result in the switch using only one member of the aggregation group to send data.  The net result being that total throughput is restricted to that of a single link.

So, if you want to bond multiple 1GbE interfaces to support traffic from your clients to the media agent, the use of LACP or similar switch-assisted link aggregation is critical.

Media Agent to IP Storage:
Now from the media agent to storage, we consider that most traffic will originate at the media agent and be destined for the storage.  There’s really not much in the way of many-to-one or one-to-many relationships here; it’s all one-to-one.  The first question is always “will LACP or ALB help?”  The answer is probably no.  Why is that?

First, understand that the media agent is typically connected to a switch, and the storage is typically attached to the same or another switch.  Therefore we have two hops to address: MA to switch, and switch to storage.

ALB does a very nice job of spreading transmitted packets from the MA to the switch across multiple physical ports.  Unfortunately all of these packets are destined for the same IP and MAC address (the storage).  So while the packets are received by the switch on multiple physical ports, they are all going to the same destination and thus leave the switch on the same port.  If the MA is attached via 1GbE and the storage via 10GbE this may be fine.  If it’s 1GbE down to the storage, then the bandwidth will be limited to that single link.

But didn’t I just say in the client section that LACP (switch-assisted aggregation) would address this?  Yes and no.  LACP can spread traffic across multiple links even if it has the same destination, but only if it comes from multiple sources.  The reason is that LACP uses either an IP- or MAC-based hash algorithm to decide which member of an aggregation group a packet should be transmitted on.  That means that all packets originating from MAC address X and going to MAC address Y will always go down the same group member.  The same is true for source IP X and destination IP Y.  This means that while LACP may help balance traffic from multiple hosts going to the same storage, it can’t solve the problem of a single host going to a single storage target.
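To see why the hash pins a single flow to a single link, here’s a toy sketch in Python.  The hash itself is made up (real switches use vendor-specific algorithms over the MAC/IP headers, and the addresses below are invented), but the behavior is the same:

    import hashlib

    def lag_member(src: str, dst: str, num_links: int) -> int:
        """Toy hash-based member selection: the same src/dst pair always
        yields the same link, which is the LACP behavior described above."""
        return int(hashlib.md5(f"{src}->{dst}".encode()).hexdigest(), 16) % num_links

    # One media agent talking to one storage target: every packet hashes
    # to the same member, so only one link in the group carries traffic.
    print(lag_member("10.0.0.10", "10.0.0.50", 4))
    print(lag_member("10.0.0.10", "10.0.0.50", 4))  # always the same link

    # Many clients talking to one media agent: the different sources
    # spread across the group members.
    for client in ("10.0.0.21", "10.0.0.22", "10.0.0.23", "10.0.0.24"):
        print(client, "->", lag_member(client, "10.0.0.10", 4))

Notice there’s nothing the switch can do for the single-pair case: the hash input never changes, so the chosen link never changes.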

By the way, this is a big part of the reason we don’t see many iSCSI storage vendors using a single IP for their arrays.  By giving the arrays multiple IPs it becomes possible to spread the network traffic across multiple physical switch ports and network ports on the array.  Combine that with multiple IPs on the media agent host and multi-path I/O (MPIO) software, and now the host can talk to the array across all combinations of source and destination IPs (and thus physical ports) and fully utilize all the available bandwidth.
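As a concrete (if contrived) illustration, reusing the toy hash from the sketch above: give the media agent two IPs and the array two IPs, and MPIO now has four distinct source/destination combinations to work with (all addresses here are made up):

    import hashlib
    from itertools import product

    def lag_member(src: str, dst: str, num_links: int) -> int:
        # Same toy hash as the earlier sketch.
        return int(hashlib.md5(f"{src}->{dst}".encode()).hexdigest(), 16) % num_links

    host_ips  = ("10.0.0.10", "10.0.0.11")   # media agent interfaces
    array_ips = ("10.0.0.50", "10.0.0.51")   # iSCSI portal addresses

    # Each (source, destination) pair is a separate hash input, so the
    # four MPIO paths can land on different members of the aggregation
    # group -- something a single src/dst pair can never do.
    for src, dst in product(host_ips, array_ips):
        print(f"{src} -> {dst}: link {lag_member(src, dst, 4)}")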

MPIO works great for iSCSI block storage.  What about CIFS (or NFS) based storage?  Unfortunately MPIO sits down low in the storage stack, and isn’t part of the network file-sharing (requester) stack used by CIFS and NFS, which means that MPIO can’t help.  Worse, with the NFS and CIFS protocols the target storage is always defined by a single IP address or DNS name, so having multiple IPs on the array in and of itself doesn’t help either.

So what can we do for CIFS (or NFS)?  Well, if you create multiple share points (shares) on the storage and bind each to a separate IP address, you can create a situation where each share has isolated bandwidth.  By accessing the shares in parallel you can aggregate that bandwidth (between the switch and the storage).  To aggregate between the host and the switch you must force traffic to originate from specific IPs, or use LACP to spread the traffic across multiple host interfaces.  You could simulate MPIO-type behavior by using routing tables to map each host IP to an array IP one-to-one.  It can be done, but there is no ‘easy’ button.
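If you do go down that road, the access pattern looks something like this sketch (the share paths and filenames are hypothetical; the point is simply that each share sits behind its own array IP and the copies run in parallel):

    import shutil, threading

    # Hypothetical shares, each bound to a different IP on the array, so
    # each stream can traverse a different physical port on the switch.
    jobs = [("backupset1.dat", r"\\192.168.10.51\backups1"),
            ("backupset2.dat", r"\\192.168.10.52\backups2")]

    # Accessing the shares in parallel aggregates the per-share bandwidth.
    threads = [threading.Thread(target=shutil.copy, args=job) for job in jobs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()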

So as we wrap this up, what do I recommend for media agent networking and IP storage?
On the front end – aggregate interfaces with LACP.
On the back end – use iSCSI and MPIO rather than CIFS/NFS, or use 10GbE if you want/need CIFS/NFS.

Sizing a Tape Library

So it seems like an easy question – how do I decide how large a tape library I need? It’s one I get a lot so I thought I’d devote a few minutes to the topic.

Let’s assume I have 30TB of data in my datacenter, and I’m going to do a full backup once per week.  We’re also going to assume that my daily incrementals are 5% of the size of my full (1.5TB) and that I write all of those to tape as well.

Now, let’s assume (like many of my customers, and my own shop years back) that I want to ship tapes offsite twice a week, say on Tuesday and Friday, and that full backups are all staged to run over the weekend.  I do not want to keep any tapes in the library that contain data when I ship.

Based on this we can compute the amount of data I need to ship on Tuesday and Friday; and how fast I need to write data to tape in order to be ready to ship it.

On Tuesday I want to ship 1 full backup (30TB) + 3 incrementals (1.5TB x 3); a total of 34.5TB of data.   On Friday I’m shipping just 3 incrementals (4.5TB).

For Tuesday’s shipment we have a window which probably starts Saturday morning (say 6am) and runs till Tuesday morning (say 6am) to complete the tape writing.  This window is 72 hours in length, and will thus require a rate of just under 500GB/hour to complete the tape out.

For Friday’s shipment we have a window which could start at say 6pm Tuesday and needs to complete by 6am Friday.  This window is 60 hours, and represents a minimum throughput of 75GB/hour.
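If you want to sanity-check those numbers, the arithmetic is simple enough to script (a quick sketch using the figures from this scenario):

    def required_rate_gb_per_hour(data_tb: float, window_hours: float) -> float:
        """Minimum tape-out rate needed to move data_tb within the window."""
        return data_tb * 1000 / window_hours

    # Tuesday: 30TB full + 3 x 1.5TB incrementals in a 72-hour window.
    print(required_rate_gb_per_hour(34.5, 72))   # ~479 GB/hour
    # Friday: 3 x 1.5TB incrementals in a 60-hour window.
    print(required_rate_gb_per_hour(4.5, 60))    # 75 GB/hour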

Based on this we’re probably not too concerned about the Friday shipment, and we’ll concentrate on making things happen for Tuesday.

First – how many drives do I need?

Well – do I need backward compatibility with older tape?  LTO can read back two generations and write back one.  So an LTO6 drive can read LTO4 and write LTO5.  If I need to read my old LTO2 tapes, then I need to restrict myself to deploying LTO4 drives.

LTO6 can write data at a rate of 560GB/hour before compression, LTO5 writes at 490GB/hour, and LTO4 writes at 420GB/hour.  If you’re working with something older than that, you’ll have to do the math yourself.

(Amount of Data / Backup Window Size) / Tape Drive Throughput = Number of Drives (round up)

So for the 500GB/hour target I’d need 2 LTO4 drives, 2 LTO5 drives, or 1 LTO6 drive.  I might want to consider adding a drive or two as ‘spares’ in case one breaks or is in use for (gasp) data restore operations.
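For those who’d rather script it, here’s the same drive-count math (using the native throughput figures quoted above):

    from math import ceil

    def drives_needed(target_gb_per_hour: float, drive_gb_per_hour: float) -> int:
        """Drives required to sustain the target rate (rounded up)."""
        return ceil(target_gb_per_hour / drive_gb_per_hour)

    for gen, rate in (("LTO4", 420), ("LTO5", 490), ("LTO6", 560)):
        print(gen, drives_needed(500, rate))   # LTO4: 2, LTO5: 2, LTO6: 1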

Next up – how many slots do I need?

LTO6 holds 2.5TB on a tape, LTO5 holds 1.5TB, and LTO4 holds 0.8TB.  Again, if your drives are older than that you’ll need to look up the numbers yourself.  Manufacturers will also quote compressed capacities, which are roughly 2x native.  I find that while data will compress some, 1.5x is probably more realistic, and I’m going to use that assumption in my calculations below.

(Size of Data / Compression Factor) / Tape Capacity = Number of Tapes (round up)

34.5TB is going to occupy 10 LTO6 tapes, 16 LTO5 tapes, or 29 LTO4 tapes.
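The same approach works for the tape count (1.5x compression assumed, as above):

    from math import ceil

    def tapes_needed(data_tb: float, tape_tb: float, compression: float = 1.5) -> int:
        """Tapes required for data_tb at the given compression factor (rounded up)."""
        return ceil(data_tb / compression / tape_tb)

    for gen, cap in (("LTO4", 0.8), ("LTO5", 1.5), ("LTO6", 2.5)):
        print(gen, tapes_needed(34.5, cap))   # LTO4: 29, LTO5: 16, LTO6: 10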

At this point we know enough to size the library.

For LTO6 I need a library with 10 slots, and 1 drive.

For LTO5 I need a library with 16 slots and 2 drives.

For LTO4 I need a library with 29 slots and 2 drives.

Personally I like to add at least 10-15% to my slots and (as noted earlier) an extra drive or two.  This provides headroom for growth and some inefficient tape use.

Based on this it seems that a library with 12-14 slots and 2 LTO6 drives would work well.  Maybe one with 24 slots and 3 LTO5 drives.

A last factor to consider (not really sizing) is how tapes will be loaded into and unloaded from the library.  If I’m going to pull 30 tapes at a time, I really don’t want a library with only a single I/E slot.  Ideally you want an I/E slot count equal to the number of tapes expected to be unloaded at a time, but at a minimum one that helps to minimize the number of “return trips” to the library for each unload/load session.

That covers the scenario I described at the beginning.  I know some will ask about keeping tapes in the library.  That’s also something you can calculate: take the total amount of data you want to keep (Number of Fulls * Size of Full + Number of Incrementals * Size of Incremental) and divide by the size of the tape.  That gives you a minimum slot count.  For this scenario I’d add about 25% to the number of slots for growth and “partial tapes”.  The good news is that if you’re keeping your tapes in the library, then you don’t have to worry about I/E ports.
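As a sketch of that retention math (the four fulls and 24 incrementals below are just an example retention policy, not part of the scenario above):

    from math import ceil

    def retention_slots(fulls: int, full_tb: float, incs: int, inc_tb: float,
                        tape_tb: float, compression: float = 1.5,
                        headroom: float = 1.25) -> int:
        """Minimum in-library slot count, padded ~25% for growth and partial tapes."""
        data_tb = fulls * full_tb + incs * inc_tb
        return ceil(data_tb / compression / tape_tb * headroom)

    # e.g. keeping 4 weekly fulls and 24 incrementals on LTO6 media:
    print(retention_slots(4, 30.0, 24, 1.5, 2.5))   # 52 slots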

With that I know pretty well how big my library must be.  Now I can go shopping and find a device I like.

VMworld Wrapup

It’s been an exciting week in San Francisco hearing about the latest and greatest from VMware and partners.  I’m going to try to capture some of the highlights from the show and my thoughts about what we’re going to be seeing coming down the pipe.

Software Defined Data Center (SDDC) – SDDC was the overriding topic and concept in San Francisco this week.  The idea is that all things in your datacenter – storage, networking, compute – should be abstracted from the “things” that they are and be managed as logical entities to enable flexibility, consolidation, and agility.  This is a huge concept, but one VMware sees as the next leap for enterprise and service provider datacenters.  With vSphere and vCloud Director VMware has a good start on the compute side of SDDC, and at this show they introduced their vision for software defined networking (SDN) and software defined storage (SDS).

Software Defined Networking (SDN) – SDN isn’t really a new concept, there has been discussion of things like VXLAN for a while now.  VMware introduced their vision for SDN at the show in the form of the NSX platform, along with a laundry list of networking partners who are supporting the platform.  Citrix even announced support of NSX with their NetScaler Controller for NSX.

Software Defined Storage (SDS) – SDS had a handful of interesting announcements this week, but not (yet) any shipping product.  Where VMware is going with this is the idea that storage is configured into the environment and self-describes its capabilities (performance, replication, snapshots, etc.).  When a virtual machine is created, administrators will tag it with information describing its requirements, and vSphere will then use an engine to select appropriate storage options for placement of the VM.  As the storage is reconfigured vSphere will detect changes; if a VM’s needs change vSphere will detect that as well, and in both cases the environment will react accordingly to make sure that the needs of each VM are met.

SDS will see its first real products in the form of vSphere vFlash (an SSD-based read cache) and VMware Virtual SAN (vSAN).  With each of these products you’ll see the per-VM policies being applied.  vFlash uses SSD drives installed in the individual hosts to provide high-performance read caching of VM data, and will be available as an Enterprise Plus edition feature in vSphere 5.5.  vSAN is a hybrid distributed storage technology leveraging local SSD and SAS/SATA storage in ESXi hosts to provide high-performance, low-cost storage for virtual machines.  vSAN is expected to be available sometime in the first half of 2014.  Additional time was given to the concept of Virtual Volumes (vVols), wherein storage arrays will integrate directly with vSphere without the intermediate layer of LUNs and filesystems.  Virtual disks will be provisioned directly to array storage based on the requirements of the VM.  Like NSX, vVols were introduced with a lengthy list of partners who are actively working with VMware to define and bring this technology to market.

The final dimension of the SDDC will be management, delivered in the form of the vCenter suite of products (vCOPS, vCD, vCAC, vCOM).  With these tools to monitor the virtualized environment and ensure that resources are used efficiently, enterprises will be able to maximize the value of their infrastructure investments.

Today much of this discussion is vision rather than product.  Announcements of actual shipping code were pretty incremental, but the changes coming in the future will be dramatic.  It’s going to be important to consider how investments made today will support the SDDC of the future.

Stay tuned, lots of big things coming from VMware!

Citrix is all new in June

If you’ve been paying attention to Twitter lately, you’ve probably noticed that there have been a lot of new announcements and releases from Citrix over the past 7 days.  So many, in fact, that it can be difficult to keep straight exactly what is going on.  I’m going to try to clear up some of the murk and hopefully help you understand how these announcements will impact your plans for the near future.  I’ll try to detail each of the announcements and product updates and what’s new with them.

XenDesktop 7: This is Citrix’s flagship VDI product, which competes head to head with VMware’s Horizon View.  Hopefully most Citrix customers are also aware that most of the license editions for XenDesktop also include rights to Citrix XenApp (also known as Presentation Server or MetaFrame).  Despite the bundling, XenApp and XenDesktop have always been two distinct products with separate infrastructures and management frameworks.  XenDesktop 7 changes all that.  With the v7 release XenDesktop now fully encompasses all the functionality for application and desktop publishing from both server OSes (XenApp/RDS – aka Hosted Shared) as well as desktop OSes (XenDesktop/VDI – aka Hosted).  This means that from a single console you can configure desktops and apps published from Windows XP, 7, 8, Server 2008 R2 and Server 2012.  Yes, I said desktops and apps!  Actually, XenDesktop has had the ability to do “VM Hosted Apps” for a while, but it was infrequently used; that capability is now core functionality and delivers “seamless” published apps from both desktop and server environments.

Did I mention this is all in a single console?  Well, actually there are two consoles – the management/configuration interface, now named “Studio”, and a helpdesk and monitoring interface named “Director”.  XenDesktop admins will be familiar with both of these.  By the way, Director now has the ability to mine EdgeSight data to provide historical information about users, apps, sessions, and hosts.

With the merger there is now a fourth edition of XenDesktop – giving us Platinum, Enterprise, VDI, and Apps.  The Apps edition maps to the functionality previously provided by XenApp.

XenDesktop 7 also brings a host of new features and functionality including the H.264 supercodec, reverse seamless applications, and AppDNA integration.  Remote PC is now configured from within the Studio console.  One of the more interesting capabilities is that you can now use MCS to manage your published app server farms, which will greatly simplify single-image management for smaller environments.  Check out this blog for more details and a link to the Citrix TV session detailing the new features.

XenDesktop 7 also carries a wave of companion product updates:

  • StoreFront 1.2 -> StoreFront 2.0
  • Web Interface 5.4 -> StoreFront 2.0 (StoreFront is now required)
  • Provisioning Services 6.1 -> Provisioning Services 7.0
  • XenServer 6.1 -> XenServer 6.2
  • Receiver 3.4 -> 4.0  (and new receivers for iOS, Android, and OSX too)

It’s a pretty safe bet that if you use XenDesktop or XenApp you’ve got some new code in your future.

XenApp 6.5 Feature Pack 2: Much less hubbub about 6.5 FP2, but it is very noteworthy that in this same timeframe Citrix has chosen to issue an update to the existing XenApp product which offers many of the end-user benefits associated with XenDesktop 7.  This appears to be a recognition on Citrix’s part that customers probably will not migrate off of XenApp 6.5 in any great hurry, and this update removes much of the need.  XenApp 6.5 was originally released in August of 2011 and is widely deployed.  Details of the new features can be found here.

CloudGateway is now XenMobile Apps: So if you’re looking for an updated App Controller, you need to look in a new place.  This heralds future integration between the XenMobile MDM solution and Citrix’s Web/SaaS/Mobile application management.  We also saw a new release of XenMobile MDM 8.5 on June 28.

ShareFile Storage Center and Connectors are now Storage Controller 2.0: This brings the integration of the on-prem storage options for ShareFile all into one product, reducing the number of servers needed to connect to local storage zones, CIFS shares, and SharePoint.  It also provides read/write access to SharePoint sites!

XenServer 6.2: The latest release of Citrix’s XenServer hypervisor is more incremental and has not received much fanfare, with the largest announcement being that the product is now fully open source.  More details on the future strategy and new features can be found here.

NetScaler 10.1: It seems like this release has been kept fairly quiet, however the new HDX Insight reporting feature will offer great value to shops using NetScaler for its Access Gateway Enterprise Edition features.  Want to know how much data user sessions are moving?  Look no further!

VDI in a Box: Even VDI in a Box got an update, now at version 5.3.  ViaB gets updates to support better 3D graphics, newer hypervisors, the H.264 supercodec, Windows 8, and Personal vDisk.  More info can be found here.

So June has been a huge month for Citrix, with updates across nearly the entire product portfolio.  If you have or use Citrix products these changes will affect you.  If you need help or just want more information, reach out to your Lewan Account Executive.  We’re here to help.

Simpana Virtualize Me (CommVault P2V Conversion w/1-Touch)


I recently wrote an article on using 1-Touch to restore a physical machine into a VM, essentially performing a P2V conversion as part of a system recovery.  It’s since been called to my attention that there can be an easier way.

Simpana Service Pack 4 introduced a new feature combining functions of 1-Touch with a P2V tool.  The end result is that it can be very easy to restore a failed physical machine into a VMware VM.  A word of caution here: the functionality requires that 1-Touch be installed from the SP4 install CD set.  If your 1-Touch install is from an earlier set, you must uninstall and re-install using the SP4 media (no, you can’t just patch to SP4).

You must also be using boot .ISO files generated from the above 1-Touch installation, and copy them onto a datastore on your ESXi servers.  You must also have backed up the system you want to restore in its entirety, so that there is a complete operating system to restore.

With those caveats stated, let’s talk about using the new “Virtualize Me” feature.

Start Virtualize Me Wizard


Select the server to create a VM from, then right-click and select Virtualize Me.

Specify the Destination


Select your vCenter server.  Once it’s selected, you can browse for the datastore and boot .ISO file via your installed Virtual Server Agent instances.


Enter the information about where to create the new VM.  You may also want to investigate the settings under ‘Advanced’ to resize the VM to be something different than the original was.

If you have Virtual Server agents installed you should be able to browse for all the required information.

Select “Immediate” to run this job now, then click OK.


The job controller will show that the job has been submitted.


On your vCenter server you can see that the VM has been created, and is powered on.


Opening the console of the VM you can observe the progress of the restore job.


You can also watch the progress of the restore from the CommCell Console.


After the restore process completes, the VM will reboot. It’s going to do some setup (courtesy of the sysprep mini-setup).


When faced with the login error, go ahead and log in as the local administrator.  Don’t panic, though, when it flashes a sysprep dialog and logs you out … this is expected.


The mini-setup will continue on the next boot.


Login again when prompted.


After the login you should find yourself on a functioning, restored system. Note the AD Domain membership.

At this point you probably want to do some additional cleanup, install VMware Tools, etc. – but the restore/P2V process is complete.


CommVault 1-Touch Bare Metal Restore to VMware VM

I was working with a customer recently who wanted to configure 1-Touch for bare metal recovery of some older servers into their VMware environment.  After working through the process, I thought it would be best to take a few minutes and document the process we used, as well as a couple of tips that will make it easier for folks in the future.

In this example we will be restoring a backup from a physical machine into a VM using the 1-Touch boot CD in offline mode. The original machine was Windows Server 2008 R2, thus we will be using the 64-bit 1-Touch CD.

This section assumes that 1-Touch has already been installed and the 1-Touch CDs generated and made available to the VMware infrastructure as .ISO files.

Building the Recovery VM


Generally you build this like you would build any other VM, but there are a couple things we want to pay attention to. First is the network adapter.

While in general the VMXNET3 adapter is preferred for VMware VMs, in this instance we want to specify the E1000 adapter because drivers for this are embedded in the Windows distribution, and thus in the 1-Touch recovery disk. You can change the adapter on the VM later if you’re so inclined, but using the E1000 for the restore will make things easier.


The second item is the SCSI controller. Again, while the SAS adapter is generally preferable, our restore is going to be easier if we select the LSI Logic Parallel adapter (for reasons we’ll discuss in a few minutes). Note this can be made to work with the SAS adapter, but it’ll require more editing of files, and like the network adapter you can change it later after the OS is installed so this makes it easier.


Pay attention to the creation of disks.  You need the same number, and approximately the same size (larger, not smaller), as the original machine.  It is possible to do a dissimilar volume configuration, but you’ll spend more time fiddling, and it will still insist that all volumes be at least as big as the original.


Power on the VM and mount the appropriate 1-Touch .ISO.  In this case we’re using the 1Touch_X64.iso file.

1-Touch Recovery


Booting the VM from the 1-Touch CD will start the interactive process.  When prompted, pick your favorite language and then click OK.


Click Next at the welcome screen.  (Note: if you are asked about providing a configuration file, click ‘Cancel’ to go into interactive mode.)


Verify that your disks have all been detected, select Yes, and then click Next.


Fill out the CommServe and client information, then click the “Get Clients” button to get the list of clients from the CommServe.


Select the client you want to recover, then click next.


Review the Summary then click next.


Select the backup set, and point in time for the recovery; then provide a CommCell username and password for the restore. Then click next.


You’ll see the ‘please wait while processing’ message … at this point you may want to watch a CommCell Console session.


You’ll see that a restore job has been created. You’ll need to watch this job for the restore to complete.


Now that the 1-Touch details have been restored, we need to deal with disk mapping.  In this case we will leave it set to ‘similar disk mapping’ and disable the mini-setup (uncheck the box checked by default).  OK the warning about devices not matching (we’ll fix that later).  Then click Next.


Don’t exclude anything (unless you really know you want to). Click Next.


This is a review of the original disks. Click next.


Review or reconfigure the network bindings for this restore, then click OK.


Right click on the disk name and initialize if necessary. Then click Done.


Map the volumes to the disk by right-clicking on the source and mapping it to the destination disk.


Repeat the above for each volume.  Once all volumes are mapped to the destination, click OK.

The system will format the disks and start the restore.


You can also observe the restore progress again at the CommCell Console.


In general this is a good point to go get a cup of coffee, or lunch, or…whatever. This is the point where the entire backup set for the machine is going to be restored, so if the machine has any size to it this could take a while.


Time to map the drivers to the target system.  CommVault does not automatically discover/remap these for you, so you need to tell the 1-Touch CD which LAN and Mass Storage Device (MSD) drivers need to be used.

We know we need an Intel Pro/1000 LAN driver and an LSI Parallel SCSI (non-SAS) driver.  Click the browse buttons to the right, and look for the proper drivers under C:\Windows\System32\DriverStore\FileRepository.

(Note that clicking the “More” button will give a lot more detail to aid in finding the correct driver.)


For reference, this is the directory containing the proper driver for the Pro1000 adapter we specified for the VM.


Select the .inf file (the extension is suppressed), then click Open.


Now we need to do the same for the Mass Storage Device.  Looking at the detail behind the ‘More’ button will help us confirm that we need the LSI_SCSI device, and the PnP device IDs that are expected.  Make note of these IDs; we’ll need them again in a minute.  (It might be worth copying them to the clipboard in the VM now.)

Click the browse button and go find the LSI_SCSI driver.


This is the directory containing the LSI_SCSI driver. Browse into the directory.


If you try to use the driver as-is, you’ll get the following error, because the device IDs in the file don’t match closely enough for 1-Touch’s satisfaction.  To address this we need to edit the .inf file a little bit.


Right-click the .inf file and select Open With.


Accept the default of Notepad and click OK.


Scroll down to the section of the file which lists the device IDs.  Unfortunately the IDs being requested by 1-Touch are longer than those in the file, so to make this work we’ll add the extended IDs that 1-Touch is looking for.

Below each section, paste in the IDs you copied from the details window, and edit them to match the line above.


The modified file is shown.  Save the file, then select it and click OK.

As an aside, this is the reason we picked the LSI_SCSI driver for our restore rather than the LSI_SAS controller.  The SAS driver has the same issue, but there are many (many) more IDs to be updated when using that driver.  It’s easier to edit the simpler file, then go back later, add a SAS-based secondary disk to the VM, and let Windows auto-install the SAS driver.  Once that’s done you can change the adapter to SAS for the primary disk if you really want to be using the virtual SAS controller.


If the file was modified correctly, you can now click OK to continue.


The registry merge section here has to do with updating the drivers on the system. These changes are what we needed to map the drivers for.


Click OK to the “restore completed successfully” dialog.  The system will then reboot.  This would be a really good time to eject the 1-Touch CD.


On reboot you may see this message.  Don’t panic.  Remember that at the time the backup you’re restoring was made, the server was powered on and running.  This is OK; just start Windows normally.


Be patient and let the machine boot up.  This might take a bit, particularly if the original system had a lot of hardware management agents, which will probably be none too happy in their new home.  When the machine is ready, go ahead and log in.  It might be best to use a local credential (rather than domain).

Also don’t be surprised if you log in and are immediately logged off – drivers are being discovered and installed at this point, and the machine may want to reboot a time or two.


Before trying to fix the broken devices, this is a really good time to install VMware Tools.  After that you should be able to remove any broken devices from the restored system.

So: install Tools, then clean up any dead devices.  Then uninstall any old hardware management stuff that doesn’t belong in a VM (some may need to be disabled if it won’t uninstall).  This cleanup will vary from system to system.

That said, once the cleanup is done, you have recovered your physical system into a VM by way of the 1-Touch feature.