Export SPCollect Logs from VNX Unisphere GUI

When working with EMC support, it may become necessary to export SPCollect logs from your array and upload them to the EMC support team. Below is an easy way to obtain the requested SPCollect information.

  • Log in to the GUI as an Administrator > Navigate to System
  • On the panel (left or right hand side), click “Generate Diagnostic Files – SPA” and “Generate Diagnostic Files – SPB”


  • You will immediately see a pop-up message with “Success”


  • Wait about 5 minutes
  • Click “Get Diagnostic Files – SPA”
  • You will see a file named:
  • [SystemSerialNumber]_SPA_[Date]_[randombits]_data.zip
  • The file should be around 15-20 MB
  • If your file is smaller, you haven’t waited long enough for the correct file to be generated
  • Highlight the file, click “Transfer”, and choose a destination in the Transfer Manager window


  • Repeat the steps for SP B

Due to file size, most email systems will not allow the .zip files to be sent. Log in to the EMC support site and attach the files to your specific case.

Kaspersky PURE 3.0 Stuck Updating

If Kaspersky PURE 3.0 is stuck updating (at around 22%) and the log shows the download is stuck on “stpass.exe.ap” (16 MB)

Kaspersky Update Error

You are more than likely on a slow internet connection and your download is timing out.

To fix this issue, change your Update Source from the http default to ftp://ftp.kaspersky.com

This change may also work for other Kaspersky clients, as a variety of client updates are listed at ftp://ftp.kaspersky.com

Media Agent Networking

I get a lot of questions about the best way to configure networking for backup media agents or media servers in order to get the best throughput. I thought a discussion of how the networking (and link aggregation) works would help shed some light.

Client to Media Agent:
In general we consider the media agents to be the ‘sink’ for data flows during backup from clients.  This data flow originates (typically) from many clients destined for a single media agent.   Environments with multiple media agents can be thought of as multiple single-agent configs.

The nature of this is that we have many flows from many sources destined for a single sink. If we want to utilize multiple network interfaces on the sink (media agent), it is important that the switch to which it is attached be able to distribute the data across those interfaces. By definition, then, we must be in a switch-assisted link aggregation scenario, meaning the switch must be configured to use LACP or a similar protocol, and the server must be configured to use the same teaming method.

Why can’t we use adaptive load balancing (ALB) or other non-switch-assisted methods?  The issue is that the decision of which member of a link aggregation group a packet is transmitted over is made by the device transmitting the packet.  In the scenario above, the bulk of the data is being transmitted from the switch to the media agent, so the switch must be configured to spread the traffic across multiple physical ports.  ALB and other non-switch-assisted aggregation methods do not allow the switch to do this, and the switch will therefore use only one member of the aggregation group to send data.  The net result is that total throughput is restricted to that of a single link.

So, if you want to bond multiple 1GbE interfaces to support traffic from your clients to the media agent, the use of LACP or similar switch-assisted link aggregation is critical.

Media Agent to IP Storage:
Now, from the media agent to storage, most traffic originates at the media agent and is destined for the storage.  There is not much in the way of many-to-one or one-to-many relationships here; it’s all one-to-one.  The first question is always “will LACP or ALB help?”  The answer is probably no.  Why is that?

First, understand that the media agent is typically connected to a switch, and the storage is typically attached to the same or another switch.  Therefore we have two hops to address: MA to switch, and switch to storage.

ALB does a very nice job of spreading transmitted packets from the MA to the switch across multiple physical ports.  Unfortunately, all of those packets are destined for the same IP and MAC address (the storage).  So while the packets are received by the switch on multiple physical ports, they are all going to the same destination and thus leave the switch on the same port.   If the MA is attached via 1GbE and the storage via 10GbE, this may be fine.  If it’s 1GbE down to the storage, then the bandwidth will be limited to that.

But didn’t I just say in the client section that LACP (switch-assisted aggregation) would address this?  Yes and no.  LACP can spread traffic across multiple links even if it has the same destination, but only if it comes from multiple sources.  The reason is that LACP uses an IP- or MAC-based hash algorithm to decide which member of an aggregation group a packet should be transmitted on.  That means all packets originating from MAC address X and going to MAC address Y will always go down the same group member.  The same is true for source IP X and destination IP Y.   So while LACP may help balance traffic from multiple hosts going to the same storage, it can’t solve the problem of a single host going to a single storage target.
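To make the hashing behavior concrete, here is a small Python sketch. It is illustrative only – real switches use vendor-specific hash functions, and CRC32 here simply stands in for whatever hash a given switch applies to the address pair:

```python
import zlib

def select_link(src: str, dst: str, num_links: int) -> int:
    """Pick an aggregation-group member by hashing the source/destination pair.

    Illustrative only -- real switches use vendor-specific hash functions;
    CRC32 stands in for the hash here.
    """
    return zlib.crc32(f"{src}->{dst}".encode()) % num_links

# One media agent talking to one storage target: every packet hashes to the
# same member, so a single flow is pinned to a single link.
single_flow = {select_link("ma-mac", "storage-mac", 4) for _ in range(1000)}
print(len(single_flow))  # 1 -- one flow, one link, one link's worth of bandwidth

# Many clients sending to one media agent: the flows spread across the members.
many_flows = {select_link(f"client-{i}-mac", "ma-mac", 4) for i in range(50)}
print(len(many_flows))   # more than one member in use
```

This is exactly why LACP helps on the client-facing side (many source addresses) but not between a single media agent and a single storage target (one fixed address pair).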

By the way, this is a big part of the reason we don’t see many iSCSI storage vendors using a single IP for their arrays.  By giving the arrays multiple IP’s it becomes possible to spread the network traffic across multiple physical switch ports and network ports on the array.  Combine that with using multiple IP’s on the media agent host and multi-path IO (MPIO) software and now the host can talk to the array across all combinations of source and destination IPs (and thus physical ports) and fully utilize all the available bandwidth.

MPIO works great for iSCSI block storage.  What about CIFS (or NFS) based storage?   Unfortunately, MPIO sits down low in the storage stack and isn’t part of the network filing (redirector) stack used by CIFS and NFS, which means MPIO can’t help.    Worse, with the NFS and CIFS protocols the target storage is always defined by a single IP address or DNS name, so having multiple IP’s on the array in and of itself doesn’t help either.

So what can we do for CIFS (or NFS)?  Well, if you create multiple share points (shares) on the storage and bind each to a separate IP address, you can create a situation where each share has isolated bandwidth.  By accessing the shares in parallel you can aggregate that bandwidth (between the switch and the storage).  To aggregate between the host and the switch, you must force traffic to originate from specific IP’s or use LACP to spread the traffic across multiple host interfaces.  You could simulate MPIO-type behavior by using routing tables to map a host IP to an array IP one-to-one.    It can be done, but there is no ‘easy’ button.
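The routing-table workaround above can be sketched as follows. This assumes a Linux media agent, and every interface name and address here is hypothetical – substitute your own, and review the printed commands before running any of them:

```python
# Sketch of the routing-table workaround: pin each array IP behind a specific
# host interface/source IP, one-to-one, so each share's traffic takes its own
# physical path. All device names and addresses are hypothetical examples.
paths = [
    ("eth0", "10.0.0.11", "10.0.0.51"),  # share1 bound to array IP 10.0.0.51
    ("eth1", "10.0.0.12", "10.0.0.52"),  # share2 bound to array IP 10.0.0.52
]

# Generate one host route per path; each forces traffic for one array IP out
# a specific NIC with a specific source address (Linux iproute2 syntax).
commands = [
    f"ip route add {array_ip}/32 dev {dev} src {host_ip}"
    for dev, host_ip, array_ip in paths
]
for cmd in commands:
    print(cmd)
```

Accessing the two shares in parallel then drives traffic across both physical paths, which is the closest CIFS/NFS can get to MPIO-style behavior.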

So, as we wrap this up, what do I recommend for media agent networking and IP storage?
On the front end – aggregate interfaces with LACP.
On the back end – use iSCSI and MPIO rather than CIFS/NFS, or use 10GbE if you want/need CIFS/NFS.

Windows Server 2012 Licensing – a quick reminder

This came up recently for a customer and while it’s not new news, I thought a quick reminder would be useful. There are a few key points to remember about licensing of Windows Server 2012 in server virtualization projects, these rules apply to XenServer, VMware, Hyper-V, Oracle VM, etc.:

  • Licenses are applied to physical servers, never to virtual machines. If you are thinking about how you need a license for the VM you are about to build, you’re probably doing something wrong
  • There is feature parity between Standard and Datacenter editions; Enterprise Edition has been dropped
    • The only difference between these 2 major editions is in the number of virtual OSE’s (operating system environments, aka a virtual machine) granted with the license
  • A license covers 2 processor sockets within 1 server; 1 license cannot be purchased to cover 2 servers each containing 1 populated processor
  • The license allows for one bare-metal install of the operating system, but doesn’t require it – as would be the case if your hypervisor is anything other than Hyper-V
    • Virtual OSE grants by edition:
      • Standard: 2 virtual OSE’s per license
      • Datacenter: unlimited OSE’s per license
  • More than 1 license of the same edition may be applied to a given physical server to cover additional CPU sockets or additional virtual machines
    • 2 Standard Edition licenses would cover 4 processor sockets and/or up to 4 VM’s
    • 2 Datacenter Edition licenses would cover 4 processor sockets and 2 × unlimited for the number of VM’s – that’s like beyond infinity – but 4 CPU sockets
  • The license cannot be transferred more than once every 90 days – yeah, you read that right. This rule is to prevent a license from jumping from one host to another to follow live migration activities
    • This is where most people pause and say “oh..”. That tells me they were purchasing 1 license per VM and just thinking the license moves around with the VM
    • You need to cover the high water mark of virtual OSE’s for a given host
  • Licensing math:
    • Standard Ed. list pricing is $882
    • Datacenter Ed. list pricing is $4809
    • The break-even point for Datacenter is at 5.45 Standard licenses; in effect, at a density of more than 10 VM’s per host (5 Standard licenses each granting 2 OSE’s), you should use a Datacenter Edition license
  • A real world example: New virtualization customer deploying 3 VMware hosts
    • We generally size the environment for N+1, meaning we plan for 1 of the servers to be a “spare” from the perspective of workload sizing – so all the workload can run on just 2 servers. We plan for this, and so should you in your licensing.
    • If you plan to run more than 20 total VM’s in this environment, you need 3 Datacenter Edition licenses
      • 20 VM’s running on 2 servers = 10 VM’s/server
      • 10 VM’s requires 5 Standard Edition licenses to have enough OSE grants
      • More than 10 per server, and it’s now cheaper to have just bought a single Datacenter Edition license
        • 6 * $882 = $5292, which is greater than the $4809 for Datacenter
      • Since you don’t know which host (think of a rolling patching cycle) is going to carry the increased load, all the hosts in the environment should be licensed uniformly to this high water mark
    • Depending on the licensing model, an upgrade from 5 * Standard Edition licenses to a single Datacenter Edition license may not be possible – plan ahead!
    • If you have OEM licenses that came with your old physical server environment, these are likely not transferable – they don’t follow the P2V action
  • With this understanding, while you might have some work to do upfront (or some scrambling to get back into compliance now), the long-term savings are very real for dense virtualization projects that can leverage the Datacenter Edition license. On a modern 2-socket server with 16 cores/32 threads, a density of 10 VM’s or greater is easily achievable
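The licensing math above can be condensed into a quick per-host calculation. The prices are the list prices quoted in this post – verify current pricing with Microsoft before relying on them:

```python
import math

# List prices as quoted in this post; verify before relying on them.
STANDARD_PRICE = 882     # covers 2 sockets, grants 2 virtual OSE's
DATACENTER_PRICE = 4809  # covers 2 sockets, unlimited virtual OSE's

def cheapest_for_host(vms: int, sockets: int = 2) -> str:
    """Return the cheaper edition for one host, licensed to its high water mark."""
    # Standard: need enough licenses to cover both the sockets and the OSE grants.
    std_licenses = max(math.ceil(sockets / 2), math.ceil(vms / 2))
    # Datacenter: sockets only, since OSE's are unlimited.
    dc_licenses = math.ceil(sockets / 2)
    std_cost = std_licenses * STANDARD_PRICE
    dc_cost = dc_licenses * DATACENTER_PRICE
    return "Standard" if std_cost < dc_cost else "Datacenter"

print(cheapest_for_host(vms=10))  # 5 * $882 = $4410 < $4809 -> Standard
print(cheapest_for_host(vms=12))  # 6 * $882 = $5292 > $4809 -> Datacenter
```

Remember to run this against the high-water-mark VM count per host (the N+1 failover case), not the steady-state count.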


Asigra Linux Restore with ‘sudo’

Conduct an Asigra restore to a UNIX or Linux server using sudo credentials

Verify that the user is listed in the /etc/sudoers file on the restore target system


The sudo utility allows users root-level access to some (or all) subsystems without requiring them to know the root password. Please look at the documentation for the sudo utility for more information.

From the Asigra restore dialog, choose the files to be restored


Select Alternate location and click on ‘>>’


Enter the server name or IP address for the restore target and check both “Ask for credentials” and “‘sudo’ as alternate user”


Enter username and password for user configured in /etc/sudoers file


Enter “root” and the same password as in the previous step


Do NOT enter the ‘root’ password. The sudo utility uses the regular user’s password.

Select restore location and truncate path, if required


Accept defaults


Restore in progress…


Verify restore completed


Beware of the Invisible Installs!!!

Does your internet toolbar look like this?:


Advertising companies will often seek out software companies many people are familiar with, and promote their products through those companies’ installers.

Ever end up with a program you don’t use and are not sure how it was installed? Each time you install a program you need, pay attention to the terms and agreements. Some software downloads include extras you may not even be aware of, need, or want. Some of these options are even hidden throughout the install. For example, when installing Adobe Reader you will see this:


If you would like to use Google Chrome, you can leave the option checked. If not, make sure to uncheck it. Some installs can even change your settings:


So next time you need to install a program, please pay attention to each and every step so that you ensure you get just what you need.

BIOS Settings for Hyper-V Role in Windows 8 on Lenovo W-Series


Recently I upgraded to Windows 8 on my Lenovo W510 in order to set up a virtual lab in Hyper-V. Hoping to save others the frustration I experienced during BIOS configuration, I thought I’d share the Intel hardware virtualization settings necessary for the role. The order in which settings are made, and complete power-downs after certain changes, are significant. Don’t save time with warm boots!

Step 1. Boot the machine, press F1 to enter setup, and you’ll be presented with this menu.  Make sure that the BIOS is the most recent version (1.45 as of this post).  Press Enter on Config.

BIOS top level menu


Step 2. In Config menu, arrow down to CPU and press enter.

Config Menu on Lenovo W510 BIOS


Step 3. In the CPU menu, make sure the settings are:
• Intel Hyper-Threading = Enabled
• Intel Virtualization Technology = Enabled
• Intel VT-d Feature = Enabled

Core Multi-Processing Enabled, Intel Hyper-Threading Technology Enabled, Intel Virtualization Technology Enabled, Intel VT-d Feature Enabled

Hardware Virtualization BIOS Settings on Lenovo W510

If any settings in Step 3 had to be changed, hit F10 to save the settings and then power the machine off. Re-enter the BIOS by pressing F1 on the next startup.

Step 4. Return to the Main Menu in Step 1, and select Security. This menu will appear.
Arrow down to Memory Protection and press enter.

Security Menu on Lenovo W510


Step 5. In Memory Protection, make sure Execution Prevention is set to Enabled.
Press ESC to return to the Security menu from Step 4.

Execution Prevention Enabled

Memory Protection BIOS Settings on Lenovo W510

Step 6. Confirm the following settings:
• Security Chip = Active
• Intel TXT Feature = Disabled

Security Chip Active, Intel TXT Feature ***Disabled***

Security Chip BIOS Settings on Lenovo W510

Press F10 to save settings, and power down the machine. After restart, the Hyper-V role can be installed.