Commvault VSA job fails with “The option to backup only failed virtual machines was selected, but no failed virtual machines were discovered in the previous job”

Commvault introduced a neat little option called “Backup Failed VMs only (Virtual Server)”, which can be found in the advanced settings of the schedule policy’s backup task.

I reconfigured the schedule policy according to the following strategy:
(i) Backup task 1: daily incremental @ 7PM CET
(ii) Backup task 2: weekly synthetic full @ 10AM CET (DASH full only; no backup)
(iii) Backup task 3: daily incremental for failed virtual machines only @ 6AM CET

Continue reading

Finding your HP Proliant serial number on a Windows machine

I used to collect Proliant hardware serial numbers through the Integrated Lights-Out (iLO) interface. Today I found a little VBScript on the HP website which makes the task a bit easier and faster!

VBScript:

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colBIOS = objWMIService.ExecQuery _
    ("Select * from Win32_BIOS")
For Each objBIOS In colBIOS
    Wscript.Echo "Manufacturer: " & objBIOS.Manufacturer
    Wscript.Echo "Serial Number: " & objBIOS.SerialNumber
Next
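For a quick one-off check on a single host, the same Win32_BIOS class can also be queried from a command prompt with the built-in WMIC utility (note that WMIC is deprecated and has been removed from the newest Windows builds):

```bat
:: Query manufacturer and serial number from the BIOS via WMI
wmic bios get manufacturer,serialnumber
```

On systems where WMIC is no longer available, PowerShell's `Get-CimInstance Win32_BIOS` returns the same information.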

Thank you HP!

Lingo Explained: CPU-bound vs. Disk-bound storage arrays

Sometimes adding spindles is not enough to resolve a storage performance problem. Today, with high-performing SSD disks, it’s getting more and more important to build a good understanding of the architecture of a storage array. Eventually this will help you determine the caveats and ask the right questions to get the maximum performance out of your IT infrastructure.

Some time ago, I was confronted with a storage performance problem. The disk array installed in an appliance was able to move 600 GB/hour at a 60% write / 40% read ratio. When I contacted the vendor, they advised me to install more spindles, as this would distribute the load and gain performance. Upon validation of the system (storage array controller) specifications, it became clear this would not resolve the situation: the storage controller was already running at maximum performance! This is a perfect example of a CPU-bound system.

The goal of this blogpost is to define the difference between CPU-bound and Disk-bound. Continue reading

Configuring SNMP traps on Brocade SAN switches

SAN switches are the core components in your Storage Area Network. Therefore it’s important to monitor the devices correctly to ensure the operational continuity of your storage infrastructure.

The first approach is to implement Brocade Network Advisor (BNA). BNA is a tool used to manage, monitor and report on fibre channel SAN switches (performance and throughput visualization, and much more) and comes at a cost (a platform hosting the software plus software licenses/support). Brocade Network Advisor can be integrated with Microsoft System Center Operations Manager through a Management Pack (plugin).

A different approach is to monitor the devices with plain SNMP traps. This blog post will guide you through the configuration process of SNMP on Brocade SAN switches. We will be using Microsoft System Center Orchestrator to collect the information in the SNMP traps and push it into an incident management tool. The use of Microsoft System Center is not mandatory; Nagios could do the trick as well. Continue reading
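As a preview, the switch-side work revolves around the snmpConfig command. A rough sketch of the session (the interactive prompts and option lists differ per FOS version, and the trap receiver IP/community values are placeholders for your own):

```text
SANSWITCH:admin> snmpconfig --show snmpv1
SANSWITCH:admin> snmpconfig --set snmpv1
SANSWITCH:admin> snmpconfig --set mibCapability
```

The first command reviews the current community strings and trap recipients, the second sets the community and trap recipient IP interactively, and the third selects which trap groups the switch will forward.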

How to upgrade the CommVault software on a CommServe DR

CommVault allows the installation of a CommVault Simpana Disaster Recovery CommServe to ensure operations can easily be resumed after losing the active CommServe to a system outage or site loss.

In this particular setup, we are using a protection mechanism on three different levels:

  • Microsoft SQL database mirroring has been configured to provide near-real-time replication, reducing configuration data loss when an issue occurs;
  • a CommVault Simpana Disaster Recovery Storage Policy which dumps the database to a network location every 6 hours;
  • a CommVault Simpana Disaster Recovery Storage Policy which copies the database dump to tape every 6 hours.

The interactive GUI-based installer requires the databases to be available in read/write mode and the services to be startable. The specific configuration on the CommServe DR allows neither a service restart (the DB is unavailable) nor the DB upgrade (the DB is in mirroring mode). Hence, the installation will be unsuccessful. Continue reading

A guide to updating Brocade SAN switch firmware

I would like to share some of my personal best practices for upgrading firmware on Brocade SAN switches and/or directors. This blog post is divided into several sections:

  • Pre-upgrade tasks;
  • Upgrading the SAN switch;
  • Post-upgrade tasks;
  • Alternative FTP server;
  • PuTTY log of a SAN switch upgrade.

First of all, it’s important to mention that every Brocade SAN switch has two firmware partitions. The Fabric OS (also referred to as “FOS”) is booted from the active partition, whereas the secondary partition provides the ability to perform a non-disruptive firmware upgrade, or serves as a fallback mechanism in case the firmware on the primary partition is damaged.

SANSWITCH:admin> firmwareshow
Appl     Primary/Secondary Versions
------------------------------------
FOS      v7.0.0c
         v7.0.0c

The firmware on the SAN switch can be upgraded disruptively or non-disruptively; the latter takes some more time. When you are upgrading SAN switch components in a live production environment, it’s highly advisable to use the non-disruptive approach.
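In CLI terms, a non-disruptive upgrade boils down to a handful of commands. A sketch of the sequence (the server address, path and credentials requested by the interactive prompts are your own, and options differ slightly per FOS version):

```text
SANSWITCH:admin> configupload              (save the switch configuration first)
SANSWITCH:admin> firmwareshow              (note the version on both partitions)
SANSWITCH:admin> firmwaredownload          (interactive: FTP/SCP server, path, credentials)
SANSWITCH:admin> firmwaredownloadstatus    (follow the staged upgrade and HA reboot)
SANSWITCH:admin> firmwareshow              (verify both partitions run the new version)
```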

The firmware upgrade path can be found in the “Brocade Fabric OS vA.B.CD Release Notes”. In general, a non-disruptive upgrade is supported from the previous minor release (identified as “B” in the version information). For example: v7.1.2b > v7.2.1c > v7.3.1d > v7.4.1. Continue reading

A step-by-step guide to configure NetApp CDOT NDMP in combination with CommVault Simpana 10

There are some differences in setting up the CommVault Simpana NDMP iDataAgent in combination with NetApp Clustered Data ONTAP (CDOT) compared to a 7-mode filer.

This post explains the configuration process to start backing up snapshots stored on a SnapVault NetApp running CDOT by using two-way NDMP.

In case you are using three-way NDMP to protect the data stored on the filer, some of the steps are not required. I will list those throughout the procedure.

If you need a better understanding of two-way vs. three-way NDMP, please refer to my blog post “Lingo Explained: 2-way NDMP vs. 3-way NDMP”.

Before you start, ensure the CommVault Simpana software is running on a supported version. My environment was running CommVault Simpana 10 Service Pack 12 when I performed this integration. More information about the CommVault configuration prerequisites and requirements can be found on CommVault BOL. The configuration process of NDMP on NetApp CDOT is outlined in a document called “Clustered Data ONTAP 8.3 NDMP Configuration Express Guide”.

A few general recommendations:

  1. Use controller-based protocols (such as CIFS and NFS) as much as possible. The filer can see the contents of these folders, which gives you more flexibility in the restore process (compared to FC or iSCSI LUN emulation).
  2. Create one NDMP subclient per volume instead of using one subclient containing all volumes. Splitting improves restore performance, as restores require fewer tapes to be mounted (a limitation of the way NDMP works).
  3. NDMP backup data cannot be used in combination with Content Indexing (End User Search). If this is a business requirement, consider using a pseudo client.
  4. Data archiving and stubbing are not supported on NDMP subclients. If this is a business requirement, consider using a pseudo client.


Prerequisites:

  1. Before executing the steps outlined in this procedure, verify that all required DNS entries have been created.
  2. If you are using 2-way NDMP, make sure the tape devices are presented to all controllers within the cluster before executing the procedure outlined below.
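On the ONTAP side, SVM-scoped NDMP is switched on per vserver. A minimal sketch, assuming an SVM called svm_backup and a backup user called backupadmin (both names are placeholders; the Express Guide referenced above is the authoritative source):

```text
cluster1::> vserver services ndmp on -vserver svm_backup
cluster1::> vserver services ndmp generate-password -vserver svm_backup -user backupadmin
cluster1::> vserver services ndmp show
```

The generated password is what you later enter as the NDMP credential when configuring the CommVault NDMP client.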

Continue reading

Lingo Explained: 2-way NDMP vs. 3-way NDMP

NDMP, or the Network Data Management Protocol, can be used to protect data stored on NAS filers (such as NetApp, Hitachi HNAS, EMC Isilon, …). The protocol was developed by NetApp before it became an open standard maintained by the Storage Networking Industry Association (SNIA).

The initial goal of the NDMP protocol was to provide a standardized framework that can be used by any backup software, avoiding the need to deploy custom agents on the NAS filer. When no agent could be deployed (which was usually the case), the backup administrator had to protect the data through a proxy server that mounted the volumes over CIFS or NFS.

In general, the NDMP protocol supports two data protection mechanisms: two-way and three-way NDMP. Continue reading

Lingo Explained: Active Archive vs Deep Archive

Archives are created as a part of Information Lifecycle Management. Different use case scenarios exist, such as compliance archiving and storage capacity optimization. In general we speak about two different archive types:

Active Archives contain data that is too valuable to delete and is still accessed occasionally by users. The data is retained in read-only mode, as it needs to be read from time to time but never modified. Active archives are usually protected by a replication mechanism rather than a general backup mechanism, since users still consult the data; this methodology allows a quicker recovery when the data location is lost.

Deep Archives store data that is probably never accessed anymore. The data is retained for compliance reasons or at a specific request from the business. Deep archives are usually protected by a backup mechanism, as the access frequency is low and the company can justify the time needed to recover the archive and/or application.

Continue reading

Wireshark: “No interfaces found” on Microsoft Windows

When I start Wireshark, I am sometimes unable to select the network interface to be used to analyze network traffic. I was able to resolve this by restarting a service called “NetGroup Packet Filter Driver” (npf). Please note that this service cannot be found in “Computer Management > Services”.

The procedure below can be followed to resolve this:

  1. Open a Command Prompt with administrative privileges.
  2. Execute the command “sc query npf” and verify whether the driver is running.
  3. Execute the command “sc stop npf” followed by the command “sc start npf”.
  4. Open Wireshark and press “F5” to refresh the interface list.
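For reference, a healthy driver reports STATE : 4 RUNNING. A trimmed sketch of what the query step should return:

```text
C:\> sc query npf

SERVICE_NAME: npf
        TYPE               : 1  KERNEL_DRIVER
        STATE              : 4  RUNNING
        WIN32_EXIT_CODE    : 0  (0x0)
```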

Hope this helps!
Continue reading