Commvault introduced a neat little option called “Backup Failed VMs only (Virtual Server)” which can be found in the advanced settings of the schedule policy backup task.
I reconfigured the schedule policy according to the following strategy:
(i) Backup task 1: daily incremental @ 7PM CET
(ii) Backup task 2: weekly synthetic full @ 10AM CET (DASH full only; no backup)
(iii) Backup task 3: daily incremental for failed virtual machines only @ 6AM CET
CommVault allows the installation of a CommVault Simpana Disaster Recovery CommServe to ensure operations can be resumed quickly after losing the active CommServe to a system outage or site loss.
In this particular setup, we are using a protection mechanism on three different levels:
- Microsoft SQL Server database mirroring has been configured to provide near-zero-loss replication, reducing the amount of configuration data lost when an issue occurs;
- A CommVault Simpana Disaster Recovery Storage Policy which dumps the database to a network location every 6 hours;
- A CommVault Simpana Disaster Recovery Storage Policy which copies the database dump to tape every 6 hours.
The interactive GUI-based installer requires the databases to be available in read/write mode and the services to be startable. The specific configuration on the CommServe DR allows neither a service restart (the database is unavailable) nor the database upgrade (the database is in mirroring mode). Hence, the installation will fail.
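When troubleshooting such an installation, it helps to first confirm the mirroring state of the databases on the DR CommServe. A minimal diagnostic sketch, assuming a SQL Server instance reachable under the placeholder name `COMMSERVE-DR` and the SQL Server command-line tools installed:

```shell
# Placeholder server name; -E uses Windows (trusted) authentication.
# Lists every mirrored database with its current mirroring state and role.
sqlcmd -S COMMSERVE-DR -E -Q "SELECT DB_NAME(database_id) AS db_name, mirroring_state_desc, mirroring_role_desc FROM sys.database_mirroring WHERE mirroring_guid IS NOT NULL"
```

A database reported as a mirror in the `SYNCHRONIZED` state is exactly the situation described above: readable by nobody, upgradable by nothing.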
There are some differences in setting up the CommVault Simpana NDMP iDataAgent in combination with NetApp Clustered Data OnTap (CDOT) compared to a 7-mode filer.
This post explains the configuration process to start backing up snapshots stored on a SnapVault NetApp running CDOT by using two-way NDMP.
In case you are using three-way NDMP to protect the data stored on the filer, some of the steps are not required. I will list those throughout the procedure.
If you need a better understanding of two-way NDMP vs. three-way NDMP, please refer to my blog post “Lingo Explained: 2-way NDMP vs. 3-way NDMP”.
Before you start, ensure the CommVault Simpana software is running on a supported version; my environment was running CommVault Simpana 10 Service Pack 12 when I performed this integration. More information about the CommVault configuration prerequisites and requirements can be found on CommVault BOL. The configuration process of NDMP on NetApp CDOT is outlined in a document called “Clustered Data ONTAP 8.3 NDMP Configuration Express Guide”.
- Try to use controller-based protocols (such as CIFS and NFS) as much as possible. The filer can see the contents of these folders, which results in increased flexibility during the restore process (compared to FC or iSCSI LUN emulation).
- Create one NDMP subclient per volume instead of one subclient containing all volumes. Splitting them up yields better performance: restores require fewer tapes to be mounted, which speeds up recovery (a limitation of the way NDMP works).
- NDMP backup data cannot be used in combination with Content Indexing (End User Search). If this is a business requirement, consider using a pseudo client.
- Data archiving and stubbing are not supported on NDMP subclients. If this is a business requirement, consider using a pseudo client.
- Before executing the steps outlined in this procedure, verify that all required DNS entries have been created.
- In case you are using two-way NDMP, make sure the tape devices are presented to all controllers within the cluster before executing the procedure outlined below.
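Based on the Express Guide referenced above, enabling NDMP on clustered Data ONTAP boils down to a handful of commands. The node and SVM names below are placeholders, and you should verify the exact syntax against the guide for your ONTAP release:

```
cluster1::> system services ndmp on -node cluster1-01     # node-scoped NDMP, needed for two-way NDMP with cluster-attached tape devices
cluster1::> vserver services ndmp on -vserver svm_backup  # SVM-scoped (vserver-aware) NDMP
cluster1::> vserver services ndmp show                    # verify the NDMP service state
```

For three-way NDMP, the tape devices live behind another NDMP host, so the node-scoped tape presentation step does not apply.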
NDMP, or the Network Data Management Protocol, can be used to protect data stored on NAS filers (such as NetApp, Hitachi HNAS, EMC Isilon, …). The protocol was originally developed by NetApp before it became an open standard maintained by the Storage Networking Industry Association (SNIA).
The initial goal of the NDMP protocol was to provide a standardized framework usable by any backup software, avoiding the need to deploy custom agents on the NAS filer. When no agents could be deployed (which was the case most of the time), the backup admin was forced to protect the data through a proxy server which mounted the volumes over CIFS or NFS.
In general, the NDMP protocol supports two data protection mechanisms: two-way and three-way NDMP.
In one of my earlier blog posts, I mentioned what you can do when you are confronted with a “corrupted” HP Data Protector StoreOnce software store. Today, we decided to delete the deduplication store to free up the disk space.
Before an “HP Data Protector StoreOnce Software Deduplication Store” can be deleted, the media retained within it need to be exported. Please note that deleting the store in Data Protector does not mean the store is deleted in StoreOnceSoftware.
The deletion requires a set of steps to be executed. The plan of approach we followed can be found below.
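In shorthand, the approach combined the Data Protector CLI with the StoreOnceSoftware utility. The medium label and store name below are placeholders, and the option names are quoted from memory of the DP 9.x command set, so double-check them against the Data Protector CLI reference before running anything:

```shell
# 1. Export every medium still retained in the store (repeat per medium; placeholder label)
omnimm -export "StoreOnce_Medium_001"

# 2. Inspect, then delete the store on the StoreOnceSoftware side (placeholder store name)
StoreOnceSoftware --list_stores
StoreOnceSoftware --delete_store --name=Store01
```

Only after the StoreOnceSoftware deletion completes is the disk space actually released back to the operating system.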
In one of the remote sites, the customer is running an HP Data Protector 9.02 environment in combination with a software-based HP Data Protector StoreOnce deduplication store on a Windows Server 2012 machine.
Unfortunately, during the initial installation the consultant did not implement any anti-virus exclusions, causing the software deduplication store to get corrupted last week. The files which were deleted were of the type: “
Magically we were still able to bring the store back online in read-only mode and restore data from it! Thumbs up!
Today we received the following error message in the Data Protector GUI when we performed a barcode scan on a physical tape library: “Bad catalog access for message #193 in set 65”. Yesterday I had found some Data Protector foreign tapes which we tried to format, so naturally we assumed these were the root cause of the error message. After exporting them out of the library, however, the error remained.
Eventually we found out this error was caused by a miscommunication between the Cell Manager and the client on which the HP Data Protector GUI is installed. These error messages can appear when there is a discrepancy between the installed Data Protector versions.
In our situation, we had the HP Data Protector Cell Manager running on version 9.04 and we were using a Data Protector Console of version 9.02. After patching the console, the error disappeared and we were able to successfully manage the cell again.
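A quick way to spot such a version mismatch is the omnicheck utility on the Cell Manager, which reports the installed patch level per host. The hostnames below are placeholders for our Cell Manager and console client:

```shell
# Compare the installed Data Protector patch levels of both hosts
omnicheck -patches -host cellmanager.example.com
omnicheck -patches -host console-client.example.com
```

Any difference in the reported patch bundles between the two hosts is a candidate cause for GUI errors like the one above.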
CommVault Simpana allows the installation of multiple instances on a single server. Each instance operates simultaneously with and independently of the others. Each instance belongs to exactly one CommCell!
Each instance has its own:
- Binaries (for example: “D:\Program Files\CommVault\Simpana\” and “D:\Program Files\CommVault\Simpana2”);
- Updates (the other instance is treated as another server, hence updates need to be installed separately);
- Set of services (each with its own GXadmin);
- Registry settings (“HKLM\Software\CommVault Systems\Galaxy\Instance00x”).
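On Windows, a quick way to see which instances exist on a server is to query the registry path mentioned above; each installed instance shows up as its own Instance00x subkey:

```
reg query "HKLM\Software\CommVault Systems\Galaxy"
```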
The important question is: why would you want to do this? Some use cases:
Data Aging is the process which deletes data once its retention has expired. Additionally, the data aging process applies both the basic and extended retention rules.
Last week – after 1 year of successful execution – the data aging jobs started to fail with the following error: “[32:329] Failed to get prunable data”.
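To reason about what data aging should have pruned, I sometimes compute the retention cutoff date by hand. A minimal sketch using GNU date (the 30-day retention is an example value, not my actual policy):

```shell
# Example basic retention in days (placeholder value)
RETENTION_DAYS=30

# Everything that finished before this date is, in principle, prunable
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y-%m-%d)
echo "Jobs older than ${CUTOFF} are candidates for pruning"
```

Note that Simpana's data aging also honors the number of retained cycles, so this date is only half of the story.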
A set of different web consoles has been introduced as of CommVault Simpana v10. Every console has its specific purpose. In general, the console based on Microsoft Internet Information Services (number 1 in the list below) is restricted to administrators for daily management of the CommCell.
CommVault provides the following web pages:
- CommVault Simpana Administration Console: http://commserve:81/console
- Regular webconsole: http://webserver/webconsole/
- Compliance search: http://webserver/compliancesearch/
- Search admin: http://webserver/searchadmin/
- Lucene admin: http://searchnode:27000
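To quickly verify which of these consoles respond, a simple curl loop does the trick. The hostnames are the generic ones from the list above; substitute your own:

```shell
# Prints the HTTP status code next to each console URL
for url in "http://commserve:81/console" "http://webserver/webconsole/"; do
  curl -s -o /dev/null -w "%{http_code} ${url}\n" "$url"
done
```

A 200 or a redirect to a login page means the console is up; anything else warrants a look at the corresponding service.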