In most cases I see people reusing the backup scripts they once created for another customer. Not a lot of people are aware that there is a PowerShell script available in the Enterprise Vault installation directory that will do the task for you.
In the Templates directory under the Enterprise Vault installation directory (in my case E:\Program Files (x86)\Enterprise Vault\Reports\Templates\) you can find a script called Transform-Backup.ps1.
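For context: the heavy lifting of a typical hand-rolled EV backup script is putting vault stores into backup mode before the backup and taking them out again afterwards. A minimal sketch of that, using the standard Enterprise Vault PowerShell snap-in cmdlets (the vault store name 'VS-Mail01' and server name 'EVSERVER' are placeholders, and this assumes it runs on the EV server itself):

```powershell
# Load the Enterprise Vault snap-in (assumption: run on the EV server)
Add-PSSnapin Symantec.EnterpriseVault.PowerShell.Snapin

# Put a vault store into backup mode before the backup starts...
Set-VaultStoreBackupMode -Name 'VS-Mail01' -EVServerName 'EVSERVER' -EVObjectType VaultStore

# ...and clear it again once the backup has finished
Clear-VaultStoreBackupMode -Name 'VS-Mail01' -EVServerName 'EVSERVER' -EVObjectType VaultStore
```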
When backing up a Windows client, the error “Fallback to legacy filesystem backup was not allowed” sometimes occurs. In most cases this is related to the CONFIGURATION object, but recently I noticed it also on the D-drive of a (SQL) server.
[Critical] From: VBDA@client “fqdnoftheclient [/D]” Time: 3/25/2014 8:16:09 AM Fallback to legacy filesystem backup was not allowed. Aborting the backup.
[Critical] From: VBDA@client “fqdnoftheclient [/D]” Time: 3/25/2014 8:16:09 AM
[81:52] /D Not a valid mount point => aborting.
The software first tries to start the backup using the VSS integration, and when this fails it falls back to the legacy method. If fallback is not allowed in the backup job properties, the backup job fails with the error mentioned above.
Out of the blue, two of our Oracle databases started failing with error code “62:342” and the description “Encountered an I/O error while performing the operation”. The odd thing is that we have more than 15 databases on the server and only two of them had the issue.
It is important to note that the server itself is a MediaAgent, which means it has access to a shared tape device (in our case a VTL). The VTL is shared with approximately 90 MediaAgents, and the storage policy used for database backups is associated with more than 250 databases. And… only 2 were failing…
There are two ways to back up and restore the Data Protector internal database (IDB). The procedures described below can also be used for performing migrations. The internal database is located in the DB40 folder within the installation directory.
The first way is relatively easy: create a backup job in the Data Protector Manager console and select the Internal Database in the selection tree. The downside of this method is the lack of control over where the database is stored (on a tape, but which one? on disk, in a bunch of files!) and over how fast it can be restored in case of disaster. If you did not implement any fallback mechanism, a catalog operation will be required! Needless to say, that will take some time to complete.
Dumping the data to disk can come in very handy as a safety copy during maintenance operations such as upgrades, patching, etc.
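The disk dump can be sketched with the omnidbutil tool on the Cell Manager. This is a minimal example, assuming a DB40-based (6.x era) IDB; the C:\IDB_dump\* target directories are placeholders, and no sessions should be running while the export runs:

```powershell
# Export the IDB (MMDB and CDB parts) to plain ASCII files on disk.
# Assumption: run on the Cell Manager with no active sessions.
omnidbutil -writedb -mmdb C:\IDB_dump\mmdb -cdb C:\IDB_dump\cdb

# To import the dump again, e.g. on a rebuilt or migrated Cell Manager:
omnidbutil -readdb -mmdb C:\IDB_dump\mmdb -cdb C:\IDB_dump\cdb
```

Because the result is a set of plain files, they can simply be copied aside before an upgrade and read back in if something goes wrong.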
For a project, we needed to implement a VEEAM Backup & Replication setup. During the project we encountered some virtual machines with physical RDMs. VEEAM cannot handle these, so we needed to write a script listing all virtual machines with their most interesting parameters, such as:
Virtual Machine Name
The function Get-VMFolderPath can be found on this blog.
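A minimal PowerCLI sketch of the idea: list every VM that has at least one physical-mode RDM, together with its name, folder path and RDM devices. This assumes an existing Connect-VIServer session and that the Get-VMFolderPath helper mentioned above accepts a VM object:

```powershell
# Assumption: VMware PowerCLI is loaded and a vCenter connection exists,
# e.g. Connect-VIServer vcenter.example.com
Get-VM | ForEach-Object {
    # Physical-mode RDMs are the ones VEEAM B&R cannot back up
    $rdms = Get-HardDisk -VM $_ -DiskType RawPhysical
    if ($rdms) {
        [pscustomobject]@{
            Name       = $_.Name
            FolderPath = Get-VMFolderPath $_        # helper from this blog
            RDMCount   = $rdms.Count
            Devices    = ($rdms | Select-Object -ExpandProperty ScsiCanonicalName) -join ', '
        }
    }
}
```

The resulting objects can be piped to Export-Csv to hand the list over to the VMware team.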
I started a Backup Exec 2012 project for a larger company some months ago. One of their demands was a highly available solution with enough scalability for the next two years. The first thing that popped into my mind was an implementation of the Backup Exec Central Administration Server Option (CASO) with catalog replication. That should do the trick!
What are catalogs? The catalog and the database are required to perform restores of the data that has been written to tape. When you lose the catalog, you are unable to select data to be restored. Compare it with a book without an index: you know it’s somewhere in the book, but you can’t tell whether it’s in chapter 1 or in chapter 25. Of course you have the ability to rebuild your catalog by performing a “catalog media” job. Needless to say, this is a time-consuming job!
During the implementation, it came to my attention that this was not a redundant setup. I started a search with the best consultant in the world (Google) and was quite disappointed with the results. Apparently nobody had posed this question before or written anything about it. I saw this as an opportunity and wanted to clear this one up.
Some time ago, someone contacted me regarding configuration data he lost during cluster disk migrations. Apparently he performed a wrong action whereby all the share information was lost. It should be quite easy to recreate the shares, but the customer had no documentation about them whatsoever.
So there was only one solution… trying to restore the system state of the server to another location. During this process we ran into the issue that the share information of a cluster is not stored in the “normal” location.
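To illustrate the difference: on a standalone server, share definitions live under the LanmanServer registry key, while shares owned by a failover cluster are kept in the cluster hive (loaded from the CLUSDB file), under each File Share resource's Parameters key. A minimal sketch, assuming it runs on a cluster node with the cluster service online:

```powershell
# Standalone servers store share definitions here:
Get-Item 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares'

# Cluster-managed shares live in the cluster hive instead, under each
# File Share resource's Parameters key:
Get-ChildItem 'HKLM:\Cluster\Resources' |
    ForEach-Object {
        Get-ItemProperty -Path "$($_.PSPath)\Parameters" -ErrorAction SilentlyContinue
    } |
    Where-Object { $_.ShareName } |
    Select-Object ShareName, Path
```

This is why a plain system state restore of the node does not bring the cluster share definitions back to the expected place.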