Best Practices - Solaris File System
Eliminating Backup Failures
You can use filters to exclude items that consistently fail and are not integral to the operation of the system or applications. Some items fail because they are locked by the operating system or an application and cannot be opened at the time of the data protection operation; this often occurs with certain system-related files and database application files. Keep in mind that after you add the failed files to the filter, you must run a full backup to remove them from subsequent backups.
Avoiding Duplicate Content Backups on Clustered File Systems
Note: When backups are run from multiple nodes of a clustered file system, the same content is backed up multiple times. To avoid such duplicate backups, run backups from only one node. If the other nodes are also configured to run backups, make sure that the clustered file system mount point is added to the file system exclusion list of each physical machine's subclients.
Reconfiguring Default Subclient Content
We recommend that you do not reconfigure the content of the default subclient, because doing so disables its ability to act as a catch-all entity for client data. As a result, some data might not be backed up or scanned.
Folder Rename Operation Between Backup Jobs
Between backup jobs, if a folder is deleted and another folder is renamed to the same name, and the file modified time (mtime) and changed time (ctime) did not change, then the next incremental or differential backup job does not back up the contents of the renamed folder. To back up the renamed folder, we recommend that you enable the Optimized Scan option or the Reconcile Backup option in the subclient properties.
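To see why a scan can miss this case, the following generic shell sketch (not a Commvault command; the directory and file names are illustrative) shows that renaming a directory leaves the mtime of the files inside it unchanged, so an mtime-based scan has nothing new to pick up:

```shell
# Generic demonstration: renaming a directory does not change the
# modification time (mtime) of the files inside it, so a scan that
# relies on mtime alone will not flag the renamed folder's contents.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/reports_old"
echo "payload" > "$tmpdir/reports_old/file.txt"
mtime_before=$(ls -l "$tmpdir/reports_old/file.txt" | awk '{print $6, $7, $8}')
sleep 1
mv "$tmpdir/reports_old" "$tmpdir/reports"    # folder rename between backups
mtime_after=$(ls -l "$tmpdir/reports/file.txt" | awk '{print $6, $7, $8}')
if [ "$mtime_before" = "$mtime_after" ]; then
    result="mtime unchanged"
else
    result="mtime changed"
fi
echo "$result"
rm -rf "$tmpdir"
```

Because the file itself was never modified, only its parent directory's name changed, an incremental scan keyed on mtime/ctime sees nothing to back up.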
Restore by Job
If you do not want to restore operating system files or directories, avoid running restores by job for jobs associated with the default backup set. Otherwise, the entire backed-up content of the client is restored, and the client where you are restoring might run out of space.
Resource Control Groups for Commvault
Resource Control Groups for Commvault is a mechanism to control CPU and other resources for Commvault processes so that they operate within the configured constraints. For details, see Resource Control Groups for Commvault.
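The underlying Solaris facility is the projects and resource-controls framework. As a non-authoritative sketch, a CPU-capped project might look like the following; the project name `commvault`, the 50% cap, and the job path are illustrative assumptions, not values documented by Commvault:

```shell
# Solaris-only sketch (will not run on other platforms). The project
# name "commvault" and the 50% CPU cap are illustrative assumptions.
projadd -K "project.cpu-cap=(privileged,50,deny)" commvault

# Run a command inside the capped project (hypothetical job path):
newtask -p commvault /opt/jobs/long_running_job

# Inspect the resource controls in effect for the project:
prctl -i project commvault
```

Refer to the linked Resource Control Groups for Commvault page for the supported configuration.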
Optimizing the CPU Usage on Production Servers
In virtualized environments (for example, LPAR, WPAR, or Solaris Zones) where dedicated CPUs are not allocated, backup jobs can cause high CPU usage on production servers. The following measures can help optimize CPU usage:
- Set an appropriate priority for backup jobs by using the dNICEVALUE registry key, which restricts a backup job from consuming all the available CPU resources. By default, Commvault processes run at the default priority on client computers: if CPU cycles are available, Commvault processes use them for backup and restore operations, but they do not preempt other application or system processes. If you want to give higher priority to other application or system processes that run at the default priority, modify the priority of the Commvault processes as follows:
- From the CommCell Browser, navigate to Client Computers.
- Right-click the <Client> and click Properties.
- Click Advanced, and then click the Additional Settings tab.
- Click Add.
- In the Name field, type dNICEVALUE.
The Category and Type fields are populated automatically.
- In the Value field, type the appropriate value.
For example, 15.
- Click OK.
Note: Restart the services on the client after setting this key.
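The dNICEVALUE key maps onto the standard Unix nice mechanism, where a higher nice value means a lower scheduling priority. A minimal generic illustration of what a value such as 15 means (this is not a Commvault command):

```shell
# Generic Unix illustration of nice values (higher value = lower
# scheduling priority). Not a Commvault command; it only shows the
# effect of running a process 15 niceness steps lower in priority.
default_nice=$(nice)              # niceness of the current shell, often 0
lowered_nice=$(nice -n 15 nice)   # run `nice` itself at +15 niceness
echo "default=$default_nice lowered=$lowered_nice"
```

A process at the higher niceness yields the CPU to processes at the default priority whenever they are runnable, which is the behavior the dNICEVALUE setting is intended to achieve for backup jobs.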
- Client-side compression, encryption, and deduplication operations also consume considerable CPU resources. Moving these operations from the client to the MediaAgent helps reduce the additional CPU load on the client.
- Using a proxy server for IntelliSnap operations moves the CPU load to the proxy, further decreasing the overhead on the production servers.
Last modified: 12/8/2017 10:21:11 AM