Verification of Deduplicated Data

Deduplicated Data Verification cross-verifies the unique data blocks on disk with the information contained in the DDB and the CommServe database. Verifying deduplicated data ensures that all jobs whose unique data blocks are written to the storage media are valid for restore or Auxiliary Copy operations.

Jobs containing invalid data blocks are marked with the Failed status. The invalid unique data blocks are not referenced by subsequent jobs. As a result, new baseline data for those blocks is written to the storage media.
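
The cross-check can be pictured as follows. The sketch below is a conceptual illustration only, written in Python with hypothetical names (DdbRecord, verify_blocks); it is not Commvault's implementation or API. It shows the idea of recomputing each unique block's on-disk signature, comparing it with the signature recorded in the DDB, and collecting the referencing jobs that must be marked Failed when the two differ.

    import hashlib
    from dataclasses import dataclass, field
    from typing import List, Set

    @dataclass
    class DdbRecord:
        # One unique data block as tracked by the DDB (hypothetical structure).
        signature: str   # block signature recorded in the DDB
        disk_path: str   # location of the unique block on the disk library
        job_ids: List[int] = field(default_factory=list)  # referencing jobs (CommServe database)

    def verify_blocks(ddb_records: List[DdbRecord]) -> Set[int]:
        """Return the IDs of jobs that reference at least one invalid block."""
        failed_jobs: Set[int] = set()
        for record in ddb_records:
            try:
                with open(record.disk_path, "rb") as block_file:
                    on_disk_signature = hashlib.sha256(block_file.read()).hexdigest()
            except OSError:
                on_disk_signature = None  # block is missing or unreadable
            if on_disk_signature != record.signature:
                # Invalid block: every job that references it is marked Failed,
                # and later jobs write a new baseline copy instead of referencing it.
                failed_jobs.update(record.job_ids)
        return failed_jobs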

For storage policies with a silo copy, if the backup data volumes were moved to silo storage, the necessary volumes are automatically restored to disk during the data verification job.

Tip

By default, the data verification schedule policy that the system creates does not include data mover MediaAgents that use a cloud storage product, because read operations from cloud storage are slow compared to reads from low-latency media. If necessary, you can run data verification on the cloud storage manually.

To run data verification on data that is stored on archive cloud storage, first recall the data to the main cloud storage location. Then you can run the data verification job on the recalled data.

By default, deduplicated data verification is automatically associated with the System Created DDB Verification schedule policy. This schedule policy runs an incremental deduplicated data verification job every day at 11:00 AM on all active DDBs in the CommCell that have the Verification of Existing Jobs on Disk and Deduplication Database check box selected. You can also run a data verification job manually at any time.

Note

  • By default, a deduplicated data verification job uses 20 streams during the Validate Data phase and 50 streams during the Verify Data phase. You can modify the default values by using the Maximum number of threads to be used during Validate Deduplicated Data phase of Data Verification Job parameters in the Media Management Configuration dialog box. For instructions, see Media Management Configuration.

  • The system physically prunes aged data during phase 1 of the DDB data verification operation. You can also manually delete retained jobs. For more information on manually deleting a job, see Delete a job from the Copy.

  • We recommend that you set the Network File System (NFS) mount option local_lock to none. If you set local_lock to all, locking issues occur during the data verification job. For an example mount command, see the sketch after this list.
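
A minimal example of the recommended setting, assuming a Linux MediaAgent and hypothetical server and mount-point names (fileserver:/export/dedupe_lib, /mnt/dedupe_lib); adapt the export path and remaining options to your environment:

    # Hypothetical NFS mount for a deduplicated disk library.
    # The relevant option is local_lock=none; the other options are examples only.
    mount -t nfs -o rw,hard,local_lock=none fileserver:/export/dedupe_lib /mnt/dedupe_lib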
