Changes in Version 11

If you upgrade from a previous version, you will find that some features and options have been updated and enhanced. This page identifies the changes from version 10 to version 11 SP15, and provides brief descriptions. To read more about the features in this version, click the link listed in the Learn more column.

To see the changes in version 11 SP15 and later, under What's New, go to the Changes page for each service pack.

Upgrading from Version 10

If you are upgrading from Version 10, also review the following pages:

Upgrading from Version 9

If you are upgrading from version 9, also review the Changes page in the Version 10 documentation.

Deployment

Introduced in

Product

Change

Learn more

SP14

macOS Mojave

Commvault now supports 64-bit processes on the macOS Mojave operating system.

Previously, the software only supported 32-bit processes on this operating system.

macOS Mojave Version 10.14: Considerations

SP14

Apache Tomcat Server

When you install a Commvault software package that automatically installs the Apache Tomcat server, Tomcat version 8.5.33 is installed.

Previously, version 8.5.5, 8.5.9, or 8.5.15 was installed.

SP14

Java

Commvault software uses Java SE Runtime Environment (JRE) 10.0.2.

SP13

Downloading software using the CommCell Console

The following behavior changes are available when you download software using the CommCell Console:

  • You can no longer download V10 service packs and updates.

  • Starting in Service Pack 13, the highest service pack version present in the cache is used for pushing updates. If you want to use another service pack version for pushing updates, then you must delete the contents of the cache and download the desired service pack version.

Downloading software using the CommCell Console

SP13

Outlook Add-In

Microsoft .NET Framework 4.5 is installed with all Outlook Add-In packages.

Previously, .NET Framework 4 was installed.

Outlook Add-In Packages

SP13

Service pack Installations

New Hadoop installation package

You must install the Hadoop installation package on all nodes that you want to use for Hadoop backup and restore operations.

Previously, you installed the File System Core and File System Packages on data access nodes.

Preinstallation Checklist for Hadoop on Linux

SP12

Decoupled Client Registrations

Previously, you could register a decoupled client installed with a lower service pack than the CommServe software. Starting in Service Pack 12, registering a decoupled client that is installed with a lower service pack than the CommServe software is not supported.

Client Registrations

SP12

Service pack installation

Exchange agent installation packages are merged into one installation package

Previously, you installed each Exchange agent separately and configured them during the installation process. Starting with Service Pack 12, you install only one Exchange package on a client, and then add and configure only the agents that you want to use. In addition to the Exchange agents, the Exchange installation package includes the ContentStore Mail Server.

Preinstallation Checklist for Exchange Agents

SP12

systemd Linux support

Commvault software utilizes systemd for initialization

Commvault software uses systemd for initialization for the following distributions (a minimal check for systemctl is sketched after this list):

  • Red Hat Enterprise Linux and CentOS if systemctl is installed.

  • SuSE and OpenSuSE if systemctl is installed.

  • Debian 8.8 and higher.

  • Ubuntu 16.04 and higher.
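
The "if systemctl is installed" condition above can be verified before an installation. The following is a minimal sketch (an illustrative check, not Commvault tooling) that tests whether the systemctl binary is on the PATH of a Linux host:

```python
# Minimal sketch: check whether systemctl is available, which is the condition
# under which Commvault uses systemd for initialization on Red Hat Enterprise
# Linux/CentOS and SuSE/OpenSuSE. Illustrative only, not part of the software.
import shutil

if shutil.which("systemctl"):
    print("systemctl found: systemd is used for service initialization")
else:
    print("systemctl not found: traditional init scripts are used")
```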

N/A

SP11

Service pack installations

Client Readiness Report

Microsoft Visual C++ Redistributable 2017 (vcredist2017.exe) is automatically installed. If a computer does not have the latest Windows patches, the installation of Microsoft Visual C++ Redistributable 2017 will fail.

The following updates must be installed (a minimal check is sketched after this list).

  • KB2919355

  • KB2939087

  • KB2975061

  • KB2999226

    Microsoft Windows 10 and Microsoft Windows Server 2016 come with the Universal C Runtime already installed, so only the basic VC2017/2013/2010 redistributable installation may be required.
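
If you want to verify a client before pushing SP11, the following is a minimal sketch (an illustrative check, not the Client Readiness report) that looks for the updates listed above in the output of the Windows wmic qfe command:

```python
# Minimal sketch: check whether the Windows updates listed above are installed
# on a client, using "wmic qfe get HotFixID". Run this on the client computer.
import subprocess

required = {"KB2919355", "KB2939087", "KB2975061", "KB2999226"}

output = subprocess.run(
    ["wmic", "qfe", "get", "HotFixID"],
    capture_output=True, text=True, check=True,
).stdout

installed = {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}
missing = required - installed

print("Missing updates:", ", ".join(sorted(missing)) if missing else "none")
```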

The Client Readiness report identifies clients that need to be patched before moving to SP11.

Commvault Store: SP11 Client Readiness report

SP10

Service pack installations

Installation Media

Shortcuts and services are reinstalled as a part of service pack installations.

The display names and icons for the shortcuts and services are set according to the brand information that is available in the installation media.

To use the existing brand images, you must copy the images to the installation media in the following folder:

Installation Media Folder/Common/OEM/ID/install_images

where Installation Media Folder is the location that you specified during the creation of the installation package or the location of the Software Cache directory.
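
For example, the following is a minimal sketch of copying existing brand images into that folder before a service pack installation. The media root, the OEM ID folder name, and the source image directory are placeholders (assumptions), so adjust them for your environment:

```python
# Minimal sketch: copy existing brand images into the installation media so that
# shortcuts and services keep the current branding after the service pack install.
import shutil
from pathlib import Path

media_root = Path(r"D:\InstallMedia")              # installation media or software cache location (assumption)
oem_id = "1"                                       # OEM ID folder name (assumption)
source_images = Path(r"C:\Brand\install_images")   # location of the existing brand images (assumption)

target = media_root / "Common" / "OEM" / oem_id / "install_images"
target.mkdir(parents=True, exist_ok=True)

for image in source_images.iterdir():
    if image.is_file():
        shutil.copy2(image, target / image.name)   # copy each brand image into the media folder
```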

SP9 to SP13

Apache Tomcat Server

When you install a Commvault software package that automatically installs the Apache Tomcat server, Tomcat version 8.5.16 is installed.

Previously, version 8.5.5 or 8.5.9 was installed.

SP9

MongoDB and Message Queue installations

MongoDB and Message Queue (Apache ActiveMQ) packages are now automatically installed with the Web Server software.

Previously, these packages were installed on the CommServe computer.

SP7

Command Center

The Command Center is now included with the Web Console package. You must install the Web Console to have access to the Command Center.

Previously, the Command Center was a separate package that you could select from the installation wizard.

SP7

Licensing changes for pseudoclients

The following pseudoclients now consume the license that corresponds to their agent:

  • Exchange Database Availability Group (DAG) clients

  • SharePoint Farm clients

  • SQL Always On clients

  • Pseudoclients for DB2 MultiNode, Oracle RAC, SAP HANA, and Distributed Applications (Greenplum, Hadoop, and GPFS)

  • Virtualization clients

    Previously, the pseudoclients that are listed above did not consume a license.

Notes

  • After you upgrade the CommServe computer to SP7 (or a later version), proxy clients configured in a virtual server environment will not consume a license. Any license used by proxy clients will be released.

  • The Commvault software releases a license from a client that is not performing backups and assigns the license to the pseudoclient.

SP7

Installing clients

When an administrator installs a client, the administrator is no longer assigned as the client owner. If the administrator is a tenant (limited to a subset of clients), you can create a smart client computer group so that the tenant administrator retains access to the clients he or she installs. Create the client computer group with the following properties:

  • Set Owner to the master user group.

  • Add the Company Installed Client Associations automatic association rule and set it to the company the tenant administrator belongs to.

  • On the Security tab, assign the tenant administrators the appropriate roles. Do not include a role with the Change Client Associations permission.

CommCell Management

Introduced in

Product

What's Changed

Learn more

SP14

Command Center

We changed the name of Admin Console to Command Center.

Commvault Data Protection Solutions

SP13

Log Monitoring

A user can view the logs of a client for which the user has:

  • View or Master role assigned for the client.

  • Execute Monitoring Policy permission for the monitoring policy associated with the client.

Log Monitoring

SP13

Scheduling

Schedule Policy

In a multiple service provider CommCell environment, when a tenant administrator creates a schedule or schedule policy, other members of the tenant administrator group also have permission to view and modify the schedule or schedule policy.

SP13

Additional Settings

When adding an additional setting, the comments field is required.

Managing Additional Settings

SP12

Alerts

A <JOB OPERATION> token is available for data protection and data recovery alerts. The value for the token is the type of job, such as Snap Backup or Backup Copy.

Available Alert Tokens

SP12

Scheduling

Schedule Policy

The user interface labels have changed for Automatic Database Log Backup.

SP12

Services

A new service, called GxFWD (Commvault Network Daemon), is now responsible for tunneling Commvault connections across firewalls.

Description of Services

SP10

Automatic Schedules

For automatic schedules, the option to detect modified files is turned on by default, so the file management option is no longer available.

Automatic Schedules

SP10

Security

If you have AD domains registered with your CommCell environment, you can configure a default domain so that AD users can log on without typing the domain name as a prefix.

SP9

Privacy

The Privacy feature is available for most agents. It is not available for UNIX database agents.

Privacy for Owners

SP8

Send Log Files

FTP Upload Option Removed

The default method for sending log files is now HTTPS upload.

The options to upload log files to an FTP location and to use the last successful upload method were removed. Because the FTP option was removed, the KeepAliveTimeout additional setting is no longer valid. The option to save log files to a local or network location remains.

Previously, you could send log files to an FTP location or use the last successful upload method.

Sending Log Files

SP7

Command Center and Web Console

Push Notifications

The Command Center and Web Console now support push notifications for jobs, events, and alerts through the installation of Apache ActiveMQ. Apache ActiveMQ is a third-party software that manages message queues. Push notifications are scalable and less resource intensive than pull notifications.

SP7

Alerts

User and User Group Alerts

New alert types are available: Configuration - Users and Configuration - User Group. Use these alerts to monitor changes made to user properties and user group properties.

Predefined Alert Criteria

SP7

Client Computer Groups

Permissions

There are new permissions for client computer groups.

User Security Permissions and Permitted Actions by Feature: Client Computer Group

Security Associations

When you upgrade, new security associations using the predefined Client Group Creator role and new client computer group permissions are added to users:

  • Creator of the client computer group: the Client Group Creator role is added at the client computer group level.

  • Users with the Add, delete and modify a user or Add, delete and modify a user group permission on any client computer group: the Client Group Creator role is added at the client computer group level.

  • Users with Administrative Management at the client computer group level: the Change Client Associations and Delete Client Group permissions are added at the client computer group level.

Smart Client Computer Group Rule Rename

The Associated Client Group smart client computer group rule is now renamed to Client Group.

Rules Available for Smart Client Computer Groups

SP7

Job Management: Commit

The Commit option has been combined with the Kill option. When jobs are terminated using the Kill option, eligible backup jobs from supported agents are committed. The Commit option is no longer available in the right-click job menu.

Killing a Job in the Job Controller

CommCell Disaster Recovery

Introduced in

Product

Change

Learn more

SP11

CommServe Recovery

Copy Precedence tab removed for Restore by Job ID

For CommServe disaster recovery, the Copy Precedence tab is removed from the Advanced Restore dialog box for the Restore by Job ID operation.

None

SP10

Disaster Recovery (DR) Backup

DR Backup Job Management

To ensure successful completion of DR Backup jobs, 2 Full and 1 Differential DR Backup jobs can now remain active at any given time. Subsequent Full or Differential DR Backup jobs will fail to start with a message stating that a 'job is already running'.

Previously, multiple DR backup jobs were queued, resulting in no job progress due to circular dependencies.

DR Backup Job Management

SP8

CommServe Failover for CommServe Recovery

CommServe Failover package installation

The installer now provides the option to install the CommServe Failover package, which automatically performs the following tasks:

  • Creates a new instance

  • Installs both the SQL Server agent and High Availability Computing package in a new instance

    Previously, a new instance had to be manually created, and the SQL Server Agent and the CommServe Failover package (now called the High Availability Computing package) had to be installed in the new instance.

Installing the Production CommServe Host

and

Installing the Standby CommServe Host

Storage Management

Introduced in

Product

Change

Learn more

SP14

Disk Library

Credential Manager is now used to add or modify credentials for disk mount paths, to store, share, and update account credentials for shared resources.

Previously, credentials had to be added individually for each mount path.

Disk Libraries - Getting Started.

SP14

Cloud Storage

HGST Storage has been renamed as Western Digital ActiveScale.

Supported Cloud Storage Products

SP13

Deduplication Database

The default value for Days to keep DDB on source location after successful move partition job is changed from 1 to 0.

Media Management Configuration: Deduplication

SP13

Secondary Copy (Synchronous Copy)

The option Space optimized auxcopy is changed to Space optimized Auxiliary Copy.

Secondary Copy (Synchronous Copy)

SP13

Cloud Storage

Volume Size Updates for Cloud Storage Mount Paths

The Process volume size updates for cloud mount paths option is enabled by default on new CommServe server installations.

Note

This operation involves additional queries to the cloud storage library, which may result in additional costs incurred on cloud storage.

This change applies to the following:

  • New CommServe server installations in SP13 (or later)

  • CommServe servers upgraded from Version 10 to Version 11 SP13 (or later)

    This change does not affect existing cloud storage mount paths in V11 CommServe servers that are upgraded from a previous service pack to SP13.

Media Management Configuration: Service Configuration

(Locate Process volume size updates for cloud mount paths)

SP12

Data Encryption

The configuration Disallow changes to encryption settings on storage policy is removed from the Service Configuration tab in the Media Management Configuration dialog box. You can configure the Prevent changes to copy software encryption settings setting at the global level.

Media Management Configuration: Service Configuration

Configuring Global Level Software Encryption Settings

SP13

MediaAgent Installation

MediaAgent Core Package is Renamed

The MediaAgent Core package is renamed to Storage Accelerator. This package is displayed in the Version tab of the MediaAgent Properties and the Client Computer Properties dialog box when you install the MediaAgent package.

Previously, the MediaAgent Core package was displayed in the Version tab of the MediaAgent Properties and the Client Computer Properties dialog box.

SP12

Cloud Storage - Amazon S3

Input Format for Service Host Name for Amazon S3

The input format for Service Host Name for Amazon S3 is now changed to s3.[region].amazonaws.com.

Previously, the input format for Service Host Name was s3.amazonaws.com.

Online Help - Add / Edit Cloud Storage (General)

(Locate Amazon S3)

SP12

Cloud Storage - Amazon Glacier

Input Format for Service Host Name for Amazon Glacier

The input format for Service Host Name for Amazon Glacier is now changed to glacier.[region].amazonaws.com.

Previously, the input format for Service Host Name was glacier.amazonaws.com.

Online Help - Add / Edit Cloud Storage (General)

(Locate Amazon Glacier)
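
As an illustration of the new formats in the two entries above, the following minimal sketch builds the region-specific service host names. The region value is only an example (assumption):

```python
# Minimal sketch: construct service host names in the new region-specific format.
region = "us-east-1"  # example region (assumption)

s3_service_host = f"s3.{region}.amazonaws.com"            # new Amazon S3 input format
glacier_service_host = f"glacier.{region}.amazonaws.com"  # new Amazon Glacier input format

print(s3_service_host)       # s3.us-east-1.amazonaws.com
print(glacier_service_host)  # glacier.us-east-1.amazonaws.com
```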

SP12

Media Management

Do not create dependent storage policy when creating a disk/cloud storage pool or a global deduplication policy

The Do not create dependent storage policy when creating a disk/cloud storage pool or a global deduplication policy option in Media Management: Service Configuration is now enabled by default, which disables the automatic creation of a storage policy associated with the Global Deduplication Policy.

Previously, this option was disabled by default.

Media Management Configuration: Service Configuration

SP11

Indexing Version 2 index backup

Creation of one index server client for each storage policy

The system now auto-creates one index server client for each storage policy associated with an Indexing Version 2 client. This change provides optimal recoverability of data in disaster recovery situations.

Index Backup

SP11

Scaleout Storage Pool

Default Chunk Size Set to 8 GB

The default chunk size for data paths in storage policy copies associated with Scaleout Storage Pools is set to 8 GB.

Previously, the default chunk size was 4 GB.

Note

This change applies to storage pools that are created after you install SP11 and does not affect existing pools that you created before SP11.

CommCell Performance Tuning - Controlling the Chunk Size for Data Path

SP10

Cloud storage library

Cloud Storage Library Names Renamed

Ali Cloud Object Storage Service has been renamed as Alibaba Cloud Object Storage Service.

Oracle Cloud Storage has been renamed as Oracle Cloud Infrastructure Object Storage Classic.

Oracle Cloud Storage Archive Service has been renamed as Oracle Cloud Infrastructure Archive Storage Classic.

Oracle Cloud Infrastructure Object Storage Service has been renamed as Oracle Cloud Infrastructure Object Storage (S3 Compatible).

Online Help - Add / Edit Cloud Storage (General)

SP10

Cloud storage library

Modifications to Chunk Size for Cloud Library Data Path

For storage policies associated with cloud library data paths, if the chunk size is set to less than 1 GB, then the chunk size will be automatically set to 4 GB and the Use Application Setting option available on the Data Path Properties window is enabled.

Data Path Properties - Online Help

SP9

Auxiliary copy

Preempting Auxiliary Copy Jobs

You can control the preemption of auxiliary copy jobs (by another auxiliary copy job) using the Allow preemption between auxiliary copy jobs setting in the Media Management Configuration: Auxiliary Copy parameters.

Previously, the RMAuxCopyInterruptAuxCopy Additional Setting was used to control preemption.

FAQ - Can I preempt an auxiliary copy operation with another auxiliary copy operation?

Media Management Configuration: Auxiliary Copy

SP9

Cloud storage library

Change in Cloud Storage Archive Recall Workflow

The Cloud storage archive recall workflow is now enhanced to recall precise data when deduplication is enabled in the storage policy copy.

Previously, the workflow restored the entire volume when deduplication was enabled in the storage policy copy.

Restoring Data from Archive Cloud Storage Using a Workflow

SP9

Media Management

Skip low watermark alerts for mount paths disabled for write

The Skip low watermark alerts for write disabled mount paths option in Media Management: Service Configuration is now enabled by default.

Previously, this option was disabled by default.

Media Management Configuration: Service Configuration

SP7

MediaAgent

MediaAgent Properties from Client Node

If the client and MediaAgent are installed on the same computer, then you can now access the MediaAgent properties by selecting Client Computers > Client > View > MediaAgent Properties.

None

SP7

Media Management

Data Aging

The Delete deconfigured clients that have no protected data parameter now checks that the software on the client is uninstalled.

Media Management Configuration: Data Aging

SP7

Media Management

Default Settings for Retaining Scratch Media

By default, scratch media is retained when a tape library is deconfigured. (By default, the value of the Retain scratch media information when deconfiguring library parameter is set to 1.)

Previously, scratch media was recycled when a tape library was deconfigured.

Retain scratch media information when deconfiguring library

SP7

Protecting Mount Paths from Ransomware

Secure Disk Storage has been renamed to Ransomware protection

The option to enable ransomware protection on MediaAgents, previously named Secure Disk Storage, is now named Ransomware protection on the MediaAgent Properties - Advanced tab.

Enabling Ransomware Protection on MediaAgents

Data Management

Introduced in

Feature

Change

Learn more

SP14

Deduplication

Signatures Retention on Client Side Disk Cache

If the client side disk cache is enabled, then by default, the signatures are retained for 40 days on the client side disk cache location.

Previously, the signatures were retained for 14 days on the client side disk cache location.

Deduplication - Advanced Client Properties

SP14

Deduplication

Deduplicated Database (DDB) Reconstruction

If a DDB reconstruction job fails and has to be restarted, it restarts from the point of failure.

Previously, if a DDB reconstruction job failed, it restarted from the beginning.

Manually Recovering the Deduplication Database

SP14

Storage Policy Copy

Optimizing Deduplication Percentage on Storage Policy Copy

The Space Optimized Auxiliary Copy option is enabled by default on a storage policy copy with deduplication to achieve an optimized deduplication percentage during auxiliary copy operations.

Previously, this option had to be enabled manually on a storage policy copy with deduplication.

SP14

Storage Policy

Change in Default Block Size

The following changes are available for the block size of the storage policy and the global deduplication storage policy:

  • A newly created deduplicated storage policy or global deduplication policy with a cloud library as the data path will now have 512 KB as the default block size. This enhancement is to improve the read performance from the cloud storage.

    Previously, the default block size of a deduplicated storage policy or a global deduplicated policy that is associated with a cloud library was 128 KB.

  • A newly created secondary copy using a global deduplication policy (irrespective of the primary copy block size) with a cloud library as the data path will now have 512 KB as the default block size.

  • A newly-created deduplicated secondary copy will honor the block size that is set on the storage policy properties.

    Note

    It is recommended to set the Network Optimized Copy option to increase auxiliary copy throughput.

  • For a newly created deduplicated storage policy or a global deduplication policy with a disk library as the data path and associated secondary copies with cloud library as a data path, during the auxiliary copy operation, the higher block size is honored (regardless of where it is set).

    For example, if the primary copy that has a disk library as the data path has 128 KB block size, and if the associated secondary policy copy has a cloud library as the data path with 512 KB block size, then the block size of the secondary copy is honored during the auxiliary copy operation.

None

SP13

Deduplication

In the Media Management Configuration dialog box, on the Resource Manager tab, the parameter Maximum number of parallel data transfer operations for deduplication database is renamed to Maximum number of parallel data transfer operations for deduplication engine.

Media Management Configuration: Resource Manager

SP13

Auxiliary Copy

Advanced Auxiliary Copy Option is Renamed

In the Advanced Auxiliary Copy Job Options dialog box, on the Scalable Resource Allocation tab, the Total Jobs to Process option is renamed to Max Jobs to Process.

SP13

Data Aging for Deduplication

Optimized Pruning of Deduplicated Data

To optimize the deduplicated data pruning, you can configure the following additional settings for disk and cloud library respectively:

  • DedupPrunerThreadPoolSizeDisk

  • DedupPrunerThreadPoolSizeCloud

    Previously, the following additional settings were used to optimize the deduplicated data pruning on disk and cloud library respectively:

  • DedupMaxDiskZerorefPrunerThreadsForStore

  • DedupMaxZerorefPrunerThreadsForStore

Data Aging for Deduplication

SP13

Deduplication Database Verification

Support for Incremental Data Verification on Deduplicated Data

The incremental data verification operation is now supported for all the options of the DDB data verification operation.

Previously, the incremental data verification job was supported only for the following options:

  • Verification of Deduplication Database

  • Verification of Existing Jobs on Disk and Deduplication Database

Performing a Data Verification Operation on Deduplicated Data

SP12

Deduplication

DDB Backups fail if Snapshot Creation Fails

If VSS (Windows) or LVM (Linux) snapshot fails, then by default, the DDB backup fails.

Previously, if VSS (Windows) or LVM (Linux) snapshot failed, then by default, the DDB backup used the live volume to back up the DDB.

Deduplication Database Backup

SP11

Deduplication

Time Changed for DDB Backups

The DDB Backup is automatically associated with a System Created DDB Backup schedule policy.

This schedule policy runs a full backup job at 04:00 PM every 24 hours. If you manually set some other time window for DDB backups, then your manually configured time window will be honored even after a service pack upgrade.

Previously, the schedule policy ran a full backup job every day at 12:00 AM.

Deduplication Database Backup

SP11

Deduplication

Physical Pruning is Disabled During the Add Record Phase of the Full Reconstruction Job

During the Add Record phase of a full reconstruction of the deduplicated database (DDB), the physical pruning of the data is disabled so that the Add Record phase does not fail.

Previously, the physical pruning of the data was not disabled during the Add Record phase of a full reconstruction of the DDB.

None

SP11

Subclient Content

The following changes are made to the content library of a file system subclient.

New content categories:

  • Virtual Machine

  • Scripts

  • Temporary files

  • Email Files

    Note

    If the Archives category is present in the content library, then the Email Files category is automatically added when you upgrade to SP11 or later service packs.

    New file extensions:

  • Archives - *.7z, *.gz, *.tar

  • Audio - *.acm, *.avi

  • Executable - *.dll, *.bin, *.dmg, *.dylib, *.ipa, *.iso, *.lib, *.msi, *.pkg, *.rpm, *.so

  • Image - *.cr2, *.dvt, *.dwg, *.ithmb, *.shs, *.vsdx

  • Office - *.acl, *.one, *.ops, *.pgs, *.pst, *.pub, *.rdf, *.tsv, *.txt, *.wbk, *.xml

  • System - *.bkf, *.dat, *.dbk, *.gho, *.ghs, *.par, *.iff, *.inf, *.pqi, *.prn, *.qic, *.rdp, *.rom, *.v2i

  • Video - *.dtv, *.hds, *.ogm, *.srt, *.vob

    File extensions that were removed or moved:

  • *.pst moved from Archives to Office category

  • *.iff moved from Audio to System category

  • *.bat, *.cgi, *.vb, *.vbs are moved from Executables to Scripts category

  • *.avi moved from Video to Audio category

  • *.rm is removed from Video category

  • *.mp4 is removed from Audio category

Supported File Extensions in Content Library

SP10

Storage Policy

Change in Default Block Size

The following changes are available for the block size of the storage policy and the global deduplication storage policy:

  • A newly created deduplicated storage policy or global deduplication policy with a cloud library as the data path will now have 512 KB as the default block size. This enhancement is to improve the read performance from the cloud storage.

    Previously, the default block size of a deduplicated storage policy or a global deduplicated policy that is associated with a cloud library was 128 KB.

  • A newly created secondary copy using a global deduplication policy (irrespective of the primary copy block size) with a cloud library as the data path will now have 512 KB as the default block size. When the block sizes are different between the primary copy and the secondary copy, then the Disk Read Optimized Copy option will be processed as the Network Optimized Copy option.

  • A newly-created deduplicated secondary copy will honor the block size that is set on the storage policy properties.

  • For a newly created deduplicated storage policy or a global deduplication policy with a disk library as the data path and associated secondary copies with cloud library as a data path, during the auxiliary copy operation, the higher block size is honored (regardless of where it is set).

    For example, if the primary copy that has a disk library as the data path has 128 KB block size, and if the associated secondary policy copy has a cloud library as the data path with 512 KB block size, then the block size of the secondary copy is honored during the auxiliary copy operation.

None

SP9

Compliance Search

New Legal Holds Cannot be Created

If you did not create a legal hold before SP9, then you cannot create a legal hold.

Instead, use the Case Manager feature to hold data for legal and compliance purposes.

Case Manager Overview

SP12

Search Engine and Content Indexing

Preview Generation Is Off By Default for New Search Engine Installations

For new installations of the Search Engine, previews are not generated during content indexing jobs by default. Instead, previews are generated on-demand during end-user search and Compliance Search operations.

To export items from Compliance Search in HTML format, the items must have been content indexed by a Search Engine with preview generation enabled.

You can configure how previews are generated from the CommCell Console.

Configuring Preview Settings for Search Engines

SP7

Data Aging

You can configure a parameter to age off the incremental backups that have no data when the next data aging job runs.

Previously, if a full backup existed on multiple storage policy copies, then the incremental backup jobs that have no data were not aged until the full backup was aged from all the copies.

Honor copy retention for jobs with no data

SP7

Storage Policy

Basic Retention Rules for Data/Compliance Archiver Data

The option to set either days-based retention or infinite retention for archiving agents and compliance agents from the storage policy copy level is not available anymore.

However, you can set retention for archiver agents from the archiver subclient's Advanced Properties dialog box.

Retention Options for Archiving

Backup Agents

Introduced In

Product

Change

Learn more

SP14

Network Shares

Upgrade Data Access Nodes to Service Pack 14 When You Upgrade the CommServe Computer

For new subclients, after you upgrade your CommServe computer to Service Pack 14, you must upgrade the Data Access Nodes to the same Service Pack version. For older subclients, the data access nodes and CommServe computer can be at different service levels.

Previously, the Data Access Nodes and CommServe computer could be at different service pack versions.

Backup Error: All device streams configured to this Storage Policy including their multiplexing factor are in use. If the issue persists, please upload log files from CommServe, MediaAgent and Client and contact your vendor's support hotline with the job ID of the failed job.

SP14

Oracle

Oracle Live Sync Operation

You can perform a live sync replication operation to a non-standby database.

Previously, you could run the live sync replication to a standby or a non-standby database.

Live Sync Replication of Oracle Databases

SP13

All Backup Agents and Database Agents

Overwrite if file on media is newer Option Is Now Renamed

The Overwrite if file on media is newer option is now renamed to Overwrite if file in backup is newer in the Restore Options dialog box.

SP10

1-Touch for Windows

Filter System Files From Backups

System files are now excluded from backups only if the subclient is configured to perform system state backups. If the subclient content is configured for system state backups, the system files are backed up as part of the system state backup.

Previously, by default, the system files were excluded from backups for all subclients.

Configuring System State Backups

SP9

1-Touch for Windows

Support for System Partition of Client on FAT 32 Volumes

1-Touch Recovery is now supported if the system partition of the client computer is located on a FAT 32 volume.

Previously, system partitions of clients on FAT 32 volumes were not supported.

System Requirements - 1-Touch for Windows

SP9

Virtualize Me

Support for Generation 2 and UEFI Configurations

Virtualize Me for Hyper-V is now supported for virtual machines generated using Generation 2 specification and for computers configured with UEFI (Unified Extensible Firmware Interface) or EFI.

Previously, Generation 2 specifications and UEFI configurations were not supported.

System Requirements - 1-Touch for Windows

SP12

DB2

Operation Time Reduction for DB2 Load Copy Operations

A DB2 load copy operation takes between 20 and 25 seconds. This time becomes significant when you have thousands of operations that need to occur.

You can configure the software to register the job ID and set up the pipeline, and then use that job ID and pipeline for 10 minutes. This additional setting reduces the operation time.

SP12

DB2

Change to DB2 Restore Option Defaults

The following DB2 restore option defaults have changed values for both traditional and IntelliSnap restores:

  • The backup image (the software uses the latest backup image by default).

  • The recover database option (the software does not recover the database by default).

  • The option to restore log files (the software does not restore the logs by default).

SP9

DB2, NAS, SAP for Oracle, SAP HANA

Enable the Backup Operations After a Delay

If you disable backup operations, you can configure the date and time to re-enable backup operations.

SP8

IBM i File System Agent

OpenVMS File System Agent

Data Interface Pair Support

You can have a Data Interface Pair between the proxy and the IBM i client.

You can have a Data Interface Pair between the proxy and the OpenVMS client.

Getting Started with the IBM i File System Agent

Getting Started with the OpenVMS File System Agent

SP13

IBM i File System Agent

OpenVMS File System Agent

Multiple Proxy Client Support for Backup Operations

You can configure multiple proxy client computers for backup operations.

Adding the IBM i File System Agent

Configuration - OpenVMS File System Agent

SP13

NAS Client

Client Owner Assignment

The user who adds the NAS client becomes the client owner.

Additional permissions, such as Administrative Management at the CommCell level, are no longer required to manage the NAS client.

Adding a NAS Client

SP12

NDMP Agent

NAS Agent Renamed to NDMP Agent

The NAS agent has been renamed to NDMP agent. When you install a new client or configure a NAS agent installed in a previous service pack, look for NDMP agent.

Overview: NDMP File System Agent

SP9

NAS Agent

Load Balancing for NAS Backups

If load balancing is already configured for other agents and the additional prerequisites for the NAS agent are met, then backups of NAS agent data will use load balancing by default.

Configuring Load Balancing for NAS Agent Backups

SP7

NAS Agent

Restores of Directories with Preserve Level 0 Include Only Content

Previously, NAS Agent restores of directories configured with the preserve level set to 0 included the directory in the restored data. Now, direct restores of directories with preserve level 0 will only include the content in the directory, but not the directory itself.

Restore - NAS File System Agent

SP12

Oracle Agent

Oracle RAC

The Commvault software Turns off Oracle Optimization

If you have configured RMAN backup optimization for an Oracle instance, then the Commvault software permanently turns off optimization the first time that you perform a full backup or selective online full backup from the CommCell Console. This behavior ensures that the backups contain all the Oracle data files and that you can successfully restore the backups.

Oracle Backups

Performing an Oracle Full Backup

Performing an Oracle RAC Full Backup

SP8

Oracle Agent

Auto Discover Instance Enabled by Default

The Oracle auto discover instance feature is enabled by default.

The Commvault software only discovers instances that are in the NOMOUNT, MOUNT or OPEN state.

Configuring Automatic Instance Discovery for Oracle Databases

SP7

Oracle Agent

Clone an Oracle 12c Pluggable Database

You can clone an Oracle 12c database.

Cloning an Oracle 12c Pluggable Database (PDB)

Restore an Oracle 11gr2/12c Tablespace or PDB to a Point-in-Time

Use a point-in-time restore to revert the tablespace or PDB to a state before an undesired transaction or before a point of failure.

Restoring Oracle Tablespaces and Datafiles to a Point-in-Time

Oracle Application Migration to an Amazon RDS Database

Run a workflow to migrate an Oracle database to an Amazon RDS database.

Oracle Database Application Migration to an Amazon RDS Database

SP12

Salesforce

Salesforce Browse Pane Only Shows Modified Records

When you perform a browse operation on a Salesforce backup, the Browse pane only displays the modified records.

Salesforce Restores

SP9

Salesforce

The Software Automatically Initiates an Incremental Backup

After a full backup completes, the software automatically initiates an incremental backup to minimize data inconsistency.

Performing Salesforce Full Backups

SP12

SAP HANA

SAP HANA 2.0 MDC IntelliSnap or Traditional Database Copy Source and Target SID Changes

If you have a SAP HANA 2.0 configuration and you are performing an IntelliSnap or traditional database copy operation, then the source SID and target SID must match for the SYSTEMDB.

Restoring a SAP HANA IntelliSnap Backup to a New Host or a New Instance (Database Copy)

Restoring a SAP HANA Backup to a New Host or a New Instance

SP12

SAP HANA

Restore the SAP HANA Catalog to a Point in Time

Beginning in Service Pack 12, you can restore the SAP HANA catalog to a point-in-time by specifying a time that is relative to the system date. Use this option only for special circumstances, for example database corruption.

SAP HANA Restores

Restoring the SAP HANA Catalog to a Point in Time

SP12

SAP HANA

SAP HANA Command Line Report for Archive Log Backup Details

You can run a command line report that provides detailed information on SAP HANA archive log backups.

SAP HANA Archive Log Backup Report

SP11

SAP HANA

The Commvault Software Automatically Detects the Correct Node for Successful Backup Job Execution

You no longer need to change the SAP HANA node order in a SAP HANA multinode environment for backup jobs.

Creating a New Instance for a SAP HANA Pseudo-Client

Configuring SAP HANA Replication

SP11

SAP HANA

Have the Commvault software skip the hdbbackint link update

If you want the log backups to go to disk, then set the nUSEBACKINTFORLOGANDCATALOG additional setting to 0 on all nodes.

Configuring the Software to Skip the global.ini File Update

SP10

SAP HANA

Persistent log backups Are on by Default

The persistent log backup feature ensures that you have all log files so that you can successfully restore your database.

Configuring SAP HANA Persistent Log Backups

Automatic update of the global.ini file and symbolic links

When you perform your first backup, the Commvault software automatically performs the following configuration (a sketch of the symbolic links follows this list).

  • Creates a symbolic link that points to a parameter file created by Commvault called “param” under the SAP “/usr/sap/<SID>/global/opt/hdbconfig” directory.

  • Creates a symbolic link that points to the Commvault SAP backint binary called “hdbbackint” under the SAP “/usr/sap/<SID>/SYS/global/opt” directory.

  • Updates the global.ini file with the correct parameters based on your Multitenant Database Container configuration, for example SAP HANA 1.0 or 2.0.
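
The links in the list above can be illustrated with a minimal sketch. The <SID> value and the Commvault-side target paths below are placeholders (assumptions), not the exact locations that the software uses; the software creates the real links automatically during the first backup:

```python
# Minimal sketch of the symbolic links described above. The SID and the
# Commvault-side target paths are placeholders (assumptions). Assumes the
# SAP directories already exist on the node.
import os

sid = "HDB"  # example SAP system ID (assumption)

links = {
    # link in the SAP hdbconfig directory -> Commvault-created parameter file
    f"/usr/sap/{sid}/global/opt/hdbconfig/param": "/opt/commvault/param",      # target is an assumption
    # link in the SAP opt directory -> Commvault SAP backint binary
    f"/usr/sap/{sid}/SYS/global/opt/hdbbackint": "/opt/commvault/hdbbackint",  # target is an assumption
}

for link_path, target in links.items():
    if not os.path.islink(link_path):
        os.symlink(target, link_path)  # create the link if it does not already exist
```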

Backup - SAP HANA Agent

SP8

SAP Oracle

Do Not Open the Database After an Out-of-Place Restore

You can choose to not open the database after an out-of-place restore. If you choose this option, then you must use BR*Tools 7.40 patch 30 or higher.

Performing a SAP for Oracle Database Copy for a Different Windows Configuration

Performing a SAP for Oracle Database Copy

SP9

SAP Oracle

Support for Standby Database Backups from the CommCell Console and the Commvault CLI

You can perform backups of SAP Oracle standby databases from the CommCell Console and the Commvault CLI.

SAP for Oracle Backups Using the CommCell Console

Command Line Interface - Backup - SAP for Oracle Agent

SP11

SharePoint Server Agent

Creation of New Document Backup Set and Site Collection Backup Set Not Supported

You cannot create a document backup set or a site collection backup set from the SharePoint agent. Instead, you can use offline mining to browse and restore granular data.

Offline Mining for SharePoint Server Agent

Back Up Child Sites during Office 365 Backup Operation

When you run an Office 365 backup operation to back up a parent site, all the child sites in the hierarchy are backed up by default.

Backing Up Office 365 Sites

SP11

SQL

Simplified Backup Strategy

During certain types of backups, the Commvault software skips databases that use the simple recovery model, that are read-only, or that are not online.

Back Up SQL Server Data

Configure the Number of VDI Retries

You can configure the number of times that the Commvault software tries to connect to VDI when there is a timeout.

Changing the SQL Server VDI Timeout Setting

Backup Agents - Windows File System and UNIX File System

Introduced in

Change

Learn more

SP14

Automatically use optimal number of data readers

The software now automatically assigns the number of streams or readers required to perform the backup operations. The number of data readers that are assigned is based on the number of nodes configured for the subclient.

Previously, users assigned the number of data readers required to perform backup operations.

SP14

Upgrade Data Access Nodes to Service Pack 14 When You Upgrade the CommServe Computer

For new subclients, after you upgrade your CommServe computer to Service Pack 14, you must upgrade the Data Access Nodes to the same Service Pack version. For older subclients, the data access nodes and CommServe computer can be at different service levels.

Previously, the Data Access Nodes and CommServe computer could be at different service pack versions.

SP13

Overwrite if file on media is newer Option Is Now Renamed

The Overwrite if file on media is newer option is now renamed to Overwrite if file in backup is newer in the Restore Options dialog box.

SP13

Synchronize Source files and Backup Index Is Now Renamed Reconcile Backup

The Synchronize Source files and Backup Index check box is now renamed to Reconcile Backup in the Subclient Properties dialog box.

SP12

Retention Options Are Now Categorized Based on Job-Based Retention and Object Based Retention

The labels for subclient retention options are now more accurate and easier to understand. While the labels for the options have changed, the functionality remains the same. To provide more granular support, retention options are now divided into Job Based Retention and Object Based Retention and are labeled as follows:

Backup Retention is now Object Based Retention

  • After deletion keep items for N years N months N days is now Retain objects for n years in the Deleted item retention area.

  • After deletion keep items indefinitely is now Retain objects indefinitely in the Deleted item retention area.

SP11

Filtering Files That Are Eligible for Backup

When you terminate a backup operation, files that are eligible for backup are identified and backed up in the subsequent backup operation. If you filter these eligible files from your subsequent backup operation, the eligible files will not be backed up.

Previously, in the same scenario, the eligible files were backed up in the current or subsequent backup operation and were treated as deleted files. The deleted files were retained in the backup cycle based on your subclient retention and storage policy retention.

SP11

Preserve Timestamp of Folders

When you perform a restore operation, the timestamp of the restored folders on the destination computer is the timestamp of the folder when the folder was last backed up.

Previously, after a restore operation, the timestamp of the folder might have been the time at which the folder was restored.

SP11

Full Backups for File System Subclients

For Indexing V2 client computers, if you select the Extend storage policy retention check box for a subclient, then a full backup operation automatically converts to an incremental backup operation with a reference time set to 0.

For Indexing V1 client computers, if you set a non-zero retention value for a subclient, then a full backup automatically converts to an incremental backup operation with a reference time set to 0.

To start a new backup cycle, you must schedule synthetic full backups.

Previously, for Indexing V2 client computers, for the same scenario, a full backup operation automatically converted to an incremental backup operation, followed by a synthetic full backup.

Previously, for Indexing V1 client computers, for the same scenario, full backup operations ran as full backup operations.

SP11

Application Read Buffer Size

For default and user-defined subclients that are newly created from the CommCell Console, Command Center, or command line, the application read buffer size is now 512 KB.

Previously, the application read buffer size was 64 KB.

Note that from Service Pack 11 onwards, new baselines for deduplication signature will be created for all newly created subclients because of this change.

None

SP10

Synchronizing Data on the Disk and the Index

By default, the data is synchronized every time you run an incremental job after a synthetic full backup even if the Synchronize Source files and Backup Index check box is not selected in the Subclient Properties dialog box.

Previously, the synchronization job ran only if the Synchronize Source files and Backup Index check box was selected.

SP9

Incremental Backup Job with Deleted Items in Subclient Content

An incremental backup job now performs all phases of backup if there are only deleted items in the subclient content, and updates the index correctly for the deleted items. As a result, the deleted items are dropped from synthetic full backup jobs based on the retention settings.

Previously, the incremental backup job completed after the scan phase if there were only deleted items in the subclient content, without updating the index. Synthetic full backup jobs carried forward the deleted items to the next backup cycle.

None

SP8

Network Bandwidth Option in Subclient Properties Has Been Moved

The Throttle Network Bandwidth option has been moved to the Performance tab of the Advanced Subclient Properties dialog box.

Previously, the option was in the Data Transfer Option tab of the Subclient Properties dialog box.

Subclient Properties (Performance)

SP7

Labels for Retention Options Are Changed

For Backup Retention, the following labels are changed:

  • Use storage policy copy retention after n days for deleted files is now After deletion keep items for n years n months n days.

  • Retain Indefinitely for deleted files is now After deletion keep items indefinitely.

  • Use storage policy retention is now After deletion keep items for n years n months and n days where n = 0 days.

SP7

Labels for Additional File Versions Options Are Changed

The retain additional versions options are now more accurate and granular. While the labels for the options have changed, the functionality remains the same.

  • Keep at least n previous versions is now Keep n versions.

  • Keep older versions for n years n months, and n days is a new option that allows you to retain the older versions for a specified time period.

Configuring Retention for Additional File Versions

Backup Agents - Windows File System

Introduced in

Change

Learn more

SP14

Back Up Files in Shared-Write Mode

When you clear the Use VSS for all files check box in the Subclient Properties dialog box and run a backup operation, files that are in shared-write mode fail to be backed up. These files are automatically retried in the subsequent backup operation.

Previously, files in the shared-write mode were backed up partially.

Enabling Volume Shadow Service (VSS) for Windows File System Backups

SP12

Synchronizing Data On the Disk and the Index

For the synchronization operation, you must specify a minimum value of 7 days; otherwise, the value is automatically set to the default value of 30 days.

Previously, you could specify a value less than 7 days.

Synchronizing Data on the Disk and the Index

SP10

Filter System Files From Backups

System files are now excluded from backups only if the subclient is configured to perform system state backups. If the subclient content is configured for system state backups, the system files are backed up as part of the system state backup.

Previously, by default, the system files were excluded from backups for all subclients.

Configuring System State Backups

SP8

Incremental System State Backups

You can now back up only the system protected files that have been modified since the last backup. This results in faster backups and reduction in space used by the media.
This feature is available only for new clients. For upgraded clients, you must enable Indexing V2 to back up only the modified system protected files during incremental backups.

Previously, all system protected files were backed up with every incremental backup job.

Configuring System State Backups

SP7

File System Backups

If one of the paths specified in your subclient content does not exist, then the backup job completes with the Completed with Errors condition.

Before SP7, if one of the paths that was specified in the subclient content did not exist, then the backup job still completed successfully.

Backups Using the Windows File System Agent

SP7

Block-Level Backup Option Support with System State Backups

A block-level backup option is now supported with the Backup System State option for backing up system state data at the subclient level.

Previously, you could not enable the block-level backup option and the Backup System State option on the same subclient.

Backups Using the Windows File System Agent

Backup Agents - UNIX File System

Introduced in

Changes

Learn more

SP12

Synchronizing Data On the Disk and the Index

For the synchronization operation, you must specify a minimum value of 7 days; otherwise, the value is automatically set to the default value of 30 days.

Previously, you could specify a value less than 7 days.

Synchronizing Data on the Disk and the Index

SP9

Stale NFS Mount Points

Stale NFS mount points are now detected during the scan phase and an event notification is generated for the user to take necessary action.

Previously, the scan phase stopped responding if there were stale NFS mount points in the scan content.

FAQ - How are stale NFS mount points detected during the scan phase?

SP9

Automatic Filtering of .zfs Snapshot Directories on Solaris

The .zfs directories under the root of all the volumes are now automatically filtered.

Previously, only the .snapshot directories under the root of all the volumes were automatically filtered.

Configuring Filters for Backups

SP7

Multi-thread Folder Level Scan for NFS Shares

By default, multi-thread scan at the folder level is enabled if the subclient content contains NFS shares.

Previously, multi-thread scan at the folder level was not enabled by default for NFS shares.

Configuring the Number of Folders for Simultaneous Multi-thread Scan

SP7

Synthetic Full Backups on AIX

For Indexing V1 AIX clients, incremental backup jobs run automatically after synthetic full backup jobs even if the incremental backup jobs are not explicitly selected.

When a file or folder is renamed on AIX, the timestamp is not updated by the operating system. As a result, renamed items are not included in the backup job. To overcome this limitation, the incremental backup job that runs after the synthetic full backup job runs with the Verify Synthetic Full option selected by default, and backs up all items that were not included in backups in the previous cycle, including renamed items.

None

Archiving Agents

Introduced in

Product

Change

Learn more

SP15

OnePass for Hitachi NAS (BlueArc)

OnePass for BlueArc is renamed to OnePass for Hitachi NAS.

SP14

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, Network Shares and NetApp, Macintosh

The "Select Only Files That Qualify For Archiving" Option Is Now Selected Automatically With Job-Based Retention

When you select the Job-Based Retention mode, the Backup files that qualify for archiving check box is now selected automatically for new subclients.

Previously, when you selected the Job-Based Retention mode, the Backup files that qualify for archiving check box was not selected automatically.

SP13

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, Network Shares and NetApp, Macintosh

Backup Nodes Tab Has Been Renamed to Data Access Nodes

The Backup Nodes tab in the File System Properties dialog box and the Subclient Properties dialog box is now renamed to Data Access Nodes.

SP13

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp, Macintosh

Overwrite if file on media is newer Option Is Now Renamed

The Overwrite if file on media is newer option is now renamed to Overwrite if file in backup is newer in the Restore Options dialog box.

SP13

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp, Macintosh

Synchronize Source files and Backup Index Is Now Renamed Reconcile Backup

The Synchronize Source files and Backup Index check box is now renamed to Reconcile Backup in the Subclient Properties dialog box.

SP12

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp, Macintosh

Retention Options Are Now Categorized Based on Job-Based Retention and Object Based Retention

The labels for subclient retention options are now more accurate and easier to understand. While the labels for the options have changed, the functionality remains the same. To provide more granular support, retention options are now divided into Job Based Retention and Object Based Retention and are labeled as follows:

  • Archiver Retention is now Job Based Retention

    • Extend retention for n years n months and n days is now Retain Jobs for n years, n months, and n days.

    • Extend retention indefinitely is now Retain Jobs Indefinitely.

  • Backup Retention is now Object Based Retention

    • After deletion keep items for N years N months N days is now Retain objects for n years in the Deleted item retention area.

    • After deletion keep items indefinitely is now Retain objects indefinitely in the Deleted item retention area.

  • Archive Retention and Backup Retention mode is now Object Based Retention and selecting the Minimum retention based on file modification check box.

SP12

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp, Macintosh

Configuring Days Based Retention Option Has Changed

To configure the days-based retention feature, you must now select the Job Based Retention option on the Retention tab of the Subclient Properties dialog box.

Previously, you selected the Archiver Retention check box to configure the days-based retention feature.

SP12

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp, Macintosh

Synchronizing Data On the Disk and the Index

For the synchronization operation, you must specify a minimum value of 7 days; otherwise, the value is automatically set to the default value of 30 days.

Previously, you could specify a value less than 7 days.

SP11

OnePass for NetApp

Listener Port Number

The proxy computer uses listener port 10200 to communicate with the NetApp file server for C-mode recall operations.

Previously, we used port 9101 to communicate with the NetApp file server.

SP11

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Full Backups for OnePass Subclients

For Indexing V2 client computers, if you select the Extend storage policy retention check box for a subclient, then the full backup operation automatically converts to an incremental backup operation with a reference time set to 0.

For Indexing V1 client computers, if you select the Enable OnePass check box, then the full backup operation automatically converts to an incremental backup operation with a reference time set to 0.

The first backup operation on a new subclient is always a full backup operation.

To start a new backup cycle, you must schedule synthetic full backups.

Previously, in the same scenario, full backup operations were converted to incremental backup operations, followed by a synthetic full backup operation.

SP11

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Files Are Not Backed Up and Archived in the Same Job

Files that meet the archiving rules and that are eligible to be stubbed are backed up first, and then archived in a subsequent archive operation.

Previously, files that met the archiving rules and that were eligible to be stubbed were archived and backed up in the same operation.

SP10

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Synchronizing Data on the Disk and the Index

By default, the data is synchronized every time you run an incremental job after a synthetic full backup even if the Synchronize Source files and Backup Index check box is not selected in the Subclient Properties dialog box.

Previously, the synchronization job ran only if the Synchronize Source files and Backup Index check box was selected.

SP8

OnePass for Exchange Mailbox (Classic)

Recalling Messages Using the Quick Look Button Requires the User to Log On to the Web Console

When end users click the Quick Look button to recall messages, they must log on to Web Console before they can view the recalled messages.

Previously, end users were not required to log on to the Web Console before the recalled messages appeared in the browser.

In addition, when you upgrade from version 9 or version 10 of the Commvault software, you no longer need to configure the sEnableEmailPreviewV2 additional setting in order for your end users to see previews of recalled messages.

Accessing Archived Messages Using Exchange Server 2013 or Later

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Enable Archiving at the Agent Level

You can no longer select the Enable for archiving check box in the File System Properties dialog box to enable archiving at the agent level. You can now enable OnePass directly at the subclient level by selecting the Extend storage policy retention check box in the Advanced Subclient Properties dialog box.

Previously, you selected the Enable for archiving check box in the File System Properties dialog box to enable archiving at the agent level. If this option was not selected, the Enable OnePass check box was not displayed at the subclient level.

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Enable OnePass Option for Your Subclient

You can no longer use the Enable OnePass option to archive the data on your subclient. To enable OnePass for your subclient, you must now select the Extend storage policy retention check box in the Advanced Subclient Properties dialog box.

Previously, you used the Enable OnePass check box to enable OnePass for your subclient.

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Labels for Retention Options Are Changed

The labels for subclient retention options are now more accurate and easier to understand. While the labels for the options have changed, the functionality remains the same. To provide more granular support, retention options are now divided into Archiver Retention and Backup Retention and are labeled as follows:

  • Archiver Retention

    • Use storage policy copy retention after n days is now Extend retention for n years n months and n days.

    • Retain Indefinitely is now Extend retention indefinitely.

  • Backup Retention

    • Use storage policy copy retention after n days for deleted files is now After deletion keep items for n years n months n days.

    • Retain Indefinitely for deleted files is now After deletion keep items indefinitely.

    • Use storage policy retention is now After deletion keep items for n years n months and n days where n = 0 days.

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Labels for Additional File Versions Options Are Changed

The retain additional versions options are now more accurate and granular. While the labels for the options have changed, the functionality remains the same.

  • Keep at least n previous versions is now Keep n versions.

  • Keep older versions for n years n months, and n days is a new option that allows you to retain the older versions for a specified time period. You can see this option only if your client computer uses Indexing version 2.

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Configuring Days Based Retention Option Has Changed

To configure the days-based retention feature, you must now select the Archiver retention check box on the Retention tab of the Subclient Properties dialog box.

Previously, you selected the Honor Archiver Retention check box to configure the days-based retention feature.

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Retention Options Behavior Change

For deleted items, the retention time is calculated based on modification time and deletion time of the items.

When the retention criteria based on both the modification time and the deletion time are met, the next synthetic full backup drops the deleted items.

Previously, for deleted items, the retention time period was calculated from the date when an incremental backup was performed after the item was deleted.

Retention Options for OnePass: Transitioning from a Previous Version

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Retaining Older Versions of Files and Stubs Behavior Change

After you upgrade to Service Pack 7, the following behavior changes occur for retention:

  • If you did not select the Keep at least N previous versions check box in the Disk cleanup tab and the files did not meet the retention criteria, the latest version of the data is retained after the next synthetic full backup.

    Previously, all data stub pairs and the latest version of the data were retained.

  • If the Keep at least N previous versions check box is selected in the Disk cleanup tab and the files did not meet the retention criteria, the latest N+1 versions are retained after the next synthetic full backup.

    Previously, all data stub pairs and the latest N versions of the data were retained.

Retention Options for OnePass: Transitioning from a Previous Version

SP7

OnePass for Windows, UNIX, Hitachi NAS (BlueArc), Celerra, and NetApp

Check for Deleted Stubs

The Check for Deleted Stubs option is no longer available in the Advanced Backup Properties dialog box. However, you can still see the option for an upgraded client that uses indexing version 1.

Previously, you could enable the Check for Deleted Stubs option to check for deleted or moved stubs.

SP7

OnePass for Hitachi NAS (BlueArc) and Network Shares

Stub and File Sizes During Browse

Starting with Service Pack 7, for newly archived files, you can now view the accurate size of files when you perform a browse operation in the CommCell Console.

For upgraded subclients, for files that were archived prior to Service Pack 7, you still cannot view the accurate size of files when you perform a browse operation. When you click the View all versions option during browse, you can view only the stubs, and the stub size is shown as 1234 bytes.

Previously, the data and stub were shown as separate versions when you clicked the View all versions option during a browse operation. The size of the data version was the original size of the file, and the size of the stub version was shown as 1234 bytes.

Distributed Apps

Introduced In

Product

Change

Learn More

SP14

Cassandra

GPFS

Greenplum

Hadoop

Upgrade Data Access Nodes to Service Pack 14 When You Upgrade the CommServe Computer

For new subclients, after you upgrade your CommServe computer to Service Pack 14, you must upgrade the Data Access Nodes to the same service pack version. For older subclients, the Data Access Nodes and CommServe computer can be at different service pack levels.

Previously, the Data Access Nodes and CommServe computer could be at different service pack versions.

Backup Error: All device streams configured to this Storage Policy including their multiplexing factor are in use. If the issue persists, please upload log files from CommServe, MediaAgent and Client and contact your vendor's support hotline with the job ID of the failed job.

SP12

Cassandra

GPFS

Greenplum

Hadoop

Distributed Apps Renamed to Big Data Apps

The CommCell Console now uses the term Big Data Apps instead of Distributed Apps.

Administration of IBM Spectrum Scale (GPFS) and Hadoop clients is now under File System.

Apache Cassandra Database Protection

Overview: IBM Spectrum Scale (GPFS)

Greenplum UNIX File System Agent Overview

Overview: Hadoop (HDFS)

Virtualization

Introduced in

Product

Change

Learn more

SP16

Virtual Server Agent (all hypervisors)

For Service Pack 16 and more recent service packs, the default number of data readers for new Virtual Server Agent subclients is 5 for all hypervisors.

For Service Pack 15 and earlier service packs, the default number of data readers was 2 for all hypervisors. For subclients that were created in Service Pack 15 or earlier service packs, the previous default value is retained when you upgrade to a later service pack.

Default Number of Data Readers for Subclients

SP14

Virtual Server Agent (VMware)

The Virtual Server Agent (VSA) for VMware supports VM-centric backup and restore operations using Indexing Version 2. (This feature is also called VSA V2 or VSA Indexing V2.)

In Commvault Version 11, Service Pack 14, this support is available for streaming and IntelliSnap backup operations performed using the Virtual Server Agent (VSA) with VMware.

For a new Commvault deployment, Indexing Version 2 is enabled by default, and new virtualization clients automatically use Indexing Version 2 to support VM-centric operations. For an existing Commvault deployment, you must enable Indexing Version 2.

VM-Centric Operations for Virtual Server Agent with VMware

SP13

Virtual Server Agent (VMware)

If the destination ESXi host for a restore operation is unavailable because the host is in maintenance mode or is disconnected, then the operation automatically selects a different host in the same cluster. You can disable this behavior, or you can extend the behavior to other scenarios.

If the specified destination host is not in a cluster or the host was removed from the inventory, then the restore operation does not select a different host, and the operation fails.

This is the default behavior for SP13 and more recent service packs.

You can extend this behavior to include in-place restore operations where the host or the datastore is not available.

Controlling Host and Datastore Substitution for Restores

SP12

Virtual Server Agent (VMware)

Prior to Service Pack 12, for backups of VMs from vCenter Server 6.5 or later, the Virtual Machine Status tab of the Job Details dialog box showed that VM backups were successful, even when applications could not be quiesced and a VM backup reverted to a crash-consistent backup. With Service Pack 12 and later, the VM backup status is shown as Partial Success if applications could not be quiesced. You can resolve the issue by ensuring that VMware Tools are installed and running on the guest VM, and then running a new backup.

Backups with Quiescing of the Operating System and Applications

SP12

Virtual Server Agent (Oracle VM)

Backups skip physical disks that are attached to a guest VM.

Backups for Oracle VM

SP11

Virtual Server Agent (Amazon, Microsoft Azure, Microsoft Azure Stack, Microsoft Hyper-V, and VMware)

You can only configure Live Sync replication for a subclient.

Configuration of Live Sync from backup sets is not supported.

Live Sync Replication for Virtual Machines

SP11

Virtual Server Agent (all hypervisors)

The Enable Granular Recovery option has been removed from the Advanced Backup Options dialog box for all hypervisors.

For hypervisors that support metadata collection, the following option is now available in the Subclient Properties dialog box:

  • Collect File Details: On the Backup Options tab, for streaming backups and backup copies.

  • Collect File Details for Snapshot Copy: On the IntelliSnap Operations tab, for IntelliSnap backups. Use this option when creating a copy for tape storage, because file and folder information is required when recovering files from tape.

Note

The options to collect file details are available only for virtualization clients that use Indexing Version 1.

By default, these options are not enabled for any subclients. This reduces the time required to perform backups.

You can use live browse to view and recover guest files without collecting file information during backups.

For upgraded clients, the settings for these options reflect the configuration of existing backup schedules for each subclient. If an existing backup schedule explicitly selected the Enable Granular Recovery option, the Collect File Details option is enabled for the corresponding subclient.

Live Browse, Block-Level Browse, and Metadata Collection

SP11

Virtual Server Agent (all hypervisors except Citrix Xen, Microsoft Hyper-V, and VMware)

By default, the option to perform a differential backup is not included in the Backup Options for Subclient dialog box.

To display the Differential backup option, add the bEnableVSADifferentialBackup additional setting to the CommServe system.

Backup Types

SP9

Virtual Server Agent (all hypervisors except Docker)

When a live browse is performed, to access file and folder information for a virtual machine backup that does not contain metadata, the Virtual Server Agent is automatically installed to the MediaAgent that is used for the browse operation.

The remote installation restarts CVD services on the MediaAgent and does not check for running jobs being handled by the MediaAgent. As a result, jobs that were running on the MediaAgent might go Pending. After the install software job completes, any affected jobs restart automatically.

Live Browse, Block-Level Browse, and Metadata Collection

SP7

Virtual Server Agent (all hypervisors)

Schedule policies created for the Virtual Server Agent in Service Pack 7 or later will have the Enable Granular Recovery option selected by default for streaming backups and backup copies, and disabled by default for IntelliSnap backups.

Schedule policies created prior to Service Pack 7 will continue to use the original setting for the Enable Granular Recovery option, but the value shown in the Advanced Options dialog box might not accurately reflect the value that is actually used for backups.

Note

To ensure that the correct option is selected for schedule policies, verify the Enable Granular Recovery For Backup Copy setting on the Data tab of the Advanced Options for Backup Copy dialog box, and click OK even if you did not change the value.

SP7

Virtual Server Agent (VMware)

With Service Pack 7, the Virtual Server Agent includes VDDK 6.0.2 and VDDK 6.5, and loads the appropriate VDDK as required when the vCenter version is identified.

VDDK Support for the Virtual Server Agent

Snapshot Management

Introduced in

Feature

Change

Learn more

SP11

Proxy Selection for Network Share IntelliSnap Backup

For the EMC Isilon, EMC Unity, Hitachi NAS, and Huawei storage arrays, the Subclient Properties dialog box and the Agent Properties dialog box now include a new tab called Backup Nodes. On the Backup Nodes tab, you can add data nodes that act as proxy computers for the subclient or for the agent.

Proxy Computers for Network Share IntelliSnap Backup

SP8

nRunSnapRecon

Previously, you could use the nRunSnapRecon additional setting to mark jobs for missing snapshots as invalid. The nRunSnapRecon additional setting reconciled all snapshots of all arrays in the CommCell.

Now, use on-demand reconciliation by clicking Reconcile Snaps in Array Management. Reconcile Snaps reconciles snapshots for individual arrays.

Reconcile Snapshots

SP7

Proxy Selection for Network Share IntelliSnap Backup

Previously, you could select proxy computers on the Add Network Share Backup dialog box when you added a network share on the client computer.

Now, you select the proxy computers after you create the subclient, at agent level or the subclient level.

Proxy Computers for Network Share IntelliSnap Backup

Edge Backup and Access

Introduced in

Product

Change

Learn more

SP14

Laptop Backups (Windows, Macintosh, and UNIX)

Excluding Users as Client Owners

To exclude a user who installs the laptop package from being added as a client owner, you now blacklist a user group by using the BlacklistUserGroup.sql qscript, and then add the user to that user group.

Previously, you created a Laptop Admins user group, and then added the user to that user group.

Excluding Users as Client Owners

SP12

Laptop Backups (Windows, Macintosh, and UNIX)

Retention Options Are Now Categorized as Job Based Retention and Object Based Retention

The labels for subclient retention options are now more accurate and easier to understand. While the labels for the options have changed, the functionality remains the same. To provide more granular support, retention options are now divided into Job Based Retention and Object Based Retention and are labeled as follows:

Backup Retention is now Object Based Retention

  • After deletion keep items for N years N months N days is now Retain objects for n years in the Deleted item retention area.

  • After deletion keep items indefinitely is now Retain objects indefinitely in the Deleted item retention area.

Retention for Laptop Backup

SP11

Laptop Backups (Windows, Macintosh, and UNIX)

Last Backup Time Value

In the Schedule Policies dialog box, if you specify the Minimum Interval between jobs, then a backup operation runs only if a new file is added to the subclient content or if a file changes. Even if no backup runs in the minimum time interval that you specify, the Last Backup Time value is updated on the Command Center, the Web Console, and the SLA Reports.

Previously, the Last Backup Time value was updated on the Command Center, the Web Console, and the SLA Reports only when backup operations ran.

Configuring Backup Based On Modified Files for Laptop Backup

SP11

Laptop Backups (Windows, Macintosh, and UNIX)

Full Backups for Laptop Subclients

For Indexing V2 client computers, if you select the Extend storage policy retention check box for a subclient, then a full backup operation automatically converts to an incremental backup operation with a reference time set to 0.

For Indexing V1 client computers, if you set a non-zero retention value for a subclient, then a full backup automatically converts to an incremental backup operation with a reference time set to 0.

To start a new backup cycle, you must schedule synthetic full backups.

Previously, for Indexing V2 client computers in the same scenario, a full backup operation automatically converted to an incremental backup operation, followed by a synthetic full backup.

Previously, for Indexing V1 client computers in the same scenario, full backup operations ran as full backups.

Retention for Laptop Backup

SP9

Web Console

For new installations, the forceHttps additional setting is enabled by default.

Configuring the SSL Connector for Tomcat Server

Self-signed certificates are automatically generated and installed for environments that are not already configured for HTTPS access.

Configuring Secured Access

If you access the Web Console using HTTPS, the recommendations for the second connector have changed.

Configuring the SSL Connector for Tomcat Server
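
To verify that an upgraded environment actually answers over HTTPS, a simple probe is enough. This is a minimal sketch, assuming the Web Console is served at https://<host>/webconsole (the hostname below is a placeholder) and that the environment may still use the automatically generated self-signed certificate, which is why certificate verification is disabled for this probe only.

```python
# Quick check that the Web Console answers over HTTPS after forceHttps is enabled.
# Assumption: the Web Console path is /webconsole; the hostname is a placeholder.
# Verification is disabled only because a self-signed certificate may be in use.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = "https://webconsole.example.com/webconsole"  # placeholder host
resp = requests.get(url, verify=False, timeout=10)
print(resp.status_code, resp.url)
```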

SP9

Edge Drive

Administrators can now disable and enable Edge Drive backups for a specific user at the subclient level from the CommCell Console. Previously, you could disable and enable Edge Drive backup at the client level only.

Disabling and Enabling Edge Drive Backup for a User

SP7

Laptop Backups (Windows, Macintosh, and UNIX)

Labels for Retention Options Are Changed

  • Backup Retention

    • Use storage policy copy retention after n days for deleted files is now After deletion keep items for n years n months n days.

    • Retain Indefinitely for deleted files is now After deletion keep items indefinitely.

    • Use storage policy retention is now After deletion keep items for n years n months and n days where n = 0 days.

Retention for Laptop Backup

SP7

Laptop Backups (Windows, Macintosh, and UNIX)

Labels for Additional File Versions Options Are Changed

The retain additional versions options are now more accurate and granular. While the labels for the options have changed, the functionality remains the same.

  • Keep at least n previous versions is now Keep n versions.

  • Keep older versions for n years n months, and n days is a new option that allows you to retain the older versions for a specified time period.

Configuring Retention for Additional File Versions

Reports

Introduced in

Product

Change

Learn more

SP13

SLA Report on Web Console

Exclude Clients from SLA, Resubmit Jobs, and Send Log Files for a Job from the SLA Report

We updated the SLA Report to include options for excluding a client computer from the SLA calculation. You can also resubmit a failed job and send the log files for a particular job from the SLA Report on Web Console.

Previously, you had to log on to CommCell Console to perform any of these tasks.

SP13

Dashboard

Coloring and Tiles Updated in All Dashboards

We updated the coloring for every Dashboard.

In the Worldwide Dashboard, we added the Top 5 Errors in Last 24 Hours tile. In the CommCell Dashboard, we added the CommCell Alerts tile. In the Company Dashboard, we also added the CommCell Alerts tile.

SP13

Custom Dashboard

Options Added to Custom Dashboard

You can now add any custom report component as a tile on your custom dashboard. You can also resize and save the tiles that appear, share your dashboard with other users, and configure the refresh interval for the data on your dashboard.

Custom Dashboard for Reports

SP9

  • Reports on CommCell Console

  • Reports on Web Console

Renamed Job Summary Report in CommCell Console and Added Access for SLA and Job Summary (Web) Reports

In the CommCell Console, we have added options to open the SLA Report and Job Summary (Web) Report in Web Console. Now, when you click the Job Summary button on the Reports tab, the Report Selection dialog box opens with the Job Summary (Web) page selected. The Job Summary page has been renamed Job Summary (CommCell Console) and all of the Job Summary Reports are still available.

Previously, the Job Summary Report page opened when you clicked the Job Summary button on the Reports tab.

Backup Job Summary Report - View Report

SP9

Private Metrics Reporting Server

New Metrics Reporting Server Installation Requirement

For new Metrics Reporting Server installations, you must use a utility to install the self-signed certificate that is automatically created by the Commvault software. If you have CA-signed certificates installed, you can omit this step.

Previously, this was not required.

Installing the Metrics Reporting Server

SP9

  • Reports on Web Console

  • Reports on Cloud Services

Updated Information in Strike Count Report

Strikes for full and incremental backups are now displayed separately in the Strike Report. The new "Backup Type" column indicates whether the strike is a full or incremental backup.

Previously, the information in the report did not distinguish between full and incremental backup strikes.

SP8

Reports on CommCell Console

Audit Trail Report Deprecated

In the CommCell Console, we have deprecated the Audit Trail Report. The report is still functional, but we now recommend that you use the Audit Trail Report on Web Console.

Previously, the Audit Trail Report was only available on the CommCell Console.

Audit Trail Report (Web)

SP7

  • Reports on Web Console

  • Reports on Cloud Services

  • Build Your Own Reports

Appearance of Reports Application Updated

The layout and appearance of the Reports application on Web Console and Cloud Services has changed. We have added a navigation pane to the left side of the screen. Because of this change, steps for accessing the various reports and the Build Your Own Reports feature have changed.

Previously, you could access reports and the Build Your Own Reports feature using tabs along the top of the Reports application on Web Console and Cloud Services.

Solutions

When the change was made

Product

Change

Learn more

SP13

NFS ObjectStore

Change the NFS ObjectStore Location

You can change the location of the NFS ObjectStore cache on the MediaAgent.

Changing the NFS ObjectStore Cache Location

SP12

NFS ObjectStore

New Option for the create, create_snap, and update_snap Command Line Operations

You can create or update a point-in-time share by specifying a secondary backup copy.

You can specify a copy precedence when you create an NFS ObjectStore.

create Man Page

create_snap Man Page

update_snap Man Page

SP10

Extended Tiered Storage Setup on Cloud Storage

New Filter for Reference Copy Subclient for Tiered Cloud Storage

You can now copy the files or stubs that were modified within a specific date range on the source subclient, by using the Modified Time filter.

Setting Up a Tiered Cloud Storage at the File Level

Analytics

Introduced in

Product

Change

Learn more

SP11

  • Index Server

  • Content Analyzer Cloud

Index Servers Are Now Under Index Server Smart Group in Client Computer Groups

In previous versions, the Index Servers were located in the CommCell Browser under the Index Servers node.

After SP11, the Index Servers are under the Client Computer Groups > Index Servers smart group.

Adding an Index Server Entity to Your CommCell Environment

Content Analyzer Clouds Are Now Under Content Analyzer Cloud Smart Group in Client Computer Groups

In previous versions, the Content Analyzer Clouds were located in the CommCell Browser under the Compute Servers node.

After SP11, the Content Analyzer Clouds are under the Client Computer Groups > Content Analyzer Cloud smart group.

Configuring a Content Analyzer Cloud

SP9

  • Analytics Engine

  • Analytics Package

The Analytics Engine Is Now Index Server

In previous versions, the Analytics Engine was the CommCell entity that you configured to support indexing, search, and analytics functions for different Commvault products and features.

After SP9, the Analytics Engine is renamed to Index Server.

Index Server Overview

Analytics Configurations Moved to the Index Server Node

In previous versions, you configured the Analytics Engine from the Analytics Engine tab in the MediaAgent properties dialog box in the CommCell Console.

After SP9, there is a new Index Server node in the CommCell Browser that contains the configuration options for Index Servers.

Index Server Configurations

The Analytics Package Is Now Index Store

In previous versions, you installed the Analytics package to support Commvault products and features that required an Analytics Engine.

After SP9, the Analytics package is renamed to Index Store.

Considerations When Upgrading the Analytics Package to Index Store

SP8

Data Analytics

Analytics Jobs Are Now Run on Clients and Client Groups in the CommCell Console

Previously, you could run analytics jobs at the storage policy level.

After SP8, running analytics on a storage policy is not supported. Instead, you can run analytics from the client or client group level in the CommCell Console.

Data Analytics Jobs

Tools & Utilities

Introduced in

Product

Change

Learn more

SP12

Command Line:

  • Deleting a user

  • Deleting a user group

The XML template includes the elements required to transfer the ownership of entities. Transferring the ownership of entities is now required.

SP12

Acquirelock Workflow Activity

The Acquirelock activity now blocks workflows running on multiple workflow engines.

Control Flow

SP12

REST API: DELETE User Group

The syntax for the DELETE User Group API has changed. Transferring the ownership of entities is now required.

REST API - DELETE User Group

SP11

CommServDBQuery Workflow Activity

The Insert Variable option was removed from the SQL Batch tab of the CommServDBQuery workflow activity.

To use parameters in the SQL, designate parameters as question marks (?), and then use the Parameter tab to define the values.

Built-In Activities for Workflows: Utilities
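
As a rough illustration of the question-mark parameter style described above (not the workflow engine itself), the sketch below uses pyodbc, whose parameter style is also qmark: each ? in the SQL text is bound to a value supplied separately, which mirrors how the Parameter tab supplies values to the CommServDBQuery activity. The connection details, table, and column names are placeholders.

```python
# Illustration of the qmark (?) parameter style, using pyodbc against SQL Server.
# Connection string, table, and column names are placeholders only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;DATABASE=ExampleDB;"  # placeholder server and database
    "UID=report_user;PWD=example"                       # placeholder credentials
)
cursor = conn.cursor()

# Each ? is bound to a value passed alongside the SQL, never concatenated into it,
# just as the Parameter tab defines the values for the workflow activity.
cursor.execute(
    "SELECT id, name FROM ExampleTable WHERE name = ? AND modified_on > ?",
    ("client01", "2018-01-01"),
)
for row in cursor.fetchall():
    print(row.id, row.name)

cursor.close()
conn.close()
```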

SP11

REST API: POST View Shared Files and Folders

The API to view shared files and folders using the POST method is deprecated.

Use the GET method to perform the same operation.

View Shared Files and Folders (REST API: GET)

SP11

REST API: DELETE User

The syntax for the DELETE User API has changed. Transferring the ownership of entities is now required.

REST API - DELETE User

SP11

REST API: POST Operation Rule

The request body has changed.

Create an Operation Rule (REST API: POST)

SP10

REST API: Client License APIs

The syntax for the GET Client Licenses API and the POST Reconfigure Components API has changed:

  • GET /Client/{clientId}/License

  • POST /Client/License/Reconfigure
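
As a hedged sketch of how the GET endpoint listed above might be called: the example assumes the Version 11 REST API base URL (commonly https://<webconsole host>/webconsole/api), a token obtained from POST /Login and passed back in an Authtoken header, and a Base64-encoded password in the login body; verify these details against the API documentation for your service pack. The hostname, credentials, and client ID are placeholders.

```python
# Hedged sketch: calling the GET Client Licenses API listed above.
# Assumptions to verify against your service pack's API documentation:
#   - base URL https://<webconsole host>/webconsole/api
#   - login via POST /Login, which returns a token sent back in an Authtoken header
#   - the password is Base64-encoded in the login body
# The hostname, credentials, and client ID below are placeholders.
import base64

import requests

BASE = "https://webconsole.example.com/webconsole/api"  # placeholder host


def login(username: str, password: str) -> str:
    body = {
        "username": username,
        "password": base64.b64encode(password.encode()).decode(),
    }
    resp = requests.post(f"{BASE}/Login", json=body, headers={"Accept": "application/json"})
    resp.raise_for_status()
    return resp.json()["token"]


def get_client_licenses(token: str, client_id: int) -> dict:
    # Endpoint syntax taken from this change entry: GET /Client/{clientId}/License
    resp = requests.get(
        f"{BASE}/Client/{client_id}/License",
        headers={"Authtoken": token, "Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    token = login("admin", "password")    # placeholder credentials
    print(get_client_licenses(token, 2))  # placeholder clientId
```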

SP7

REST API: POST User

The response has changed for the following cases:

  • A user is created

  • A user already exists

REST API - POST User
