Hardware Specifications for Four-Partition Deduplication Mode

Partitioned deduplication mode uses multiple MediaAgents (two to four, in a grid) to host individual physical partitions of a larger logical Deduplication Database (DDB), with one partition per MediaAgent. This configuration is typically used to increase the front-end terabytes (FET) or back-end terabytes (BET) that a single DDB can manage.
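To illustrate how a partitioned DDB spreads work across the grid, the following minimal Python sketch shows one plausible routing scheme: each block signature is hashed and assigned to one of four partitions, so lookups and inserts distribute evenly across the nodes. The names (`partition_for`, `dedup_insert`, `NUM_PARTITIONS`) are hypothetical illustrations, not Commvault APIs.

```python
import hashlib

NUM_PARTITIONS = 4  # one DDB partition per MediaAgent in the grid

def partition_for(signature: bytes) -> int:
    """Map a block signature to a DDB partition (hypothetical scheme)."""
    digest = hashlib.sha256(signature).digest()
    # Hash modulo partition count spreads signatures evenly across nodes.
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

# Each partition acts as an independent signature -> block-reference store.
partitions = [dict() for _ in range(NUM_PARTITIONS)]

def dedup_insert(signature: bytes, block_ref: str) -> bool:
    """Return True if the block is new and must be written to storage."""
    store = partitions[partition_for(signature)]
    if signature in store:
        return False   # duplicate: reference the existing block
    store[signature] = block_ref
    return True        # unique: store the block and record its reference
```

Because each signature maps to exactly one partition, each MediaAgent handles roughly a quarter of the DDB transactions, which is why adding partitions raises the FET/BET a single logical DDB can manage.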

The following tables provide the hardware requirements for MediaAgents hosting four partitions of the DDB in large and extra-large environments. For medium, small, and extra-small environments, partitioned mode is not recommended unless resiliency is required (partition failover, for when one of the partitions is temporarily unavailable).

For details on supported platforms, see Building Block Guide - Deduplication System Requirements.

Terms used in the following hardware requirements:

  • Deduplication Node - A MediaAgent hosting the DDB.

  • Grid - The collection of deduplication nodes.

Important

  • The following hardware requirements apply to MediaAgents with deduplication. They do not apply to tape libraries, MediaAgents without deduplication, or MediaAgents that use third-party deduplication applications.

  • The suggested workloads are not software limitations, but rather design guidelines for sizing under specific conditions.

  • The TB values are base-2 (1 TB = 2^40 bytes); see the conversion sketch after this list.

  • To achieve the required IOPS, consult your hardware vendor for the configuration most suitable for your implementation.

  • The index cache disk recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications, databases, and so on need significantly less index cache. The recommendations given are per MediaAgent.

  • It is recommended to use dedicated volumes for index cache disk and DDB disk.
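As a quick check of the base-2 convention noted above, this small Python sketch converts the TB figures used in the tables to bytes and to decimal (base-10) terabytes; the helper names are illustrative only.

```python
BASE2_TB = 2**40  # one base-2 terabyte, in bytes, as used in this document

def tb_to_bytes(tb: float) -> int:
    """Convert a base-2 TB value from the tables to bytes."""
    return int(tb * BASE2_TB)

def tb_base2_to_base10(tb: float) -> float:
    """Express a base-2 TB value in decimal (base-10) terabytes."""
    return tb * BASE2_TB / 10**12

# Example: the 1000 TB extra-large backend limit.
print(tb_to_bytes(1000))          # 1099511627776000 bytes
print(tb_base2_to_base10(1000))   # ~1099.5 decimal TB
```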

Number of Nodes, Grid Backend Storage, and CPU/RAM

| Component                      | Extra Large                           | Large                                |
|--------------------------------|---------------------------------------|--------------------------------------|
| Number of Nodes in Grid        | 4                                     | 4                                    |
| Grid Backend Storage [2, 3]    | Up to 1000 TB                         | Up to 600 TB                         |
| CPU/RAM per Deduplication Node | 16 cores, 128 GB (or 16 vCPUs/128 GB) | 12 cores, 64 GB (or 12 vCPUs/64 GB)  |

Disk Layout per Deduplication Node

| Component                        | Extra Large                                                           | Large                                                                           |
|----------------------------------|-----------------------------------------------------------------------|---------------------------------------------------------------------------------|
| OS or Software Disk              | 400 GB SSD-class disk                                                 | 400 GB usable disk, minimum 4 spindles at 15K RPM or higher, OR SSD-class disk  |
| DDB Disk [1]                     | 2 TB SSD-class disk/PCIe I/O cards [4]; 2 GB controller cache memory  | 1.2 TB SSD-class disk/PCIe I/O cards [4]; 2 GB controller cache memory          |
| Suggested IOPS for each DDB Disk | 20K dedicated random IOPS [5]                                         | 15K dedicated random IOPS [5]                                                   |
| Index Cache Disk [1, 7, 8]       | 2 TB SSD-class disk [4, 6]                                            | 1 TB SSD-class disk [4]                                                         |

For Linux, the DDB volume must be configured by using the Logical Volume Manager (LVM) package [9]. See Building Block Guide - Deduplication Database.

Suggested Workloads for Grid

| Component                                | Extra Large      | Large            |
|------------------------------------------|------------------|------------------|
| Parallel Data Stream Transfers           | 400              | 300              |
| Laptop Clients for Grid                  | Up to 20,000     | Up to 10,000     |
| Front End Terabytes (FET) Range per Grid | 240 TB to 400 TB | 200 TB to 320 TB |

Network Backups for Grid

Extra Large:

  • 400 TB FET for files (includes OnePass for Files)

  • 320 TB FET for multiple virtual machines (VMs) with the Virtual Server Agent (VSA)

  • 240 TB FET for databases or applications

  Note: A combination of the above data types must not exceed 300 TB FET.

Large:

  • 320 TB FET for files (includes OnePass for Files)

  • 240 TB FET for multiple VMs with VSA

  • 160 TB FET for databases or applications

  Note: A combination of the above data types must not exceed 240 TB FET.

LAN-Free Backups for Grid

Extra Large:

  • 160 TB FET for VMs with one VSA on each deduplication node

  • 160 TB FET for mixed network backups including VMs with VSA

  • 160 TB FET with one proxy for IntelliSnap on each deduplication node

  • 160 TB FET for mixed network backups

Large:

  • 160 TB FET for VMs with one VSA on each deduplication node

  • 80 TB FET for mixed network backups including VMs with VSA

  • 160 TB FET with one proxy for IntelliSnap on each deduplication node

  • 80 TB FET for mixed network backups

Supported Targets

| Component                                           | Extra Large                             | Large                                   |
|-----------------------------------------------------|-----------------------------------------|-----------------------------------------|
| Tape Drives                                         | Supported                               | Supported                               |
| Disk Storage without Commvault Deduplication        | Not recommended                         | Not recommended                         |
| Deduplication Disk Storage                          | Up to 1000 TB, direct-attached or NAS   | Up to 600 TB, direct-attached or NAS    |
| Third-Party Deduplication Appliances                | Not recommended                         | Not recommended                         |
| Cloud Storage                                       | Supported                               | Supported                               |
| Deploying MediaAgent on Cloud/Virtual Environments  | Yes; see the AWS or Azure sizing guides | Yes; see the AWS or Azure sizing guides |

Footnotes

  1. It is recommended to use dedicated volumes for index cache disk and DDB disk.

  2. Maximum size per DDB.

  3. Assumes standard retention of up to 90 days. Longer retention reduces the FET that this configuration can manage; the back-end capacity remains the same. (A simplified sizing sketch follows these footnotes.)

  4. SSD-class disk indicates PCIe-based cards or internal dedicated high-endurance drives.

  5. When multiple DDBs share a volume, each DDB needs dedicated IOPS. IOPS might be limited by the SAN controller even when SSD drives are used.

  6. This recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications and databases require considerably less index cache.

  7. To improve indexing performance, it is recommended that you store your index data on a solid-state drive (SSD). The following agents and scenarios require the best possible indexing performance:

    • Exchange Mailbox Agent

    • Virtual Server Agents

    • NAS filers running NDMP backups

    • Backing up large file servers

    • SharePoint Agents

    • Ensuring maximum performance whenever it is critical

  8. The index cache directory must be on a local drive. Network drives are not supported.

  9. For Linux, host the DDB on LVM volumes. LVM software snapshots assist DDB backups, and using thin-provisioned logical volumes for the DDB gives better query and insert performance during DDB backups.
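Footnote 3 implies a relationship between retention and the FET that a fixed back end can serve. The Python sketch below is a simplified, illustrative model only: the change rate, deduplication savings, and formula are assumptions for demonstration, not Commvault's sizing method.

```python
def supportable_fet_tb(backend_tb: float,
                       retention_days: int,
                       daily_change_rate: float = 0.02,
                       dedup_savings: float = 0.5) -> float:
    """Estimate the FET a fixed back end can serve (illustrative model only).

    Assumes one deduplicated baseline copy plus daily incremental changes
    retained for the full window; all defaults are hypothetical.
    """
    # Back end consumed per front-end TB:
    #   deduplicated baseline + retained deduplicated daily changes
    backend_per_fet = (1 - dedup_savings) * (1 + daily_change_rate * retention_days)
    return backend_tb / backend_per_fet

# With a 1000 TB back end (extra-large grid):
print(round(supportable_fet_tb(1000, 90)))    # ~714 FET TB at 90-day retention
print(round(supportable_fet_tb(1000, 180)))   # ~435 FET TB: longer retention, less FET
```

Note that this capacity-only estimate exceeds the table's 400 TB FET ceiling; the published limits also reflect CPU, stream, and DDB transaction constraints, so treat the model as an upper bound that shows the trend, not a sizing target.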

For tuning guidance, see Tuning Performance When Using a Partitioned Deduplication Database.
