Hardware Specifications for Deduplication Two Partitioned Mode

Partitioned deduplication mode uses multiple MediaAgents (two to four, in a grid) to host individual physical partitions of a larger logical Deduplication Database (DDB), one partition per MediaAgent. This configuration is typically used to increase the front-end terabytes (FET) or back-end terabytes (BET) that a single DDB can manage. For details on supported platforms, see Building Block Guide - Deduplication System Requirements.
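Conceptually, each partition owns a share of the data-block signatures, so signature lookups and inserts are spread across the nodes in the grid. The following Python sketch illustrates the idea with simple hash-based routing; the host names and the routing rule are illustrative assumptions, not Commvault's actual implementation.

```python
import hashlib

# Hypothetical two-node grid; the host names are placeholders.
PARTITION_HOSTS = ["mediaagent-01", "mediaagent-02"]

def route_signature(block_signature: bytes) -> str:
    """Pick the node whose DDB partition owns this block signature.
    Conceptual sketch only; real routing is internal to the product."""
    digest = hashlib.sha256(block_signature).digest()
    return PARTITION_HOSTS[digest[0] % len(PARTITION_HOSTS)]

# Signatures spread roughly evenly, so each node serves about half
# of the deduplication lookups and inserts.
for block in (b"block-a", b"block-b", b"block-c", b"block-d"):
    print(block.decode(), "->", route_signature(block))
```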

The following tables provide the hardware requirements for MediaAgents hosting the two partitions of the DDB in large and extra-large environments. For medium, small, and extra-small environments, partitioned mode is not recommended unless resiliency is required (partition failover, for when one of the partitions is temporarily unavailable).

Terms used in the following hardware requirements:

  • Deduplication Node - A MediaAgent that hosts a partition of the DDB.

  • Grid - The collection of deduplication nodes.

Important

  • The following hardware requirements apply to MediaAgents that use Commvault deduplication. They do not apply to tape libraries, to MediaAgents without deduplication, or to MediaAgents that use third-party deduplication applications.

  • The suggested workloads are not software limitations; rather, they are design guidelines for sizing under specific conditions.

  • The TB values are base-2 (1 TB = 2^40 bytes), as illustrated in the sketch after this list.

  • To achieve the required IOPS, consult your hardware vendor for the most suitable configuration for your implementation.

  • The index cache disk recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications and databases need significantly less index cache. The recommendations given are per MediaAgent.

  • It is recommended that you use dedicated volumes for the index cache disk and the DDB disk.
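Because the capacities are base-2, converting them to bytes or to base-10 terabytes is a common sizing step. A minimal Python sketch, using only figures stated in this document:

```python
TIB = 2**40  # one base-2 terabyte in bytes

def base2_tb_to_bytes(tb: float) -> int:
    """Convert a base-2 TB value, as used in this document, to bytes."""
    return int(tb * TIB)

# Example: the 500 TB extra-large backend limit from the table below.
backend_bytes = base2_tb_to_bytes(500)
print(f"{backend_bytes:,} bytes")                 # 549,755,813,888,000
print(f"{backend_bytes / 10**12:.1f} base-10 TB") # ~549.8
```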

Number of Nodes, Backend, and CPU/RAM

| Component | Extra large | Large |
|---|---|---|
| Number of Nodes in Grid | 2 | 2 |
| Grid Backend Storage [2][3] | Up to 500 TB | Up to 300 TB |
| CPU/RAM per Deduplication Node | 16 cores, 128 GB (or 16 vCPUs/128 GB) | 12 cores, 64 GB (or 12 vCPUs/64 GB) |
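As a rough planning aid, the thresholds in the table above can be encoded in a short sketch. The figures are copied from the table, but the helper function itself is not part of any Commvault tooling, and real sizing should be validated with your vendor.

```python
# Two-partition grid figures from the table above (base-2 TB).
GRID_SPECS = {
    "large":       {"backend_tb_max": 300, "cores": 12, "ram_gb": 64},
    "extra_large": {"backend_tb_max": 500, "cores": 16, "ram_gb": 128},
}

def pick_grid_size(backend_tb: float) -> str:
    """Return the smallest two-partition grid size whose backend
    storage limit covers the requirement, or raise if none fits."""
    for size, spec in GRID_SPECS.items():  # ordered smallest first
        if backend_tb <= spec["backend_tb_max"]:
            return size
    raise ValueError("Backend exceeds the two-partition limits; "
                     "consider a grid with more partitions.")

print(pick_grid_size(250))  # large
print(pick_grid_size(450))  # extra_large
```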

Disk Layout per Deduplication Node

| Component | Extra large | Large |
|---|---|---|
| OS or Software Disk | 400 GB SSD-class disk | 400 GB usable disk, minimum 4 spindles at 15K RPM or higher, or an SSD-class disk |
| DDB Disk [1] | 2 TB SSD-class disk or PCIe I/O cards [4], with 2 GB controller cache memory. For Linux, the DDB volume must be configured by using the Logical Volume Management (LVM) package [9]; see Building Block Guide - Deduplication Database. | 1.2 TB SSD-class disk or PCIe I/O cards [4], with 2 GB controller cache memory. For Linux, the DDB volume must be configured by using the Logical Volume Management (LVM) package [9]; see Building Block Guide - Deduplication Database. |
| Suggested IOPS for each DDB Disk | 20K dedicated random IOPS [5] | 15K dedicated random IOPS [5] |
| Index Cache Disk [1][7][8] | 2 TB SSD-class disk [4][6] | 1 TB SSD-class disk [4][6] |

Suggested Workloads for Grid

| Component | Extra large | Large |
|---|---|---|
| Parallel Data Stream Transfers | 400 | 300 |
| Laptop Clients for Grid | Up to 10,000 per grid | Up to 5,000 per grid |
| Front End Terabytes (FET) Range per Grid | 120 TB to 200 TB | 100 TB to 160 TB |

Network Backups for Grid

Extra large:

  • 200 TB FET for files (includes OnePass for files)

  • 160 TB FET for multiple virtual machines (VMs) with the Virtual Server Agent (VSA)

  • 120 TB FET for databases or applications

Note: A combination of the above data types must not exceed 150 TB FET.

Large:

  • 160 TB FET for files (includes OnePass for files)

  • 120 TB FET for multiple VMs with the VSA

  • 80 TB FET for databases or applications

Note: A combination of the above data types must not exceed 120 TB FET.

LAN-Free Backups for Grid

Extra large:

  • 80 TB FET for VMs with one VSA on each deduplication node

  • 80 TB FET of mixed network backups including VMs with the VSA

  • 80 TB FET with one proxy for IntelliSnap on each deduplication node

  • 80 TB FET of mixed network backups

Large:

  • 80 TB FET for VMs with one VSA on each deduplication node

  • 40 TB FET of mixed network backups including VMs with the VSA

  • 80 TB FET with one proxy for IntelliSnap on each deduplication node

  • 40 TB FET of mixed network backups
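The "not to exceed" notes above reduce to a simple feasibility check. In the sketch below, the per-type ceilings and combined caps are copied from this section; the function and its single-type versus mixed distinction are an illustrative reading of the notes, not official sizing logic.

```python
# Network-backup FET ceilings per grid size (base-2 TB), from above.
LIMITS = {
    "extra_large": {"files": 200, "vsa": 160, "db_app": 120, "combined": 150},
    "large":       {"files": 160, "vsa": 120, "db_app": 80,  "combined": 120},
}

def workload_fits(size: str, files: float = 0, vsa: float = 0,
                  db_app: float = 0) -> bool:
    """Check a planned FET mix against the per-type ceilings and,
    for mixed data types, the combined cap from the notes above."""
    lim = LIMITS[size]
    per_type_ok = (files <= lim["files"] and vsa <= lim["vsa"]
                   and db_app <= lim["db_app"])
    mixed = sum(1 for v in (files, vsa, db_app) if v > 0) > 1
    combined_ok = files + vsa + db_app <= lim["combined"]
    return per_type_ok and (combined_ok if mixed else True)

print(workload_fits("extra_large", files=200))          # True
print(workload_fits("extra_large", files=100, vsa=80))  # False: 180 > 150
```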

Supported Targets

| Component | Extra large | Large |
|---|---|---|
| Tape Drives | Supported | Supported |
| Disk Storage without Commvault Deduplication | Not recommended | Not recommended |
| Deduplication Disk Storage | Up to 500 TB, direct-attached or NAS | Up to 300 TB, direct-attached or NAS |
| Third-Party Deduplication Appliances | Not recommended | Not recommended |
| Cloud Storage | Supported | Supported |
| Deploying MediaAgent on Cloud/Virtual Environments | Yes; for AWS or Azure sizing, see the respective sizing guides | Yes; for AWS or Azure sizing, see the respective sizing guides |

Footnotes

  1. It is recommended that you use dedicated volumes for the index cache disk and the DDB disk.

  2. Maximum size per DDB.

  3. Assumes standard retention of up to 90 days. Longer retention reduces the FET that this configuration can manage; the back-end capacity remains the same.

  4. SSD-class disk indicates PCIe-based cards or internal dedicated high-endurance drives. We recommend MLC (multi-level cell) class or better SSDs.

  5. A dedicated RAID 1 or RAID 10 group is recommended.

  6. This recommendation is for unstructured data types such as files, VMs, and granular messages. Structured data types such as applications and databases require considerably less index cache.

  7. To improve indexing performance, store your index data on a solid-state drive. The following agents and use cases require the best possible indexing performance:

    • Exchange Mailbox Agent

    • Virtual Server Agents

    • NAS filers running NDMP backups

    • Backing up large file servers

    • SharePoint Agents

    • Ensuring maximum performance whenever it is critical

  8. The index cache directory must be on a local drive. Network shares are not supported.

  9. For Linux, host the DDB on LVM volumes. This helps DDB backups by using LVM software snapshots. Use thin-provisioned logical volumes for the DDB to get better query-insert performance during DDB backups; a sketch follows these footnotes.
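As an illustration of footnote 9, the following Python sketch drives the standard LVM command-line tools to create a thin pool and a thin-provisioned logical volume for the DDB. The device name and sizes are placeholders, and the exact layout should follow the Building Block Guide rather than this sketch.

```python
import subprocess

def run(cmd: list[str]) -> None:
    """Echo and execute one command (requires root privileges)."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder device; substitute your dedicated SSD-class DDB disk.
run(["pvcreate", "/dev/nvme0n1"])
run(["vgcreate", "vg_ddb", "/dev/nvme0n1"])
# Create a thin pool, then a thin-provisioned LV on it. Per
# footnote 9, thin LVs help query-insert performance during
# DDB backups, which use LVM snapshots.
run(["lvcreate", "--size", "1.8T", "--thinpool", "tp_ddb", "vg_ddb"])
run(["lvcreate", "--virtualsize", "1.8T", "--thin",
     "--name", "lv_ddb", "vg_ddb/tp_ddb"])
run(["mkfs.xfs", "/dev/vg_ddb/lv_ddb"])
```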

For performance tuning, see Tuning Performance When Using a Partitioned Deduplication Database.