How to migrate Google Workspace backup data to a new storage location


In some cases, you may want to move your Google Workspace backups to a different location, especially when the current backup storage is nearly full.

Note:
Depending on the amount of data being backed up, data migration can take a considerable amount of time. Please note that no new backups will take place until all data has been migrated to the new location.
If it is absolutely necessary to perform a backup, the data migration must be cancelled and started again from the beginning after the backup is finished.

Starting from version 4.7.5, CubeBackup supports migrating to a new storage location directly through the web console. To initiate a migration, first ensure that CubeBackup has been upgraded to version 4.7.5 or later, then follow the instructions below to set up your data migration.

Start the migration

  1. Log in to the CubeBackup web console as a system administrator.

  2. On the OVERVIEW page of the CubeBackup web console, find the Storage status section at the bottom right, and click the gear icon to open the update wizard. Click the Migrate backups button.

  3. The CubeBackup Migration wizard will pop up. Click Next to begin the configuration.

    Note: This operation will stop any backup or restore process currently in progress.

  4. As a safety precaution, an authentication code will be emailed to you. Please type in the code to continue.

  5. Step 2 allows you to set up the new storage location. CubeBackup supports migrating to a local disk, NAS/SAN, Amazon S3, Google Cloud storage, Microsoft Azure Blob storage, and Amazon S3-compatible storage. Please click the corresponding tab below for detailed information on each of these options.

Migrate to local storage

Storage type: Select Local disk from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

Backup path: Select a local directory for the Google Workspace backup data.

Note: CubeBackup on Linux runs as the user "cbuser", which needs full read, write, and execute permissions on the storage directory. If the storage destination is a subdirectory, please also ensure that "cbuser" has at least execute ("x") permission on every level of its parent directories.
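The permission requirement in the note above can be checked before starting the migration. The following is only an illustrative sketch (the backup path in the example is a placeholder, and supplementary group membership and ACLs are ignored for simplicity): it walks from the backup directory up to the filesystem root and reports whether "cbuser" can traverse each level.

```python
import os
import pwd
import stat

def traverse_report(path, username="cbuser"):
    """For `path` and each of its parents up to the root, report whether
    `username` has execute ("x") permission there. Illustrative helper:
    it checks only the owner/group/other mode bits and ignores
    supplementary groups and ACLs."""
    user = pwd.getpwnam(username)
    results = []
    d = os.path.abspath(path)
    while True:
        st = os.stat(d)
        if st.st_uid == user.pw_uid:
            ok = bool(st.st_mode & stat.S_IXUSR)
        elif st.st_gid == user.pw_gid:
            ok = bool(st.st_mode & stat.S_IXGRP)
        else:
            ok = bool(st.st_mode & stat.S_IXOTH)
        results.append((d, ok))
        if d == os.path.dirname(d):  # reached the filesystem root
            break
        d = os.path.dirname(d)
    return results

# Example (placeholder path; run on the backup server):
# for level, ok in traverse_report("/backup/cubebackup"):
#     print(("ok       " if ok else "missing x") + "  " + level)
```

Any level reported without execute permission will block "cbuser" from reaching the storage directory, even if the directory itself has the correct permissions.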

When all information has been entered, click the Next button.

Migrate to NAS on Linux

Storage type: Select Mounted network storage from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

Network storage path: Select a mounted network location as the backup storage.

Note: CubeBackup on Linux runs as the user "cbuser", which needs full read, write, and execute permissions on the storage directory. If the storage destination is a subdirectory, please also ensure that "cbuser" has at least execute ("x") permission on every level of its parent directories.

When all information has been entered, click the Next button.

Migrate to NAS on Windows

Storage type: Select Windows network location from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

Network storage path: If CubeBackup is installed on Windows and uses network storage, the UNC path and access credentials for the network storage are required in this step.

Manually enter the UNC path for the remote storage, e.g., \\NAS-HOSTNAME\gsuite_backup or \\192.168.1.123\gsuite_backup. Generally, a hostname is preferred over an IP address, especially in an Active Directory domain environment.

Note: Mapping a network resource to a drive letter is not currently supported. Please use UNC paths (\\NAS-HOSTNAME\backup\gsuite) instead of mapped paths (Z:\gsuite).

User and password: The username and password to access the network storage are required.

  • For Windows networks using Active Directory, the preferred username format is <DomainName>\<UserName>. For example: cubebackup\smith ([email protected] is not supported).
  • For Windows networks organized by workgroup, or if the network storage is located outside of your Active Directory domain, the format should be <NASHostName>\<UserName>. For example: backup_nas\smith.

Why are a username and password required?
CubeBackup runs as a service using the system default local service account, which does not have rights to access network resources. This is by design in Windows. In order for CubeBackup to access network storage, a username and password must be supplied.
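As a quick sanity check on the path and username formats described above, the rules can be sketched as simple validators (illustrative helpers only — they are not part of CubeBackup):

```python
import re

def is_unc_path(path):
    r"""True for UNC paths such as \\NAS-HOSTNAME\gsuite_backup;
    False for mapped-drive paths such as Z:\gsuite, which are
    not supported."""
    return re.match(r"^\\\\[^\\]+\\[^\\]+", path) is not None

def is_supported_username(user):
    r"""True for the <DomainName>\<UserName> or <NASHostName>\<UserName>
    form; UPN-style names containing "@" are rejected."""
    if "@" in user:
        return False
    return re.match(r"^[^\\]+\\[^\\]+$", user) is not None

# Examples:
# is_unc_path(r"\\NAS-HOSTNAME\gsuite_backup")   -> True
# is_unc_path(r"Z:\gsuite")                      -> False
# is_supported_username(r"cubebackup\smith")     -> True
```

If either check fails for the values you plan to enter, correct the format before continuing the wizard.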

When all information has been entered, click the Next button.

Migrate to Amazon S3

Storage type: Select Amazon S3 from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

S3 Bucket: Before you can back up Google Drive, Shared drives, Contacts, Calendar, and Sites data to Amazon S3, you will first need to create and configure a private Amazon S3 bucket by following the instructions below or watching the demo:

  1. Create an Amazon AWS account

    If your company has never used any Amazon AWS service, such as Amazon EC2 or Amazon S3, you will need to create an Amazon AWS account. Please visit Amazon AWS, click the Create an AWS Account button, and follow the instructions.

    If you already have an AWS account, you can sign in directly using your account.

  2. Create Amazon S3 bucket

    Amazon S3 (Amazon Simple Storage Service) is one of the most widely used cloud storage services in the world. It has proven to be secure, cost-effective, and reliable. Amazon S3 stores data as objects within buckets. Each object consists of a file and attached metadata. Buckets are configurable containers for data objects, located in specific geographic regions, with controlled access and detailed access logs.

    To create your S3 bucket for Google Workspace backup data, please follow Amazon's official instructions at: https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html

    After the bucket has been successfully created, you can change the default configuration on the "Properties" page. You may wish to enable "Server access logging" or "Default encryption", depending on your company policies, but these options are not necessary for the operation of CubeBackup.

    It is strongly recommended that you create a separate bucket only for CubeBackup.

  3. Create an IAM account

    AWS IAM (Identity and Access Management) is a web service that helps you securely control access to AWS resources. The IAM account will be used to control access to the S3 bucket.

    Instead of defining permissions for the IAM account directly, it is more convenient to create a group with predefined policies and then assign the IAM user to that group.

    Here are brief instructions for creating an IAM user for CubeBackup:

    1. Open the IAM console at https://console.aws.amazon.com/iam/
    2. In the navigation pane, choose "Users" and then choose Add user.
    3. Enter a User name for the new user (e.g., CubeBackupS3), and click Next.
    4. On the Set permissions page, click Create group.
    5. On the Create user group dialog, enter a User group name (e.g., S3Access), check the AmazonS3FullAccess policy, and then click Create user group.
      Tip: If you want to create an IAM account which only has permissions on the newly created S3 bucket (not the "AmazonS3FullAccess" policy), please refer to this doc.
    6. Back on the Set permissions page, make sure the newly created group is checked, then click Next.
    7. Click Create user, and choose the name of the intended user in the user list.
    8. Choose the Security credentials tab on the user detail page. In the Access keys section, choose Create access key.
    9. On the Access key best practices & alternatives page, choose Application running outside AWS, then choose Next.
    10. Set a description tag value for the access key if you wish. Then choose Create access key.
    11. On the Retrieve access keys page, choose either Show to reveal the value of your user's secret access key, or Download .csv file. This is your only opportunity to save your secret access key. You will need the Access key and Secret access key values for the next step.

In step 2 of the CubeBackup wizard, you can now enter the name of your Amazon S3 bucket and copy the Access key ID and Secret access key values into the corresponding textboxes.

For detailed information about creating IAM accounts, please visit: AWS IAM account Guide
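The tip in step 5 refers to restricting the IAM user to a single bucket instead of granting AmazonS3FullAccess. As a rough illustration of what such a bucket-scoped policy can look like (the bucket name is a placeholder, and your security team may want a narrower action list than "s3:*" — consult the doc linked in the tip for the exact policy CubeBackup requires), the policy document generally follows this shape:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    }
  ]
}
```

Note that both the bucket ARN and the "/*" object ARN are needed: the former covers bucket-level operations such as listing, the latter covers reading and writing the objects themselves.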

Storage class: Select an Amazon S3 storage class for the backup data. Standard-IA or One Zone-IA is recommended.

For more information about Amazon S3 storage classes, please visit AWS Storage classes. You can find the pricing details for the different S3 storage classes at S3 pricing.

When all information has been entered, click the Next button.

Migrate to Google Cloud storage

Storage type: Select Google Cloud storage from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

Bucket: Before you can back up data to Google Cloud storage, you will first need to create and configure a private Google Cloud Storage bucket using the following steps:

  1. Log in to Google Cloud Platform (GCP).
  2. Select the project in which you created the service account for CubeBackup during the initial configuration.

    Tip: You can select the project in the project drop-down list at the top of the page to make it the active project. The project name is CubeBackup by default.

  3. Create a Google Cloud Storage bucket.

    • Select STORAGE > Cloud Storage > Browser from the navigation menu.

    Tip: The navigation menu slides out from the left of the screen when you click the main menu icon in the upper left corner of the page.

    • In the Cloud Storage Browser page, click CREATE BUCKET.
    • On the Create a bucket page, enter a name for the bucket, and click CONTINUE.
    • Choose a location type for the bucket (Region or Dual-region is recommended), then select a location for the bucket, and then click CONTINUE.

    Tips:
    1. Please select the location based on the security & privacy policy of your organization. For example, EU organizations may need to select Europe to comply with the GDPR.
    2. Select a location that is the same as, or near, the location of your Google Compute Engine VM.

    • Choose a default storage class for the backup data (Coldline is recommended), then click CONTINUE.
    • Select Uniform as the Access control type, and click CONTINUE.
    • Additional options should be left as default. Then click CREATE.

    Tip: Since CubeBackup constantly overwrites the SQLite files during each backup, enabling Object versioning or a Retention policy would lead to unnecessary file duplication and extra costs.

Storage class: The storage class for the backup data. Coldline is recommended. For more information about Google Cloud storage classes, please visit Storage classes. You can find the pricing details for the different Google Cloud storage classes at Cloud Storage Pricing.

When all information has been entered, click the Next button.

Migrate to Azure cloud storage

Storage type: Select Azure Blob storage from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

Storage account: Your Azure Storage Account.

Access key: The Access Key to your Storage Account.

Container: The container created in your Azure Storage Account.

For more information about Azure Blob storage, the storage account, and the container, please visit Introduction to Azure Blob storage.

Access tier: The Access Tier for Azure Blob Storage. Cool is recommended.

For more information about Azure Blob Storage Access tiers, see the Azure documentation. You can find the pricing details for the different access tiers on the Azure Storage pricing page.

If you are an experienced Azure user, you may skip the instructions below. If you are new to Azure storage, please follow the instructions below or watch the demo to create a Storage account and a Container for Azure Blob Storage.

  • Create a storage account

    1. Log in to the Microsoft Azure Portal using an Azure account with an active subscription. If you do not have an Azure account, sign up to get a new account.
    2. Select Storage Accounts from the left panel and click + Create.
    3. On the Basics tab, select the Subscription and Resource group in which you'd like to create the storage account.
    4. Next, enter a valid and unique name for your storage account.
    5. Select a Region for your storage account or simply use the default one.

      Note: Please select the location based on the security & privacy policy of your organization. For example, EU organizations may need to select Europe to comply with the GDPR.

    6. Select the Performance tier. Standard is recommended.

    7. Choose a Redundancy policy to specify how the data in your Azure Storage account is replicated. Zone-redundant storage (ZRS) is recommended. For more information about replication strategies, see Azure Storage redundancy.

    8. On the Data protection tab, uncheck Enable soft delete for blobs. Since CubeBackup constantly overwrites the SQLite files during each backup, enabling this option would lead to unnecessary file duplication and extra costs.

    9. Additional options are available under Advanced, Networking, Data protection and Tags, but these can be left as default.

    10. Select the Review + create tab, review your storage account settings, and then click Create. The deployment should only take a few moments to complete.

  • Get Access key
    To authenticate CubeBackup's requests to your storage account, an Access key is required.

    1. In the detail page of your newly created storage account, select Access keys under Security + networking from the left panel.
    2. On the Access keys page, click Show keys.
    3. Copy the access key from the Key text box of either key1 or key2 and paste it into the Access key textbox on the CubeBackup configuration wizard.

  • Create a new container

    1. In the detail page of your newly created storage account, click Containers under Data storage from the left panel.
    2. On the containers page, click + Container.
    3. Enter a valid Name and ensure the Public access level is Private (no anonymous access). You can leave the other Advanced settings as default.
    4. Click Create.

When all information has been entered in the configuration wizard, click the Next button.

Migrate to S3 compatible storage


CubeBackup supports AWS S3 compatible storage, such as Wasabi and Backblaze B2.

Storage type: Select Amazon S3 compatible storage from the dropdown list.

Data index path: For performance reasons, the data index must be stored on a local disk. In most cases, there is no need to specify a new path for the data index.

Endpoint: The request URL for your storage bucket.

Bucket: Your S3 compatible storage bucket.

Access key ID: The key ID to access your S3 compatible storage.

Secret access key: The access key value to your S3 compatible storage.

Region: Some self-hosted S3-compatible storage services may require you to enter the region manually. Wasabi and Backblaze users can ignore this field.

  6. In Step 3, please carefully confirm the new storage details, and then click Start migration.

  7. CubeBackup will begin copying your previous backup data to the new storage location. After the migration is complete, all subsequent backups will use the new storage location automatically.
    Note:
    1. If something should go wrong and the data migration fails, don't worry! All backup data and settings remain unchanged in the original location, and CubeBackup will revert to the original location and settings.
    2. Even after a successful data migration, the original backup data remains untouched for security reasons. Once you have confirmed that CubeBackup is functioning properly using the new location, you may manually remove the data in the old location at your convenience.

Monitor the migration

Once the migration has begun, it is safe to close the browser. The migration process will continue to run in the background. Depending on the amount of data being migrated, this process may take a considerable amount of time. You can monitor the progress of the migration at any time by reopening the web console.

NOTE: The migration progress is only visible to system administrators. Restricted admin accounts and individual users are temporarily blocked from accessing the web console.

CubeBackup will automatically send a notification to your mailbox once the migration is successful, or in the event of a failure. You will need to log in to the web console to confirm the migration status. For a failed migration, you can either abort the entire process or retry the unfinished data.

NOTE: If the migration has failed due to errors, clicking the Retry button will retry all failed files and resume the migration from where it left off.

CubeBackup also keeps a detailed record of the migration status of each file. You can find the log file at <installation directory>/log/migration.log.

On Windows, the default installation directory is C:\Program Files\CubeBackup4\.
On Linux, the default installation directory is /opt/cubebackup/.
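If you want a quick summary of that log, the sketch below scans it for failed entries. The exact format of migration.log is not documented here, so this is an assumption-laden illustration: it treats every non-empty line as one record and any line containing "error" (case-insensitive) as a failure — adjust the marker to match what your migration.log actually contains.

```python
from pathlib import Path

def summarize_migration_log(path, failure_marker="error"):
    """Count total and failed entries in a migration log.
    `failure_marker` is an assumed pattern, not a documented
    CubeBackup log format -- inspect your own log first."""
    total = failed = 0
    for line in Path(path).read_text(errors="replace").splitlines():
        if not line.strip():
            continue
        total += 1
        if failure_marker in line.lower():
            failed += 1
    return total, failed

# Example (Linux default path):
# total, failed = summarize_migration_log("/opt/cubebackup/log/migration.log")
# print(f"{failed} of {total} entries failed")
```

A quick count like this can tell you whether a Retry is likely to finish the job or whether a larger problem (for example, an unreachable storage target) needs attention first.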

Cancel the migration

CubeBackup will not resume its regular backups until the data migration is complete. However, an urgent backup or restore request may arise that must be performed before the migration continues. You may also need to pause the current migration temporarily, or even switch to a different target storage location and start from scratch.

  1. If you are still in the configuration wizard and have not yet started the migration, you can simply move back to Step 2 and click Cancel migration.
  2. If the migration process has already been initiated, but you need to perform a backup or restoration in the web console immediately, you can use the Abort button to quit the current migration process.

    Later, if you wish to resume the migration, please initiate a second migration to the same storage location. CubeBackup will automatically skip any unchanged files that have already been moved to the new location and continue the migration from where you left off.

  3. To switch to a different target storage location, click the Abort button, reopen the migration wizard at <IP or domain name>/migrate, and initiate a completely new migration.