Backup2Go recommendations
Posted by Andre Kuehnemund on 13 November 2013 17:29

On file systems without hardware snapshot capability, P5 creates and maintains snapshots using soft and hard links. This includes, for instance, HFS+ on Mac, NTFS on Windows, and ext4 on Linux. A lot of effort has gone into that implementation, but it is limited by the total number of files the underlying file system can handle.

Such a link-based data repository requires additional internal work:

  • When snapshots are created, P5 has to walk the folder tree and create links.
  • Nightly cleanup and reorganization is required; this runs in a maintenance procedure starting at 1 a.m.
As a result, the sheer number of files in the Backup2Go repository can lead to high load when there are too many snapshots and/or too many hosts.
There is no exact number of workstations that limits the process, nor an exact number of files, as these parameters depend on the hardware.
This approach works well for smaller installations of up to 20-25 workstations, depending on how many files are saved per workstation and how often
backups run.
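The link-based approach can be illustrated with a short Python sketch. This is a simplified illustration of the general hard-link snapshot technique (as used by tools such as rsync with --link-dest), not P5's actual implementation: each snapshot is a directory tree in which unchanged files are hard links back to the previous snapshot, so only changed files consume extra space — but every file still costs one directory entry per snapshot, which is exactly the per-file overhead described above.

```python
import os
import shutil
import tempfile

def take_snapshot(source, prev_snap, new_snap):
    """Create a link-based snapshot of `source` in `new_snap`.

    Files that appear unchanged since `prev_snap` (same size and mtime)
    are hard-linked; new or modified files are copied.
    """
    for root, dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        os.makedirs(os.path.join(new_snap, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(new_snap, rel, name)
            old = os.path.join(prev_snap, rel, name) if prev_snap else None
            if (old and os.path.exists(old)
                    and os.path.getsize(old) == os.path.getsize(src)
                    and os.path.getmtime(old) == os.path.getmtime(src)):
                os.link(old, dst)       # unchanged: share the same inode
            else:
                shutil.copy2(src, dst)  # new or changed: store a fresh copy

# Demo: two snapshots of a one-file source tree.
base = tempfile.mkdtemp()
src = os.path.join(base, "src")
os.makedirs(src)
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("hello")

snap1 = os.path.join(base, "snap1")
take_snapshot(src, None, snap1)
snap2 = os.path.join(base, "snap2")
take_snapshot(src, snap1, snap2)

# The unchanged file is now a single inode referenced from both snapshots.
print(os.stat(os.path.join(snap1, "a.txt")).st_nlink)  # → 2
```

Note that even though the unchanged file is stored only once, both snapshot directories still carry their own directory entry for it — multiplied across millions of files and dozens of snapshots, this is where the load described above comes from.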

On bigger installations, we strongly recommend a repository on a file system that supports native snapshots. Currently, Solaris with ZFS and Linux with Btrfs are supported. On these systems, the file system creates snapshots natively using copy-on-write (COW). Such systems can handle more data and more files, as the folder structure does not require explicit maintenance.
On such systems, many more files can be maintained, so installations of up to 100-150 workstations may be possible.
Still, I/O load and the network bandwidth to the Backup2Go server limit the total size and total number of files, and as above, the exact values are hardware dependent.
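For reference, a native COW snapshot is a single, near-instant metadata operation rather than a tree walk. The dataset and subvolume names below are placeholders, not P5 defaults:

```shell
# ZFS: snapshot the dataset holding the repository
# ("tank/backup" is an example dataset name)
zfs snapshot tank/backup@2013-11-13

# Btrfs: create a read-only snapshot of the repository subvolume
# ("/backup" is an example subvolume path)
btrfs subvolume snapshot -r /backup /backup/.snapshots/2013-11-13
```

Because COW snapshots share all unchanged blocks with the live data, creating one does not touch every file the way a link-based snapshot does.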

If the server turns out to be too slow even with file system snapshots, we recommend splitting the installation across multiple Backup2Go servers.

In addition to the above, there are some hardware recommendations for Backup2Go servers:
  • Use built-in disk drives rather than external USB or FireWire drives as Backup2Go repository storage; the latency of such drives drastically degrades performance.
  • Use local drives rather than network-attached storage (SMB or NFS) as the Backup2Go repository, not only because of the higher latency, but also because such storage loads the LAN, which is already carrying the data transfers from the workstations.
  • Running the Backup2Go server in a virtual machine may expose hardware limitations such as network and disk I/O contention, so we recommend a dedicated physical host.

Comments (4)
Johan Hellstrom
18 December 2013 9:52
Can you point us to a supported solution for running Linux with Btrfs?
According to the Btrfs wiki, it is still considered "experimental".
Andre Kuehnemund
18 February 2014 11:27
We have tested btrfs on Ubuntu 12.04, and as far as we can tell, it works. However, as btrfs is officially still 'experimental', it is subject to change and could potentially break, and we have no control over that. So, while we can do our part to make it work with Presstore, we have no control over what happens to btrfs itself. In other words: use at your own risk. Your mileage may vary.
Ludovic VERDOT
06 September 2016 12:44
BTRFS is still a technology preview.
Red Hat ships BTRFS but does not support it for use in production environments, even in version 7.

For a professional environment with software support under a Red Hat subscription, it seems difficult to use BTRFS.

Why do you still recommend using this filesystem technology?
Andre Kuehnemund
28 December 2016 18:17
We are not forcing anyone to use BTRFS. What we're trying to do is give you options. It's totally up to you whether you want to use it or not. As I wrote earlier: we tested it and it worked during our tests. Of course, every environment is different, and what works for one may not work for someone else. Hence... your mileage may vary. Ubuntu 16.04 includes ZFS, so that would be another option. I'm not sure where Red Hat stands as far as ZFS support goes. ZFS has been around for much longer, and I would therefore expect it to be a more mature product.