Auristor’s distributed file system looks like local NAS but reaches well beyond a single physical site, moving data across global locations rapidly and securely.
It is for customers that want to share files between users and locations but with a great deal more reassurance of data integrity, security and performance than is possible with public cloud services.
Those are the selling points for Auristor, a US-based software maker born at the end of the 2000s that brought a commercial version of the Andrew File System (AFS) to market.
Like NFS and SMB, AFS allows multiple users to access file servers as easily as a local disk by navigating a hierarchy of directories.
But unlike its popular competitors, AFS works beyond the four walls of the local site, so when, for example, a company’s Australian office puts new documents on its server, offices in India, the UK or Canada can access them immediately too.
“The companies that use our product are those that no longer want to take files from their datacentres and copy them to the cloud so their international staff can access them,” said Jeffrey Altman, founder of Auristor, speaking to Computer Weekly’s French sister publication Le MagIT.
Altman spelt out the drawbacks of this way of working: the cost of online storage services, where every access is invoiced on top of the capacity occupied; the sluggishness of access to online services; a potentially anarchic multiplication of versions modified by numerous individuals; and the obvious security risk of downloaded copies of documents.
“Our solution offers the same ease of use and the same security as a local system of file sharing,” said Altman, adding that Auristor’s first customers were universities and public sector organisations in the US. Auristor later broadened its customer base to include airline giant KLM and researchers at CERN and Intel.
Auristor is not the only company to offer a route from Windows, Mac or Linux workstations to POSIX-compliant files located elsewhere on the internet. There are also the likes of Nasuni and Ctera, but Altman is keen to distance Auristor from those products.
“We are not a proxy,” he said. “Our solution is tailored to large enterprise use cases. We provide performance similar to HPC [high-performance computing] file systems, with the highest levels of security included.”
Auristor’s file system is called AuristorFS. It is built on OpenAFS, an open source distributed file system that runs on Windows, macOS and Linux (where it is still available as a package) and was donated to the community by IBM. OpenAFS enables people in multiple locations to collaborate on the same files: although the files can reside anywhere in the distributed system, they appear local to users, and AuristorFS finds the correct file automatically.
A collection of file servers connected by AuristorFS is called a cell. To reference a cell by a global domain name, all that is needed is a DNS entry for its servers in the enterprise’s domain.
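By way of illustration, the AFS3 volume location service that underpins this is conventionally advertised in DNS as SRV records (RFC 5864). The short Python sketch below – which assumes the dnspython package and a hypothetical cell named example.com, and is not Auristor’s own tooling – shows what such a lookup amounts to:

```python
# Sketch only: find the volume-location servers of a hypothetical AFS cell
# via standard DNS. Requires the dnspython package (pip install dnspython).
import dns.resolver

CELL = "example.com"  # hypothetical cell name

# AFS3 location servers are conventionally published as SRV records named
# _afs3-vlserver._udp.<cell> (RFC 5864); older deployments use AFSDB records.
answers = dns.resolver.resolve(f"_afs3-vlserver._udp.{CELL}", "SRV")
for record in answers:
    print(f"vlserver {record.target} port {record.port} "
          f"(priority {record.priority}, weight {record.weight})")
```

The client agent performs this kind of lookup itself; the point is simply that nothing beyond ordinary DNS is needed to make a cell reachable from anywhere.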
For each cell, administrators can configure access rights to directories that are shared between physical server locations. They can also choose to replicate frequently accessed directories across several instances to speed up access in each geographic location.
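As a rough sketch of what that administration involves – shown here with the OpenAFS-style fs and vos command-line tools (AuristorFS ships compatible equivalents, though the details may differ), and with hypothetical directory, group, server and volume names – an administrator might grant a group read access to a shared directory and then push a read-only replica of the backing volume to a second site:

```python
# Sketch only: wrap the OpenAFS-style "fs" and "vos" administration tools.
# The directory, group, server and volume names below are hypothetical.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Grant the "engineering" group read and lookup rights on a shared directory.
run("fs", "setacl", "-dir", "/afs/example.com/projects/design",
    "-acl", "engineering", "rl")

# Register a read-only replica site for the backing volume on a server at
# another location, then release the volume so that site gets a local copy.
run("vos", "addsite", "-server", "sydney-fs1", "-partition", "a",
    "-id", "projects.design")
run("vos", "release", "projects.design")
```

In the AFS model, reads are then served from the nearest read-only copy, while writes continue to go to the single read-write instance.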
Access to an AuristorFS cell is enabled by an agent on the user’s machine, which also acts as a cache. When a file in a cell is opened, it is downloaded to the user’s machine to avoid latency during incremental changes.
Key to the AuristorFS way of working is that the agent synchronises the user’s modifications with the file contents on the server at regular intervals. In this way, if several users modify a file at the same time, each sees the others’ changes in real time, which avoids the proliferation of multiple versions.
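The mechanism behind this in the AFS family is whole-file caching backed by callback promises: the server remembers which clients hold a cached copy of a file and notifies them when it changes, so stale copies are dropped rather than multiplied. The toy Python model below illustrates that idea only; it is not AuristorFS’s actual protocol:

```python
# Toy model of AFS-style whole-file caching with callback invalidation.
# Not AuristorFS's wire protocol; it only illustrates the coherence idea.

class Server:
    def __init__(self):
        self.files = {}       # path -> contents
        self.callbacks = {}   # path -> clients holding a cached copy

    def fetch(self, client, path):
        # The client downloads the whole file; the server records a
        # callback promise so it can notify the client of later changes.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files.get(path, "")

    def store(self, writer, path, contents):
        self.files[path] = contents
        # Break callbacks: other caching clients drop their copies and
        # will re-fetch the new version on their next access.
        for client in self.callbacks.get(path, set()) - {writer}:
            client.invalidate(path)
        self.callbacks[path] = {writer}

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}       # local whole-file cache

    def read(self, path):
        if path not in self.cache:
            self.cache[path] = self.server.fetch(self, path)
        return self.cache[path]

    def write(self, path, contents):
        self.cache[path] = contents
        self.server.store(self, path, contents)  # synchronise back

    def invalidate(self, path):
        self.cache.pop(path, None)

server = Server()
alice, bob = Client(server), Client(server)
alice.write("/afs/example.com/doc.txt", "v1")
print(bob.read("/afs/example.com/doc.txt"))    # "v1", now cached by bob
alice.write("/afs/example.com/doc.txt", "v2")  # bob's cached copy is invalidated
print(bob.read("/afs/example.com/doc.txt"))    # re-fetches "v2"
```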
“Obviously, office file sharing is a use case that comes to mind,” said Altman. “But that’s not what customers choose us for. Rather, it is to share large files and new datasets – mapping, video, etc – or internal applications in container format.”
Compared with OpenAFS, AuristorFS is much faster. On each CPU core, 16 processes respond simultaneously to read/write requests, compared with just one in OpenAFS, and that boosts transfer speeds by 3x to 4x. Auristor said its software can use 64 processor cores, while OpenAFS can use only 16.
The latest version – designated 0.189 by Auristor – will further improve speed of execution.
AuristorFS is built for very large-scale workloads. Its cells can group together 2^64 client machines and servers and contain 2^90 files or directories. Each of its directories can hold two million files, compared with a little less than 65,000 on OpenAFS.
It is validated on Linux distributions (Red Hat, Debian, Ubuntu and CentOS, for example) and on the Linux images offered by Oracle and AWS for virtual machines in their public clouds, including ARM and Power versions.
AuristorFS also holds its own on client workstations. While OpenAFS effectively offers only a Linux client – its Windows and macOS support hasn’t been updated since 2014 – Auristor provides an agent for macOS (compatible with Big Sur) and Windows (compatible with Windows 11). The agent is primarily tasked with synchronising contents almost immediately, whereas the equivalent function in OpenAFS runs every 10 or 15 minutes.
In the most recent version, the AuristorFS agent uses the acceleration features of the host processor (Intel, AMD and ARM) to encrypt and decrypt packets received and sent over the network.
On a Mac with a Core i5 CPU, transfer times improve by between 3x and 14x, depending on the encryption required by the customer’s authentication software.
Pricing for Auristor starts at $21,000 a year. That covers one AFS cell with four file servers, an unlimited number of clients, files and capacity, and technical support for the year.