Smart Homes, Internet of Things, Industry 4.0: the technologization of professional and everyday life has placed increased demands on IT infrastructure. Fail-safety and backup systems are becoming ever more relevant, because a power failure can cause costs in the five-digit range within a very short time. As part of the "fast realtime" research project, Cloud&Heat Technologies has announced that it is optimizing data availability in case of an emergency. The company has developed a system of cross-location data replication that ensures high availability through the fastest possible transmission rates.
High availability through low latency
According to Gartner, around 4.9 billion connected devices are already in use to control production processes, making them easier to plan and more cost- and time-efficient.
"Disruptions in the IT infrastructure cause delays in the production and supply chain - with disastrous consequences," says Nicholas Röhrs, CEO of Cloud&Heat Technologies. "This is why high availability of data is immensely important. Even compute-intensive jobs, such as those found at large research institutions like the CERN nuclear research center, require stable digital infrastructures. Huge amounts of experimental data are collected, analyzed, and interpreted there every day. And they have to be available around the clock."
Backup data centers as an emergency means
To guard against such an emergency - and with it the temporary loss of important data - public authorities and companies with extensive IT operations and high availability requirements in particular consider using an additional backup data center. The approach rests on the following elements:
· Additional Location - An additional data center can take over the entire IT operation if the primary one fails.
· Increased Speed - In addition to the optimal location and the distance between the data centers, the speed with which the alternative data center takes over the tasks is of particular importance.
· Constant Synchronization - In addition, the data must be constantly synchronized at both locations.
· Low Latency - To provide the data identically at different locations, low latency plays an important role: it enables continuous data reconciliation, replication, and synchronized data storage. Latency is the technical time delay that arises as data packets pass through a network. Keeping this delay very low ("low latency") is important in order to bring the IT infrastructure back into production as quickly as possible in an emergency.
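The failover behavior these points describe can be sketched in a few lines. The following Python snippet is a minimal, illustrative model - not Cloud&Heat's actual system - in which a controller routes requests to the primary site and switches to the backup once heartbeats stop arriving within a timeout:

```python
import time

class FailoverController:
    """Illustrative failover sketch: route to the primary site,
    and fail over to the backup when heartbeats stop arriving."""

    def __init__(self, timeout_s: float = 1.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.active_site = "primary"

    def heartbeat(self):
        # Called whenever the primary site reports in as healthy.
        self.last_heartbeat = time.monotonic()

    def route(self) -> str:
        # Fail over if the primary has been silent longer than the timeout.
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.active_site = "backup"
        return self.active_site

ctl = FailoverController(timeout_s=0.05)
print(ctl.route())   # → primary (heartbeat is fresh)
time.sleep(0.1)      # simulate a silent primary site
print(ctl.route())   # → backup
```

How quickly the backup can actually take over then depends on the synchronization and latency requirements listed above: a failover is only useful if the backup site holds an up-to-date copy of the data.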
Real-time synchronization only of relevant changes
Most replication programs always copy files in full, even if only a small portion of a file has changed. Large files with small changes in particular increase network load enormously and, above all, unnecessarily.
"If a data center fails, the data to be replicated may not have fully synchronized at the backup site. Solving this problem is part of our current research“, explains Marius Feldmann of Cloud&Heat Technologies.
To this end, Cloud&Heat has developed a mechanism that first divides files into so-called blocks. When the software detects a change, it replicates only the changed blocks; the smaller the blocks, the less data has to be sent over the network during replication, which increases transmission speed.
"With short test intervals and the smallest possible amount of data, we can even reduce the times of conventional programs by an average of 30 to 70 percent," continues Feldmann.
Smart search for the best server location
In addition, Cloud&Heat has developed a geolocation mechanism that lets the user select the server location with the lowest latency. It is important not only to replicate as little data as possible across locations, but also to minimize the transmission delay between the user and the cloud. A further research goal is to improve latency between sites: the transmission delay between individual data repositories is minimal when a data center's infrastructure is distributed over several smaller, locally operated sites. The time needed to synchronize all of Site A's data at Site B - securing it in the event of a failure of Site A - can then be reduced to a few milliseconds. Synchronizing and storing data in close proximity to the customer also shortens transfer times.
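A selection mechanism of this kind can be approximated by probing each candidate site and picking the one with the lowest round-trip time. The sketch below is an assumption about how such a selection could work, not Cloud&Heat's implementation; the host names are hypothetical, and the probe is a plain TCP connect:

```python
import socket
import time

def probe_latency(host: str, port: int = 443, timeout: float = 1.0) -> float:
    """Measure one TCP connect round trip to a site, in seconds.
    Returns infinity if the site is unreachable within the timeout."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def best_site(hosts, probe=probe_latency):
    """Pick the candidate site with the lowest measured latency."""
    return min(hosts, key=probe)

# Hypothetical endpoint names for illustration; in practice these would be
# the provider's distributed data center locations.
candidates = ["fra1.example.com", "dre1.example.com", "ber1.example.com"]
# best_site(candidates) returns the reachable host with the fastest connect.
```

Because the probe function is injectable, the selection logic can also be driven by previously measured or cached latencies rather than live probes.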
About fast realtime
Fast realtime is a basic project within the funded project "fast" (fast actuators, sensors and transceivers). Fast started in 2013 and is funded within the BMBF program "Twenty20 - Partnership for Innovation". The program, which involves a total of 80 consortium partners, is dedicated to the future topic of innovative real-time systems. The fast realtime project started on 1 February 2015 and will be completed on 31 January 2018.
About Cloud & Heat Technologies
Cloud&Heat Technologies is a provider of OpenStack-based public and private cloud solutions. Since 2012, the company has been operating its own cloud infrastructure distributed across different locations, offering classic cloud computing (IaaS). With the conception, commissioning, and maintenance of tailor-made cloud solutions for companies, Cloud&Heat Technologies completes its portfolio with the Datacenter in a Box, responding to the rapidly increasing demand for in-house cloud infrastructures. All of the company's cloud solutions are based on Cloud&Heat's own "Datacenter in a Box" server platform, which is uniquely energy-efficient worldwide thanks to innovative hot-water cooling: waste heat is absorbed directly at thermal hotspots such as the CPU and RAM, carried away, and can be reused for heating buildings and preparing hot water. The energy- and cost-efficient concept has won several awards, including the German Data Center Award in 2015 and 2016.
# # #