data center

What is a data center?

A data center — also known as a datacenter or data centre — is a facility composed of networked computers, storage systems and computing infrastructure that organizations use to assemble, process, store and disseminate large amounts of data. A business typically relies heavily on the applications, services and data contained within a data center, making it a critical asset for everyday operations.

Enterprise data centers increasingly incorporate facilities for securing and protecting cloud computing resources and in-house, on-site resources. As enterprises turn to cloud computing, the boundaries between cloud providers’ data centers and enterprise data centers become less clear-cut.

How do data centers work?

A data center facility, which enables an organization to collect its resources and infrastructure for data processing, storage and communications, includes the following:

  • systems for storing, sharing, accessing and processing data across the organization;
  • physical infrastructure for supporting data processing and data communications; and
  • utilities such as cooling, electricity, network security access and uninterruptible power supplies (UPSes).

Gathering all these resources in a data center enables the organization to do the following:

  • protect proprietary systems and data;
  • centralize IT and data processing employees, contractors and vendors;
  • apply information security controls to proprietary systems and data; and
  • realize economies of scale by consolidating sensitive systems in one place.

Why are data centers important?

Data centers support almost all computation, data storage, and network and business applications for the enterprise. To the extent that the business of a modern enterprise is run on computers, the data center is the business.

Data centers enable organizations to concentrate the following in one place:

  • IT and data processing personnel;
  • computing and network connectivity infrastructure; and
  • computing facility security.

What are the core components of data centers?

Elements of a data center are generally divided into the following primary categories:

  • Facility. This includes the physical location with security access controls and sufficient square footage to house the data center’s infrastructure and equipment.
  • Enterprise data storage. A modern data center houses an organization’s data systems in a well-protected physical infrastructure, which includes servers, storage subsystems, networking switches, routers, firewalls, cabling and physical racks.
  • Support infrastructure. This equipment helps sustain the highest possible availability and uptime. Components of the support infrastructure include the following:
    • power distribution and supplemental power subsystems;
    • electrical switching;
    • UPSes;
    • backup generators;
    • ventilation and data center cooling systems, such as in-row cooling configurations and computer room air conditioners; and
    • adequate provisioning for network carrier, or telecom, connectivity.
  • Operational staff. These employees are required to maintain and monitor IT and infrastructure equipment around the clock.

What are the types of data centers?

Depending on the ownership and precise requirements of a business, a data center’s size, shape, location and capacity may vary.

Common data center types include the following:

  • Enterprise data centers. These proprietary data centers are built and owned by organizations for their internal end users. They support the IT operations and critical applications of a single organization and can be located both on-site and off-site.
  • Managed services data centers. Managed by third parties, these data centers provide all aspects of data storage and computing services. Companies lease, instead of buy, the infrastructure and services.
  • Cloud-based data centers. These off-site distributed data centers are managed by third-party or public cloud providers, such as Amazon Web Services, Microsoft Azure or Google Cloud. Based on an infrastructure-as-a-service model, the leased infrastructure enables customers to provision a virtual data center within minutes (a brief provisioning sketch follows this list).
  • Colocation data centers. These rental spaces inside colocation facilities are owned by third parties. The renting organization provides the hardware, and the data center provides and manages the infrastructure, including physical space, bandwidth, cooling and security systems. Colocation is appealing to organizations that want to avoid the large capital expenditures associated with building and maintaining their own data centers.
  • Edge data centers. These are smaller facilities that solve the latency problem by being geographically closer to the edge of the network and data sources.
  • Hyperscale data centers. Synonymous with large-scale providers, such as Amazon, Meta and Google, these hyperscale computing infrastructures maximize hardware density, while minimizing the cost of cooling and administrative overhead.
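
As an illustration of the infrastructure-as-a-service model noted in the cloud-based data center entry above, the sketch below shows how a customer might provision a single virtual server programmatically. It is a minimal example under stated assumptions: it assumes an AWS account with the boto3 SDK installed and credentials configured, and the AMI ID, instance type and tag values are placeholders rather than recommendations.

```python
# Minimal sketch: provisioning one virtual server on an IaaS cloud (AWS via boto3).
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single small virtual machine -- the cloud provider's data center
# supplies the physical servers, storage, power and cooling behind it.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "example-virtual-server"}],
    }],
)

print("Launched instance:", response["Instances"][0]["InstanceId"])
```

The same provisioning can be driven through a web console or infrastructure-as-code tooling; the point is that the customer never touches the provider’s physical facility.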

What is the infrastructure of a data center?

Small businesses may operate successfully with several servers and storage arrays networked within a closet or small room, while major computing organizations may fill an enormous warehouse space with data center equipment and infrastructure. In other cases, data centers can be assembled in mobile installations, such as shipping containers, also known as data centers in a box, which can be moved and deployed as required.

Regardless of size, data centers can be classified by levels of reliability or resilience, sometimes referred to as data center tiers. In 2005, the American National Standards Institute and the Telecommunications Industry Association published standard ANSI/TIA-942, “Telecommunications Infrastructure Standard for Data Centers,” which defines four tiers of data center design and implementation guidelines.

Tiers can be differentiated by available resources, data center capacities or uptime guarantees. The Uptime Institute defines data center tiers as follows:

  • Tier I. These are the most basic type of data centers, and they incorporate a UPS. Tier I data centers do not provide redundant systems but should guarantee at least 99.671% uptime.
  • Tier II. These data centers include system, power, and cooling redundancy and guarantee at least 99.741% uptime.
  • Tier III. These data centers provide partial fault tolerance, 72 hours of outage protection, full redundancy and a 99.982% uptime guarantee.
  • Tier IV. These data centers guarantee 99.995% uptime — or no more than 26.3 minutes of downtime per year — as well as full fault tolerance, system redundancy and 96 hours of outage protection. (A short calculation relating uptime percentages to downtime follows this list.)
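
The downtime figures above follow directly from the uptime percentages. The short Python sketch below converts each tier’s uptime guarantee into the maximum downtime allowed per year; the percentages are taken from the list above, and the rest is straightforward arithmetic.

```python
# Convert each tier's uptime guarantee into the maximum allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

tiers = {
    "Tier I": 99.671,
    "Tier II": 99.741,
    "Tier III": 99.982,
    "Tier IV": 99.995,
}

for tier, uptime_pct in tiers.items():
    downtime_minutes = MINUTES_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{tier}: at most {downtime_minutes:.1f} minutes "
          f"({downtime_minutes / 60:.1f} hours) of downtime per year")
```

For Tier IV, this works out to roughly 26.3 minutes per year, matching the figure quoted above.
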
A variety of factors can disrupt a data center’s operations, which makes the choice of site critical.

Beyond the basic issues of cost and taxes, sites are selected based on a multitude of criteria, such as geographic location, seismic and meteorological stability, access to roads and airports, availability of energy and telecommunications, and even the prevailing political environment.

Once a site is secured, the data center architecture can be designed with attention to the mechanical and electrical infrastructure, as well as the composition and layout of the IT equipment. All these issues are guided by the availability and efficiency goals of the desired data center tier.

How are data centers managed?

Data center management encompasses the following:

  • Facilities management. Managing the physical data center facility can include duties related to the real estate of the facility, utilities, access control and personnel.
  • Data center inventory or asset management. This covers the data center’s hardware assets, as well as software licensing and release management.
  • Data center infrastructure management. DCIM lies at the intersection of IT and facility management and is usually accomplished through monitoring of the data center’s performance to optimize energy, equipment and floor space use.
  • Technical support. The data center provides technical services to the organization, and as such, it must also provide technical support to enterprise end users.
  • Operations. Data center management includes day-to-day processes and services that are provided by the data center.
  • Infrastructure management and monitoring. Modern data centers use monitoring tools that enable remote IT data center administrators to oversee the facility and equipment, measure performance, detect failures and implement corrective actions without ever physically entering the data center room. (A minimal monitoring sketch follows this list.)
  • Energy consumption and efficiency. A small data center may need relatively little power, but an enterprise-scale data center can require more than 100 megawatts. Today, the green data center, which is designed for minimum environmental impact through the use of low-emission building materials, catalytic converters and alternative energy technologies, is growing in popularity.
  • Data center security and safety. Data center design must also implement sound safety and security practices, including the layout of doorways and access corridors to accommodate the movement of large IT equipment and employee access. Fire suppression is another key safety area, and the extensive use of high-energy electrical and electronic equipment precludes common sprinklers.
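
To make the monitoring role concrete, the hypothetical sketch below polls rack temperature readings and flags any rack that exceeds an alert threshold. The read_rack_temperatures function, the rack names and the threshold value are placeholders for illustration only; real DCIM and monitoring tools expose far richer interfaces and metrics.

```python
# Hypothetical sketch of a remote monitoring check: poll sensor readings and
# flag any rack whose intake temperature exceeds a threshold.
# read_rack_temperatures() stands in for whatever API a real DCIM tool exposes.

TEMP_THRESHOLD_C = 27.0  # example alert threshold for rack intake air


def read_rack_temperatures():
    """Placeholder: return a mapping of rack ID to intake temperature in Celsius."""
    return {"rack-01": 24.5, "rack-02": 28.1, "rack-03": 25.0}


def check_temperatures():
    alerts = []
    for rack_id, temp_c in read_rack_temperatures().items():
        if temp_c > TEMP_THRESHOLD_C:
            alerts.append(f"{rack_id}: intake temperature {temp_c:.1f} C exceeds "
                          f"{TEMP_THRESHOLD_C:.1f} C threshold")
    return alerts


if __name__ == "__main__":
    for alert in check_temperatures():
        print("ALERT:", alert)
```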

What is data center consolidation?

Modern businesses may use two or more data center installations across multiple locations for greater resilience and better application performance, which lowers latency by locating workloads closer to users.

Conversely, a business with multiple data centers may opt to consolidate them, reducing the number of locations in order to minimize the costs of IT operations. Consolidation typically occurs during mergers and acquisitions, when the acquiring business no longer needs the data centers owned by the acquired business.

Data center vs. the cloud vs. a server farm: What are the differences?

How and where data is stored plays a crucial role in the overall success of a business. Over time, businesses have transitioned from simple on-site server farms and large enterprise data centers to cloud infrastructures.

The key differences among enterprise data centers, cloud service vendors and server farms include the following:

  • Enterprise data centers are designed for mission-critical business workloads and are built with availability and scalability in mind. They offer everything required to maintain seamless business operations, including physical computer equipment and storage devices, as well as disaster recovery and backup.
  • Cloud vendors enable users to purchase access to the cloud service provider’s resources without having to build or buy their own infrastructure. Customers can manage their virtualized or nonvirtualized resources without having physical access to the cloud provider’s facility.

    The main difference between a cloud data center and a typical enterprise data center is one of scale. Because cloud data centers serve many different organizations, they can be huge.

  • Server farms are bare-bones data centers. Many interconnected servers live inside the same facility to provide centralized control and easy accessibility. Even with cloud computing gaining popularity, many businesses still prefer server farms for reasons including cost savings, security and performance optimization. In fact, cloud providers also use server farms inside their data centers.

    Further blurring the lines between these platforms is the growth of the hybrid cloud. As enterprises increasingly rely on public cloud providers, they must incorporate connectivity between their own data centers and their cloud providers.

Large enterprises like Google can require large data centers, like this Google data center in Douglas County, Ga.

Evolution of data centers

The origins of the first data centers can be traced back to the 1940s and the existence of early computer systems, like the Electronic Numerical Integrator and Computer, or ENIAC. These early machines, which were used by the military, were complex to maintain and operate. They required specialized computer rooms with racks, cable trays, cooling mechanisms and access restrictions to accommodate all the equipment and implement the proper security measures.

However, it was not until the 1990s, when IT operations started to expand and inexpensive networking equipment became available, that the term data center first came into use. It became possible to store all of a company’s necessary servers in a room within the company. These specialized computer rooms were dubbed data centers within the organizations, and the term gained traction.

Around the time of the dot-com bubble in the late 1990s, the need for internet speed and a constant internet presence for companies necessitated bigger facilities to house the large amount of networking equipment needed. It was at this point that data centers became popular and began to resemble the ones described above.

Learn which design best practices organizations are using to build green, sustainable data centers.
