CONCEPTS

ASIC = Application-Specific Integrated Circuit

High Performance Computing

Containerized Data Center

Natural Gas Generators – Natural Gas Power Plant – Green Energy

Water Cooling Towers – technology used

From AI and data analytics to high-performance computing (HPC) to rendering, data centers are key to solving some of the most important challenges. The end-to-end ABC accelerated computing platform, integrated across hardware and software, gives enterprises the blueprint to a robust, secure infrastructure that supports develop-to-deploy implementations across all modern workloads.

HPC is one of the most essential tools fueling the advancement of science in the data center. Optimizing over 700 applications across a broad range of domains, ABC ASICs are the engine of the modern HPC data center. By delivering breakthrough performance with fewer servers, the ABC data center platform yields faster insights at dramatically lower cost, paving the way to scientific discovery.

Location

Containerized Data Center

The containerized data center is a one-of-a-kind concept: innovative, beneficial, and equally challenging. The idea of housing a data center in a shipping container was first introduced in 2007, and since then numerous international companies have adopted this way of keeping their IT infrastructure well-organized, secure, and easily accessible. Over time, container data centers have become remarkably popular, which has pushed providers to offer a variety of cooling options, sizes, spaces, and layouts.

Put simply, containerization is a solution for companies that lack the experts and skilled staff to maintain their IT: it usually arrives as ready-made, plug-and-play units, so very few on-site employees are required. It is a practical option for organizations short on technical labor.

Built on standardized technology, containerized data centers arrive largely pre-assembled; most of the time they demand no extra hands for design or assembly, though competent personnel should oversee installation. A containerized data center will keep your data maintained, organized, and flowing smoothly.

Pros of Choosing a Containerized Data Center

Free up space

Shifting your complete IT setup to a container leaves you with more than enough space inside your building, as you will not need a large room for a data center. You are free to use that valuable square footage for additional offices.

Easy to deploy

Apart from freeing up space, it offers easy maintenance. First, containerized data centers are generally weather-resistant, so there is no need to worry about weather damage. You can place them anywhere on your company premises, outside the building and even in underused spaces.

And, secondly, although containers are large, they occupy little ground area, because they can be easily and safely stacked. Each container can still be entered separately through access doors on either side.

Faster scalability

We all know how essential scalability is for running your company smoothly! And containerized data centers bring easy scalability right at your door.

Need to set up more equipment? Need more storage space? Not a big deal: just add more containers and stack them up. This lets you meet the organization’s ever-changing demands rapidly and effortlessly.

Made energy efficient

The most highlighted advantage is energy savings. Containerized data centers are built to be energy-efficient, reducing ongoing operational expenses. The modular design includes integrated power and cooling systems, which also help minimize infrastructure expenditure and installation timeframes.

HIGH-PERFORMANCE COMPUTING

Accelerating the Rate of Scientific Discovery

High performance computing (HPC) is one of the most essential tools fueling the advancement of computational science. And the universe of scientific computing has expanded in all directions.

From weather forecasting and energy exploration, to computational fluid dynamics and life sciences, researchers are fusing traditional simulations with artificial intelligence, machine learning, deep learning, big data analytics, and edge-computing to solve the mysteries of the world around us.

HPC WORKLOADS

Simulation and modeling, the convergence of HPC and AI, and scientific visualization are applicable in a wide range of industries, from scientific research to financial modeling. These workloads allow professionals to do their life’s work in their lifetime, with the help of high-performance computing and ABC ASICs.

From the Cloud…

With cloud-based ASIC solutions, enterprises can access high-density computing resources and powerful virtual workstations at any time, from anywhere, with no need to build a physical data center.

To the Data Center…

ASIC-accelerated data centers deliver breakthrough performance for compute and graphics workloads, at any scale with fewer servers, resulting in faster insights and dramatically lower costs. Sensitive data can be stored, processed, and analyzed while operational security is maintained.

To the Edge

AI at the edge needs a scalable, accelerated platform that can drive decisions in real time and allow every industry to deliver automated intelligence to the point of action—stores, manufacturing, hospitals, smart cities.

DATA CENTER-SCALE PERFORMANCE

 

In order to handle the ever-growing demands for higher computational performance driven by increased scientific problem complexity, ABC is creating the next-generation accelerated data center platform.

By leveraging ASIC-powered parallel processing, users can run advanced, large-scale application programs efficiently, reliably, and quickly. And ABC InfiniBand networking with In-Network Computing and advanced acceleration engines provides scalable performance, enabling complex simulations to run with greater speed and efficiency.

The engine of the modern HPC data center, ABC’s compute and networking technologies deliver a dramatic boost in performance and scalability, paving the way to scientific discovery.

Enabling Scientific Research for HPC Developers

Discover how ABC helps developers tackle the complex performance challenges associated with developing HPC applications.

Boosting Performance and Utilization with Multi-Core ASIC

Multi-Core ASIC (MCA), a feature of the ABC X87, allows multiple users to run workloads on the same ASIC, maximizing per-ASIC utilization and user productivity.

Edge computing offers benefits such as lower latency, higher bandwidth, and data sovereignty compared to traditional cloud or data center computing. Many organizations are looking for real-time intelligence from AI applications. For example, self-driving cars, autonomous machines in factories, and industrial inspection all present a serious safety concern if they can’t act quickly enough, in real time, on the data they ingest.

What’s the difference between edge computing and cloud computing?

Edge computing is computing done at or near the source of data, allowing for the real-time processing that intelligent infrastructure prefers. Cloud computing is done in centralized, remote data centers. This type of computing is highly flexible and scalable, making it ideal for customers who want to get started quickly or whose usage varies. Both computing models have distinct advantages, which is why many organizations take a hybrid approach to computing.

The acceptance of SCADA and HMI in the data center has already occurred. As virtualization becomes the standard deployment of computing in an enterprise setting, we will see continued growth of the percentage of customers choosing this approach.

Historically, Human Machine Interface (HMI) and Supervisory Control and Data Acquisition (SCADA) systems were not included in the charter of Information Technology (IT). Process engineering or the plant maintenance organization was normally responsible for the SCADA/HMI systems, including the selection and management of the computing platform and the industrial control network (ICN). In many cases, a lone electrician was responsible for the entire system: the workstation, the ICN, and the Programmable Logic Controller (PLC). In this era of SCADA/HMI, the ICN was rarely interconnected with the IT enterprise network or the Internet. Remote access was typically done using phone modems connected directly to the SCADA host workstation.

Two trends in the second half of the 1990s brought IT and SCADA closer together. The first was the so-called “Y2K scare” that affected all enterprise computing systems, including those found in production management software such as SCADA and HMI. IT departments included production systems in the scope of their audits, checking that they were ready to handle the new century’s date formats.

This alone did not bring about the integration of production systems with the IT architecture. For the most part, these systems remained disconnected from the rest of the corporate IT networks. Given the “air gap” security, combined with the unique requirements for managing traffic on the ICN, centralized IT network management tools simply could not be used effectively.

A second trend that coincided with the Y2K focus was the deployment of Enterprise Resource Planning (ERP) systems in many organizations. The data demands of the ERP and its uncertain connection to the shop floor meant that either a parallel system had to be deployed or SCADA would become the bridge. As a result, SCADA and other production systems, like Manufacturing Execution Systems (MES), were used for tracking production and feeding the ERP’s appetite for shop floor data by connecting to both IT networks and industrial control networks.

 

During the past 15 years we have seen continued integration of IT and production systems. SCADA systems are now for the most part connected to the enterprise networks. Remote access is usually provided via VPNs so the SCADA system can be reached from anywhere on the Internet.

There has been some organizational friction in bringing these two very different cultures together. At the top level, IT generally views its systems as dynamic, with a focus on scalability, performance, and cybersecurity. IT staff may be more willing to apply patches and updates, as long as they know they can roll back to previous configurations if problems occur. The focus of the industrial network, by contrast, is extremely high reliability and safety. Because of the critical nature of the real-time data, even short-term disruptions can impact production rates and the safety of the workers and equipment that rely on it. For this reason, software patches and firmware upgrades are more thoroughly tested before being applied to production systems.

Today we have a world in which the systems are more and more under the protection and management of IT and IT practices. This is particularly true for cyber security and virtualization, which is becoming widely adopted. This leads to the question of whether the SCADA host computer should be removed from the control room or the shop floor and moved into the data center.

We were recently asked to provide guidance to a medium-sized organization considering this issue. They have a large warehouse housing materials that must be kept under strict environmental conditions. To automatically maintain the integrity of the warehouse environment, they purchased a system from a vendor that uses our SCADA platform for the HMI. The vendor delivered the HMI on a single-user workstation connected over an industrial network to several PLCs in the warehouse that controlled the heating and cooling equipment along with other environmental control equipment. The workstation physically sat on a desk in the warehouse.

 

Management was concerned that the workstation was exposed to physical damage from normal warehouse material handling activities such as forklift movements. They were also concerned about the time required to respond to either an accident of this sort or even normal maintenance issues with the workstation, given its physical location. For these reasons, the company wanted to move the workstation from the warehouse floor to the enterprise data center.

Newer Protocols Open Up Possibilities

This was possible because the PLCs communicated over Internet Protocol (IP), which has generally replaced the older serial protocols in modern ICNs. IP made it possible to establish a virtual private network (VPN) from the data center so the HMI could connect to the PLCs on the ICN.

To give the warehouse operator access to the HMI’s Graphical User Interface (GUI), the IT department preferred to use Microsoft Remote Desktop Applications (RDA). RDA is a subset of the widely used Remote Desktop Services (RDS). RDA lets the HMI GUI appear as an icon on the operator’s normal desktop, without the RDS requirement to take over the entire desktop. As a standard Microsoft solution, the use of RDS or RDA is transparent to Microsoft-compliant SCADA software platforms.

The IT department also wanted to host the HMI on a virtual machine (VM) in the data center rather than on a dedicated workstation, as had been the case in the warehouse. Once again, with a standard Microsoft application, running on a VM is transparent, as designed. The only issue out of the ordinary for IT was the need to map a USB port to the host. Use of a USB dongle for licensing is a common practice in SCADA and HMI platforms, but less common for enterprise software. In this case, it was simply a matter of mapping the USB port to the VM hosting the SCADA. A networked USB device may be required in more complicated multi-station networks.

Natural gas power plant

Natural gas power plants generate electricity by burning natural gas as their fuel. There are many types of natural gas power plants, all of which generate electricity but serve different purposes. All natural gas plants use a gas turbine: natural gas is added, along with a stream of air, which combusts and expands through the turbine, causing a generator to spin a magnet and produce electricity. The process also produces waste heat, as the second law of thermodynamics requires; some plants capture and use this waste heat, as explained below.

Natural gas power plants are cheap and quick to build, and they have very high thermodynamic efficiencies compared to other power plants. Burning natural gas produces fewer pollutants, such as NOx, SOx, and particulate matter, than coal or oil.[2] On the other hand, natural gas plants have significantly higher emissions than a nuclear power plant. Air quality therefore tends to improve (i.e., smog is reduced) when switching from coal plants to natural gas plants, but nuclear power does even more to improve air quality.

Despite the improved air quality, natural gas plants contribute significantly to climate change, and that contribution is growing (see pollutant vs greenhouse gas).[3] Natural gas power plants produce considerable carbon dioxide, although less than coal plants do. Moreover, moving natural gas from where it is extracted to the power plants leads to considerable release of methane (natural gas that leaks into the atmosphere). As long as natural gas plants are used to produce electricity, their emissions will continue to warm the planet in dangerous ways.

Natural gas accounts for around 23% of the world’s electricity generation, second only to coal, and that fraction is expected to grow in coming years. This means natural gas’s contribution to climate change will continue to grow.
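The waste heat mentioned above can be recovered by a bottoming steam cycle, which is how combined-cycle gas plants reach their high thermodynamic efficiencies. The sketch below illustrates the standard combined-cycle efficiency relation; the function name and the example efficiencies are illustrative assumptions, not figures from the text.

```python
def combined_cycle_efficiency(gas_turbine_eff: float, steam_turbine_eff: float) -> float:
    """Overall efficiency when a steam cycle recovers the gas turbine's waste heat.

    The gas turbine converts a fraction `gas_turbine_eff` of the fuel energy
    to electricity; the bottoming steam cycle then converts a fraction
    `steam_turbine_eff` of the remaining waste heat.
    """
    return gas_turbine_eff + (1 - gas_turbine_eff) * steam_turbine_eff

# Illustrative values: a ~38%-efficient gas turbine plus a ~30%-efficient
# bottoming steam cycle gives a combined-cycle efficiency of about 57%.
print(round(combined_cycle_efficiency(0.38, 0.30), 3))  # 0.566
```

With these illustrative numbers, recovering the waste heat lifts a 38% simple-cycle turbine to roughly 57% overall, which is why combined-cycle plants are among the most efficient fossil-fuel generators.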

Green Energy 

Green Energy is a fast-growing and increasingly popular endeavor throughout the world. There are power plants that burn landfill gas, which contains 40%-60% methane along with other gases, as fuel. Many, if not most, of these are diesel-engine plants.

 

The European Commission has decided that power plants burning natural gas can be considered generators of green energy. This means they can count as sustainable investments along with nuclear power. The commission’s technical rules on sustainable finance classify a list of sustainable economic activities in the EU. Under these guidelines, economic activities that may help EU countries meet their energy needs while shifting from coal power can be considered sustainable.

To break the deadlock, a compromise was reached. Gas power plants that secure a permit by 31 December 2030 and emit less than the equivalent of 270 g of CO2 per kilowatt-hour (kWh) of electricity will be labelled as sustainable. Firms operating such plants must provide a plan demonstrating that they will shift completely from natural gas to low-carbon fuels or renewables by 31 December 2035.
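The two screening criteria just quoted can be expressed as a simple check. This is a hypothetical sketch of the quoted rules only: the function name is my own, and the separate 2035 transition-plan requirement is not modeled.

```python
from datetime import date

# Figures quoted in the text above: permit deadline and emissions threshold.
PERMIT_DEADLINE = date(2030, 12, 31)
EMISSIONS_LIMIT_G_PER_KWH = 270.0

def qualifies_as_sustainable(permit_date: date, emissions_g_per_kwh: float) -> bool:
    """True if a gas plant is permitted by the deadline and stays under the
    emissions limit (the 2035 transition-plan condition is not modeled here)."""
    return (permit_date <= PERMIT_DEADLINE
            and emissions_g_per_kwh < EMISSIONS_LIMIT_G_PER_KWH)

print(qualifies_as_sustainable(date(2029, 6, 1), 250.0))   # True
print(qualifies_as_sustainable(date(2031, 1, 15), 250.0))  # False
```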

Cooling Tower Facts

Cooling towers are constructed for plant cooling and to protect aquatic environments.

The shape of most cooling towers is a hyperboloid. They are built this way because the broad base provides greater area to encourage evaporation, then narrows to increase air flow velocity, and finally widens slightly to aid in mixing the moisture-laden air into the atmosphere.

The cloud at the top of a cooling tower is clean water vapor, produced by cooling water in a system that is totally separate from the main power system. The water in the reactor stays in a closed system, never coming into contact with the water in the cooling tower.

What Is Wet Bulb Temperature in Cooling Towers?

A cooling tower primarily uses the latent heat of vaporization (evaporation) to cool process water. Minor additional cooling is provided by the air as its temperature increases. Cooling tower selection and performance are based on water flow rate, water inlet temperature, water outlet temperature, and ambient wet bulb temperature. The temperature difference between the inlet and outlet is called the cooling tower water temperature range. Ambient wet bulb temperature and its effect on performance is the subject of this article.

How Is Wet Bulb Temperature Measured? 

Ambient wet bulb temperature is measured with a device called a psychrometer, which places a thin film of water on the bulb of a thermometer that is twirled in the air. After about a minute, the thermometer will show a reduced temperature. The low point, when no additional twirling reduces the temperature further, is the wet bulb temperature. The measured wet bulb temperature is a function of relative humidity and ambient air temperature. It essentially indicates how much more water vapor the air can absorb under current conditions: a lower wet bulb temperature means the air is drier and can absorb more water vapor than it can at a higher wet bulb temperature.
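For readers who want numbers rather than a twirled thermometer, wet bulb temperature can also be estimated from dry bulb temperature and relative humidity. The sketch below uses one widely cited empirical fit (attributed to Stull, valid roughly for relative humidities of 5%-99% and temperatures of -20 °C to 50 °C); the function name is my own.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet bulb temperature (deg C) from dry bulb temperature
    (deg C) and relative humidity (%), using Stull's empirical fit."""
    t, rh = temp_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# At 20 deg C and 50% relative humidity the fit gives roughly 13.7 deg C;
# at 100% humidity the wet bulb approaches the dry bulb temperature.
print(round(wet_bulb_stull(20.0, 50.0), 1))
```

Note how the wet bulb reading always sits below the dry bulb temperature unless the air is saturated, matching the psychrometer behavior described above.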

More about Cooling Tower Water Efficiency 

Since cooling tower cells cool water by evaporation, the wet bulb temperature is the critical design variable. An evaporative cooling tower can generally provide cooling water 5°F to 7°F above the current ambient wet bulb temperature. That means that if the wet bulb temperature is 78°F, the cooling tower will most likely provide cooling water between 83°F and 85°F, no lower. The same tower cell, on a day when the wet bulb temperature is 68°F, is likely to provide cooling water between 73°F and 75°F. When selecting a cooling tower cell, use the highest wet bulb temperature in your geographical area. The highest wet bulb temperatures occur during the summer, when air temperatures and humidity are highest.
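The rule of thumb above (cold water temperature equals ambient wet bulb plus a 5°F to 7°F "approach") can be sketched in a few lines; the function name is an illustrative assumption.

```python
def expected_cold_water_range_f(wet_bulb_f: float,
                                approach_low_f: float = 5.0,
                                approach_high_f: float = 7.0) -> tuple:
    """Cold-water temperature band (deg F) an evaporative tower can typically
    deliver: ambient wet bulb plus the 5-7 deg F approach rule of thumb."""
    return (wet_bulb_f + approach_low_f, wet_bulb_f + approach_high_f)

print(expected_cold_water_range_f(78.0))  # (83.0, 85.0)
print(expected_cold_water_range_f(68.0))  # (73.0, 75.0)
```

The same function makes the design point clear: sizing against the area's highest summer wet bulb guarantees the worst-case (warmest) cooling water the tower will deliver.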

ASIC Benefits

Today, having your own ASIC is not reserved for the likes of Apple. The barriers to entry have fallen significantly, meaning companies with moderate volumes can realise the technical and economic benefits of developing custom ASIC-based solutions.

Reduced BOM

Increased performance

Lower power consumption

Added functionality

Enhanced security

Smaller size and weight

Improved IP protection

Improved yields

Reduced test and assembly costs

Improved reliability

A custom ASIC can offer significant cost, feature and development benefits compared to traditional circuit board designs, but is it the right solution for your product?  There are a number of factors to consider when evaluating the benefits of moving to an ASIC design, including:

Would your product benefit from smaller size or lower power consumption?

Are you looking to add more features?

Are you spending $2M or more per product line p.a. on components?

Can your product be copied easily?

Do you need improved data security?