The heart of a power plant is the spinning equipment (a turbine connected to a generator) that converts kinetic energy into electrical energy. Steam turbines generate most of the world's bulk electricity. Large power plants operate multiple turbines, putting thousands of megawatts on the grid, and each turbine requires additional equipment to pump water, move air and steam, and possibly move and pulverize coal. All of this equipment must work in concert to produce power. Like the weakest link in a chain, if one of these critical spinning machines fails, the whole unit can go offline.
The importance of uptime
Keeping this equipment running can mean the difference between hitting profit numbers or getting fined, incurring expense overruns, and losing money for the quarter. This is what is meant by "uptime," and it's something most of us are familiar with. We don't want to be stuck on the side of the road, so we get our car's oil changed at regular mileage intervals. We rotate our tires and keep them inflated to stay safe on the road, and we change the radiator fluid so the car doesn't overheat. Car manufacturers use technology to help us do this. Most cars now tell us when it's time for an oil change, when one of our tires is low on pressure, and when our coolant needs a refill. New sensors, more processing power in the car, and better measurement technology mean more information for us—the operators—so we can keep it running.
The same principle applies to power plants. The pumps and motors that have helped generate power for decades are now being outfitted with vibration and temperature sensors and connected to intelligent edge nodes that screen data where the asset connects to the network. That data, called Big Analog Data, is screened for information on equipment health. Unlike data from IT systems or social/human sources, this type of data is derived from the physical world, mostly through analog sensors. And it's big. A single pump motor setup can generate over 20 GB of data per day. Across an entire plant with 80 such assets, that adds up to more than 1.6 TB of data per day! This data contains insights that can help power-generation utilities make better decisions about equipment maintenance. And Internet of Things (IoT) technologies, like edge computing, analytics, and software platforms, are helping power-generation utilities mine these terabytes of Big Analog Data to improve uptime and reduce maintenance cost.
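The plant-wide figure follows directly from the per-asset numbers quoted above; a quick back-of-envelope check (using decimal units, 1 TB = 1,000 GB) confirms it:

```python
# Back-of-envelope estimate of plant-wide sensor data volume,
# using the figures from the text: 20 GB/day per asset, 80 assets.
GB_PER_ASSET_PER_DAY = 20
NUM_ASSETS = 80

daily_gb = GB_PER_ASSET_PER_DAY * NUM_ASSETS  # 1,600 GB per day
daily_tb = daily_gb / 1000                    # decimal terabytes

print(f"{daily_tb} TB per day")  # → 1.6 TB per day
```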
It all starts at the edge
Most of the data collected at the asset is mundane, but without constant measurement and processing, important information could be missed. Intelligent embedded software screens all sensor inputs against triggers set by the maintenance team or derived automatically from baseline data. Data deemed worthy is sent to a server, where it becomes available to human experts anywhere in the world working with software technology for pattern recognition, running analytics, conducting machine learning, and, probably soon, wearing AR hardware on-site.
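The screening step described above can be sketched in a few lines. This is an illustrative example only, not any specific vendor's edge software: the trigger derivation (mean plus three standard deviations of the baseline) and the variable names are assumptions chosen to show the pattern of keeping only the readings worth sending upstream.

```python
from statistics import mean, stdev

def build_trigger(baseline_readings, sigmas=3.0):
    """Derive an upper alarm limit from baseline data (mean + N sigma)."""
    return mean(baseline_readings) + sigmas * stdev(baseline_readings)

def screen(readings, trigger):
    """Return only the readings that trip the trigger (worth forwarding)."""
    return [r for r in readings if r > trigger]

# Hypothetical vibration baseline for a healthy pump motor (mm/s RMS).
baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
trigger = build_trigger(baseline)

# A batch of incoming readings containing one anomalous spike.
incoming = [1.0, 1.02, 4.8, 0.98]
print(screen(incoming, trigger))  # → [4.8] — only the spike is sent upstream
```

In practice the trigger logic would be richer (frequency-domain features, rate-of-change rules, multi-sensor correlation), but the principle is the same: the edge node discards mundane data and forwards only the exceptions.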
Profiting from Big Analog Data
The return on investment for a complete system can come from a variety of sources. As little as one prefailure find on a critical piece of equipment can pay for a system in preserved operational time or avoided penalties. Servicing equipment before it fails can mean fewer parts to replace and less costly repairs. Finally, helping maintenance experts manage more assets from anywhere in the world, with fewer truck rolls, improves worker efficiency. It ends in profit, but it starts with Big Analog Data.
This blog originally appeared on the PTC Liveworx blog. To learn more about the latest Industrial IoT applications, register for LiveWorx 18, June 17-20, in Boston.