In this edition of Voices of the Industry, Jason Carolan, Chief Innovation Officer at Flexential, explores the importance of the edge when it comes to real-time data processing.
IT innovations and the Internet of Things (IoT) are fueling data creation. Today, there are approximately 10 billion IoT devices in the world, and that number is expected to hit 75 billion by 2025. With so many connected devices, it is no wonder the amount of data created each day continues to spike upward. Recent estimates predict IoT-generated data will reach 73.1 zettabytes (ZB) by 2025—a 422% increase since 2019.
Integrating this wealth of data with artificial intelligence (AI), machine learning and powerful analytics offers seemingly endless opportunities to garner valuable insights, make data-driven decisions and produce results.
Healthcare has already widely adopted IoT technology to improve the quality of care and deliver better patient outcomes. The consumer market is following suit, representing 60% of all IoT devices in 2020.
These numbers are not surprising when you consider how integrated IoT devices are in simple, everyday activities and events. A simple walk, run or ride is frequently tracked by at least one device.
This past month, Flexential was a title sponsor for Ride the Rockies, a six-day, 418-mile cycling experience through the Rocky Mountains. This event highlighted how IoT data generation, real-time data sharing and the edge work together to deliver a safer, improved cycling experience.
During the event, most cyclists utilized one, if not more, connected devices to track the route and check vital statistics, including heart rate, hydration levels and core body temperature in real time throughout the ride.
Many participants also used Strava, a mobile application that utilizes GPS and cellular networks to create a platform for cyclists to generate and consume real-time data. During Ride the Rockies, cyclists were able to share information about their rides, assess cycling routes and connect with other riders. The real-time data also enabled heat-mapping capabilities to detail the most active paths at any given time, which can help route planners down the line.
To help businesses, municipalities, medical providers and individuals create change and improve results, the data derived from connected devices increasingly requires delivery in near real time. Consider a few examples:
- Sensors on traffic lights and vehicles help speed emergency response by providing line-of-sight and long-range insights that adjust traffic signals, creating a safe, high-speed right of way for emergency vehicles.
- Cities such as New York are outfitting cyclists with sensors that continuously measure air quality across the city.
- Data from sensors along roadways and on traffic lights help cities assess traffic patterns to improve traffic flow and plan safer school routes for cyclists and pedestrians.
- Applications like Waze use crowd-sourced data to let individuals monitor roadway conditions in real time, avoid accidents and reduce commute times.
The list goes on, but you get the point.
When Latency Matters
Yet not all data warrants the same delivery speed. The demand for real-time data—whether for a life-saving application or to enrich an experience—has altered the conversation around acceptable latency. A typical latency of 100 milliseconds (ms) may be acceptable for passive data, such as sales numbers, employee records and online photo albums. Active data, however, must be delivered in real time to remain accurate and actionable: real-time heart health metrics or glucose levels from medical devices; streaming video for live conferencing, synchronous online learning and live exercise classes; and sensor-enabled alarm systems. As we look to a future of autonomous cars and virtual reality, the importance of low-latency data delivery will only grow.
This dynamic data cannot tolerate the delivery lag of sending data to the core data center—which may be across the country—and waiting for it to return after processing and correlation. It demands latency in the sub-10ms range. In these situations, the difference between 100ms and 10ms can have life-impacting consequences, such as delaying an emergency response or a critical communication with a self-driving vehicle. Data and networks must work in conjunction with local processing to deliver the best, safest result.
This is where regional and edge data centers and a world-class network are essential to enabling real-time data processing and decision-making. Edge deployments are designed to support these ultra-low-latency requirements. By placing data closer to where it is created and consumed, an edge data center eliminates the need to send data back to a centralized facility. Instead, data can be processed quickly at the edge to minimize latency and speed results.
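The edge-versus-core tradeoff described above can be sketched as a simple latency-budget check. The site names, round-trip times and processing time below are illustrative assumptions for the sake of the example, not Flexential measurements:

```python
# Hypothetical sketch: deciding where to process a sensor reading given a
# latency budget. All figures are illustrative assumptions.

EDGE_RTT_MS = 8    # assumed round trip to a nearby edge data center
CORE_RTT_MS = 95   # assumed round trip to a distant core data center
PROCESSING_MS = 2  # assumed processing time at either site

def pick_site(latency_budget_ms):
    """Return the closest site that can respond within the budget, or None."""
    for site, rtt in (("edge", EDGE_RTT_MS), ("core", CORE_RTT_MS)):
        if rtt + PROCESSING_MS <= latency_budget_ms:
            return site
    return None  # no site can meet the budget

# Active data (e.g., a vehicle alert) with a sub-10ms budget can only be
# served at the edge; passive data with a ~100ms budget could go either way.
```

Under these assumed numbers, a 10ms budget rules out the core data center entirely, which is the essence of why latency-critical workloads are pushed to the edge.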
Flexential understands this and offers a portfolio of core and edge data centers—connected by its reliable connectivity fabric and scalable 100+ gigabit-per-second (Gbps) network backbone—to support even the most latency-sensitive applications as well as increased data security and compliance needs.
As opportunities and demand for real-time data continue to proliferate across business and consumer markets, the need for edge deployments with dynamic connectivity options will only intensify. One thing is for sure: Data generation is not subsiding, and demand for dynamic data and real-time capabilities will continue to rise in lockstep with ongoing IT innovations.
Jason Carolan leads a team focused on defining, assessing and providing direction on the changing technology landscape facing Flexential’s business and its customers. See how Flexential goes beyond the four walls of the data center to empower IT through this interactive map.