Firms Will Look To Edge Computing To Move Beyond POC Benefits

FogHorn CTO Sastry Malladi, VP of Product Management Ramya Ravichandar, and VP of Software Engineering Senthil Kumar reveal their predictions for cloud-edge hybrid strategies, autonomous operations, and edge AI-enabled sustainable solutions.

Organizations will move IoT projects from proof-of-concept to proof-of-value deployments


Ramya Ravichandar: IDC anticipates there will be over 41 billion connected Internet of Things (IoT) devices, generating over 79 zettabytes of data, by 2025. This trend will be driven by the expanding quantity and variety of streaming data channels, moving beyond audio, image, and video sensors to also include acoustic, acceleration, vibration, and others. Training data for artificial intelligence use cases and machine learning model creation also play a significant role here.


During proof-of-concept (POC) deployments in the last few years, many organizations have confirmed the benefits that IoT can bring to a wide variety of industries - and IoT spending is expected to reach $1.1 trillion by 2025, according to IDC. For example, smart city initiatives are replacing existing equipment with IoT-enabled, embedded sensors that capture a wide range of data. Armed with IoT data, cities can improve public safety and security, energy efficiency, traffic management and respond to ever-changing environmental and weather conditions.

However, one of the main challenges in the industry today is figuring out how to capture, organize, process, and deploy large amounts of complex data more effectively. Although organizations are able to implement IoT solutions and gather data, a few roadblocks still impede widespread adoption, including skill gaps and cloud costs. Indeed, only 26% of organizations that implemented IoT felt their project was successful.

In 2020 and beyond, we will see organizations move IoT and IIoT projects from proof-of-concept to full deployments with the goal to increase overall operational efficiency. To move beyond initial POC benefits, organizations will focus on innovative new opportunities, such as edge computing, to drive significant ROI, deliver enhanced operational productivity, and achieve the final proof-of-value phase.

In addition to data quantity, organizations must improve data quality to drive actionable insights

Ramya Ravichandar: While many see connectivity limitations, security risks, and data bias issues, including data quantity, as roadblocks to IoT success, data quality also plays a critical role in delivering effective IoT projects. Organizations can only make the right data-driven decisions if the data used is correct and suitable for the use case at hand. Edge computing plays an essential role in evaluating and delivering heightened data quality, as edge-enabled solutions can perform real-time analysis of disparate data streams and identify only the most valuable insights for further processing and AI training.
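The idea of identifying "only the most valuable insights" at the edge can be made concrete with a minimal sketch. The example below is purely illustrative (not FogHorn's implementation): it keeps a small rolling baseline of recent sensor readings on the device and forwards only values that deviate sharply from that baseline, so the cloud sees a fraction of the raw stream. The window size and threshold are arbitrary assumptions.

```python
from collections import deque

def edge_filter(readings, window=5, threshold=2.0):
    """Forward only readings that deviate sharply from the recent
    rolling mean; everything else stays local to the edge device."""
    recent = deque(maxlen=window)  # rolling baseline of normal values
    forwarded = []
    for value in readings:
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > threshold:
                forwarded.append(value)
                continue  # keep outliers out of the baseline
        recent.append(value)
    return forwarded

# Steady sensor stream with one spike worth sending upstream
stream = [10.0, 10.1, 9.9, 10.0, 10.2, 25.0, 10.1, 9.8]
print(edge_filter(stream))  # → [25.0]
```

In a real deployment the per-reading rule would be a trained model rather than a fixed threshold, but the data-reduction pattern is the same: analyze every raw sample locally, transmit only the exceptions.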

Looking ahead, data processing and enrichment at the edge will contribute to IoT success by identifying and addressing false and inaccurate machine learning models that lead to dangerous machine failures, declining operational productivity, and significant cost issues.

Edge-enabled solutions will power a more sustainable future

Ramya Ravichandar: In 2020, we will see an increase in edge computing deployments driving green tech use cases to minimize carbon footprint. Transport organizations will start deploying edge computing to detect abnormal regen and idling events in real-time to save billions of pounds of CO2 emissions per year. Additionally, oil and gas organizations will deploy edge technologies to monitor flare stack health to understand emissions output. Through sensor fusion technology, edge solutions will help identify issues with compressor health and alert operators about potential regulatory violations. Also, steel manufacturers will look to edge computing to save millions of tons of CO2 emissions by identifying defective parts produced in steel manufacturing as early as possible in the process to reduce scrap and increase yield.

For these organizations, edge solutions will apply real-time measurement data and machine learning models to determine product quality and directly advance sustainability initiatives.
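To make the idling-detection use case above more concrete, here is a toy sketch (not any vendor's actual logic) of how an on-vehicle edge device might flag excessive idling events from a stream of telemetry samples. The sample format, the 300-second limit, and the function name are all assumptions for illustration.

```python
def detect_idling(samples, max_idle_s=300):
    """Flag stretches where the engine runs but the vehicle is
    stationary for longer than max_idle_s seconds.

    samples: list of (timestamp_s, speed_kmh, engine_on) tuples,
    assumed to be in chronological order.
    """
    events = []
    idle_start = None
    for ts, speed, engine_on in samples:
        idling = engine_on and speed == 0
        if idling and idle_start is None:
            idle_start = ts  # idle stretch begins
        elif not idling and idle_start is not None:
            if ts - idle_start > max_idle_s:
                events.append((idle_start, ts))  # report the event
            idle_start = None
    return events

# Engine on and stationary from t=0 to t=410s, then moving again
telemetry = [(0, 0, True), (200, 0, True), (400, 0, True), (410, 30, True)]
print(detect_idling(telemetry))  # → [(0, 410)]
```

A production system would fuse additional signals (fuel rate, exhaust temperature) and run the rule continuously on the live stream, but the shape of the computation, stateful detection over time-stamped samples at the source, is the same.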

The industry will refine the definition of “edge”

Sastry Malladi: This year many industry players led conversations regarding the exact definition and various locations of the edge.  Organizations have struggled to understand the precise location of the edge when, in reality, the location is highly dynamic and varies by industry and use case. For example, telecom operators consider the edge of the telecom network the true edge (also called the service edge), whereas application developers and industrial plant operators define it as the point of data production (or the location of the asset being monitored). The telco definition of the edge also aligns with MEC (Multi-access Edge Computing).

Moreover, some solutions adopted edge terminology without considering its exact characteristics, thus introducing more confusion to the market. Weak (or fake) edge solutions lack the ability to optimally run analytics and machine learning models on the live streaming data in a constrained compute environment, a crucial requirement for deriving actionable insights in real-time. These solutions are not ‘true edge’ as they rely on the cloud for data processing, rather than processing data at the edge.

Lastly, there is confusion regarding the edge-cloud relationship. Edge is complementary to cloud, and in the industrial sector edge greatly enhances cloud adoption and value. Indeed, over the next year, edge computing leaders will continue to evolve and refine answers to questions such as: where is the edge located, what is edge computing, and why is the edge important?

Automotive manufacturers will look to edge computing to improve real-time functionalities and accelerate autonomous operations

Sastry Malladi: Cars generate significantly more data today than ever before, and it is a big challenge to gather, merge, process, and deploy all that sensor data efficiently. The future of transportation with autonomous vehicles (AV) depends on creating the required intelligence and processing to build and operate sophisticated, autonomous systems. For example, many AVs are expected to be electric cars, and these will require substantially more in-vehicle intelligence and system life cycle management. These are needed to maximize the efficiency and lifespan of battery and charging systems, as well as other systems supporting braking, motor performance, safety, passenger environment, and predictive maintenance.

While fully autonomous vehicle controls are years away, there are many existing edge computing applications now available to enhance the efficiency, reliability, and safety of commercial and public transportation. These include vehicle control and safety systems, such as cameras, driver assistance, and collision avoidance functions, that are being added to new vehicles every year.

In the year ahead, rather than relying on remote data centers for critical command and control decisions, automotive manufacturers can eliminate safety concerns and fast-track the road to autonomous driving by deploying edge-enabled systems.

Organizations will experience a shift from cloud only to cloud-edge hybrid strategies to enable Edge AI and iterative ML modeling and ongoing improvement of outcomes

Senthil Kumar: Analyzing high-fidelity, high-resolution, raw machine data in the cloud is often expensive and does not happen in real-time due to transport and ecosystem considerations. Organizations often depend on down-sampled or time-deferred data to avoid significant costs, and as a result they miss critical insights because they are only looking at incomplete datasets.

Instead, by implementing edge-first solutions, organizations can synthesize data locally, run machine learning inference on the complete raw data sets, and deliver enhanced predictive capabilities (versus cloud-heavy, expensive, retroactive insights). By running ‘edgified’ versions of ML models in real-time, organizations enable faster responses to events and the ability to act on, react to, and anticipate events of interest at the source. This ensures a harmonious interplay of edge and cloud, leveraging the strengths of each ecosystem.
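The cloud-edge division of labor described above can be sketched in a few lines. In this hypothetical example (the function names and the threshold model are assumptions, not a real API), the edge runs a lightweight model over every raw sample and forwards only the resulting inferences to the cloud, rather than shipping the raw stream:

```python
def edge_inference(stream, model, send_to_cloud):
    """Run a lightweight model on every raw sample at the edge and
    forward only noteworthy inferences, never the full raw stream."""
    for sample in stream:
        label = model(sample)
        if label != "normal":
            send_to_cloud({"sample": sample, "label": label})

# A stand-in 'edgified' model: a simple temperature threshold rule
model = lambda temp_c: "overheat" if temp_c > 90 else "normal"

sent = []  # collect what would be transmitted upstream
edge_inference([70, 85, 95, 72], model, sent.append)
print(sent)  # → [{'sample': 95, 'label': 'overheat'}]
```

Four raw readings come in, one inference goes out: the cloud retains its role for aggregation and model training, while the bandwidth-heavy, latency-sensitive work stays at the source.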

Indeed, in the next few years, more than 40% of organizations' cloud deployments will include edge computing to address bandwidth bottlenecks, reduce latency, and process data for mission-critical decision support in real-time. These edge-powered, IIoT projects will extract a realistic view of daily machine operations and work towards a new level of predictability that will dramatically alter the industry landscape as we know it. In short, in 2020, cloud-dominated solutions will adopt a more edge-first, or cloud-edge hybrid, approach to drive significant business value.

Organizations will look beyond edge computing to edge AI solutions to deliver optimal ROI

Senthil Kumar: When organizations build ML models, an assumption is made that the model will remain accurate for a certain period of time, because it has been trained on a particular set of data. If new data patterns emerge, or if the model has not been trained on all possible data sets or workflows, the model may stop producing accurate results. By employing edge AI, the models can be continuously refreshed with new, meaningful data and their training sets kept up to date.

For example, in a factory, a model can be deployed to detect defects on a part inspection assembly line or proactively identify patterns that may lead to defects after a period of time. Often, after a few months, the model’s accuracy may diminish due to new data patterns. This can be misleading, and the opportunity cost can be significant if the software uses traditional analytics exclusively.
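The accuracy decay described above can be watched for directly at the edge. The sketch below is a minimal, hypothetical drift monitor (window size and accuracy floor are arbitrary assumptions): it tracks the model's rolling accuracy against ground-truth labels as they arrive, and reports the point at which accuracy first falls below an acceptable floor, signaling that retraining is due.

```python
from collections import deque

def drift_monitor(predictions, labels, window=4, min_acc=0.75):
    """Track rolling accuracy of an edge model against incoming
    ground-truth labels; return the index at which accuracy first
    drops below min_acc (drift suspected), or None if it never does."""
    recent = deque(maxlen=window)  # rolling record of hits/misses
    for i, (pred, label) in enumerate(zip(predictions, labels)):
        recent.append(pred == label)
        if len(recent) == window and sum(recent) / window < min_acc:
            return i  # trigger model refresh / retraining here
    return None

# Model is accurate at first, then new data patterns emerge
preds  = [1, 1, 0, 1, 0, 0, 0]
truth  = [1, 1, 0, 1, 1, 1, 1]
print(drift_monitor(preds, truth))  # → 5
```

In practice the "labels" might come from delayed quality-inspection results rather than instant feedback, but the principle holds: the edge can detect its own model going stale instead of silently reporting misleading results.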

Using the power of artificial intelligence (AI) at the edge and self-learning models, in 2020 ML models can move beyond traditional analytics capabilities and significantly improve predictive functionality and overall ROI. With edge AI, software can proactively interface with live data streams and deliver intelligence at or near the source, leading to increased overall productivity, efficiency, and cost savings.