Why the Edge is Key to Unlocking IoT’s Full Potential

To IoT’s great benefit, edge computing is about to take the spotlight. Billions of devices connected to the Internet of Things are coming online, and as they do, they generate mountains of information. One estimate predicts the amount of data will soar to 79.4 zettabytes within five years. Imagine storing nearly 80 zettabytes on DVDs: all those DVDs would circle the Earth more than 100 times.

In other words, a whole lot of data.

Indeed, thanks to the IoT, a dramatic shift is underway: more enterprise-generated data is being created and processed outside of traditional, centralized data centers and clouds. Unless we make a course correction, our infrastructure won’t keep pace with those forecasts. We must make better use of edge computing to deal more effectively with this ocean of data.

Network Latency

If we do this right, our infrastructure should be able to handle this data flow in a way that maximizes efficiency and security. The system would let organizations benefit from instantaneous response times. It would allow them to use the new data at their disposal to make smarter decisions and — most importantly — make them in real-time.

That’s not what we have nowadays.

In fact, when IoT devices ship their data back to the cloud for processing, transmissions are both slow and expensive. Too few devices are taking advantage of the edge.

Traffic Jam: The Cloud

Instead, many route data to the cloud. In that case, you’re going to encounter network latency of around 25 milliseconds, and that’s in best-case scenarios; often, the lag is far worse. If you have to feed data through a server network and the cloud to get anything done, that takes a long time and a ton of bandwidth.

An IP network can’t guarantee delivery in any particular time frame. Minutes might pass before you realize that something has gone wrong. At that point, you’re at the mercy of the system.

Data Hoarding 

Until now, technologists have approached Big Data from the perspective that collecting and storing tons of it is a good thing. No surprise, given that the cloud computing model is heavily oriented toward large data sets.

The default behavior is to want to keep all that data. But think about how you collect and store all that information. There is simply too much data to push it all around the cloud. So why not work at the edge instead?

Cameras Drive Tons of Data – Not All of Which We Need

Consider, for example, what happens to the imagery collected by the millions of cameras in public and private spaces. Where does all that footage need to go? In many, and perhaps most, instances, we don’t need to store those images in the cloud.

Let’s say that you measure ambient temperature with a sensor that produces a reading once a second. The temperature in a house or office doesn’t usually change on a second-by-second basis. So why keep every reading? And why spend all the money to move it somewhere else?
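
As a minimal sketch of that idea, here is how an edge node might apply a simple deadband filter so it forwards a reading only when the value changes meaningfully. The `forward_to_cloud()` uplink is hypothetical; a real deployment would swap in its own transport.

```python
# Deadband filter at the edge: forward a temperature reading only when it
# differs from the last forwarded value by more than a threshold.
DEADBAND_C = 0.5  # ignore changes smaller than half a degree

def forward_to_cloud(reading: float) -> None:
    print(f"uplink: {reading:.2f} C")  # hypothetical stand-in for a real network call

class EdgeFilter:
    def __init__(self, deadband: float = DEADBAND_C):
        self.deadband = deadband
        self.last_sent: float | None = None

    def on_reading(self, celsius: float) -> None:
        # First reading, or a change bigger than the deadband: send it.
        if self.last_sent is None or abs(celsius - self.last_sent) > self.deadband:
            forward_to_cloud(celsius)
            self.last_sent = celsius
        # Otherwise drop the sample locally; nothing leaves the edge.

f = EdgeFilter()
for sample in [21.0, 21.1, 21.0, 22.0, 22.1, 19.5]:
    f.on_reading(sample)  # only 21.0, 22.0, and 19.5 are forwarded
```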

Obviously, there are cases where it will be practical and valuable to store massive amounts of data. A manufacturer might want to retain all the data it collects to tune plant processes. But in the majority of instances where organizations collect tons of data, they actually need very little of it. And that’s where the edge comes in handy.

Use the Edge to Avoid Costly Cloud Bills

The edge also can save you tons of money. We used to work with a company that collected consumption data for power management sites and office buildings. They kept all that data in the cloud. That worked well until they got a bill for hundreds of thousands of dollars from Amazon.

Edge computing and the broader concept of distributed architecture offer a far better solution.

Edge Helps IoT Flourish in the Era of Big Data

Some people treat the edge as if it were a foreign, mystical environment. It’s not.

Think of the edge as a commodity compute resource that happens to sit relatively close to the IoT and its devices. Its usefulness comes precisely from its being a “commodity” resource rather than some specialized one. In practice, that most likely means a resource that supports containerized applications, which hide the specific details of the edge environment.

The Edge Environment and Its Benefits

In that sort of edge environment, we can easily imagine a distributed systems architecture where some parts of the system are deployed to the edge. At the edge, they can provide real-time, local data analysis.

Systems architects can dynamically decide which components of the system should run at the edge, while other components remain deployed in regional or centralized processing locations. Configuring placement dynamically lets the same system be optimized for edge environments with different topologies.
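
A minimal sketch of that placement decision, with hypothetical component names and a made-up capability flag, might look like this:

```python
# Hypothetical placement logic: decide, per component, whether it runs at
# the edge or in a regional/central location, given the target topology.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    needs_realtime: bool  # must react locally, in real time
    needs_gpu: bool       # needs accelerator hardware to run

@dataclass
class Topology:
    edge_has_gpu: bool    # capability flag for a particular edge site

def place(component: Component, topo: Topology) -> str:
    # Real-time components go to the edge when the site can support them;
    # everything else stays regional, where storage and compute are cheap.
    if component.needs_realtime and (topo.edge_has_gpu or not component.needs_gpu):
        return "edge"
    return "regional"

system = [
    Component("stream-analyzer", needs_realtime=True, needs_gpu=True),
    Component("historical-archive", needs_realtime=False, needs_gpu=False),
]
for c in system:
    print(c.name, "->", place(c, Topology(edge_has_gpu=True)))
# stream-analyzer -> edge
# historical-archive -> regional
```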

With this kind of edge environment, we can expect lower latencies. We also achieve better security and privacy with local processing.

Some of this is already being done on a one-off basis, but it hasn’t yet been systematized. That means organizations must figure it out on their own, assuming the role of a systems integrator. Rather than shoulder that burden piecemeal, they should embrace the edge and help make IoT hum.


Why it’s Time to Move to an Event Driven Architecture

Real-time and IoT have modernized application development. But “the laws of physics still apply.” As a guest speaker early in my career, I’d tell audiences that the fundamental insights they gained from their traditional application development experiences still apply to modern application development. Here is why it’s time to move to an event-driven architecture.

Development experiences teach valuable lessons.

Some 25 years since I first gave that presentation, I still believe that development experience teaches valuable lessons. For instance, we should know that databases don’t run any faster in an application for the Internet of Things (IoT) than they run in the typical customer service application built using traditional methods.

Yet I still see too many instances where IoT developers ignore the limits of traditional databases. These databases cannot handle the enormous demands of analyzing massive amounts of data, yet developers wind up trying to build applications on them that require thousands of updates a second. They should know from the get-go that it’s not going to work.

In the IoT world, solutions depend on streaming data.

Solutions depend on streaming data. But most application developers still do not have a good grasp of the best way to process that data. They usually go with: “I get some data. I stick it in the database and then I go run queries.”

The process of sticking the data in the database and running queries works when you’re building traditional applications for transaction processing or business intelligence. Those workloads involve moderate data rates and no need for real-time responses.
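
For context, that traditional insert-then-query pattern is easy to sketch with an embedded database, and it is perfectly serviceable at moderate rates:

```python
# The traditional pattern: insert rows, then query at your convenience.
# Fine at moderate data rates; it breaks down at tens of thousands of
# events per second.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, ts REAL, celsius REAL)")

# Ingest: one INSERT per reading.
rows = [("s1", 0.0, 21.0), ("s1", 1.0, 21.2), ("s2", 0.0, 19.8)]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
conn.commit()

# Later, whenever it suits you: query what turned out to be interesting.
for sensor, avg in conn.execute(
        "SELECT sensor, AVG(celsius) FROM readings GROUP BY sensor"):
    print(sensor, round(avg, 2))  # s1 21.1, s2 19.8
```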

But that’s not going to work when you have massive streams of data coming in each second that need immediate analysis.

For instance, ask a developer about the speed of their database and they may tell you it can do 5,000 updates a second. So why then are they trying to build an IoT application that must perform 50,000 updates a second? It won’t work. They should already know that from experience.

Let’s step back for a moment to understand why this happens.

Real-Time Applications and the Database

For decades, databases have been used to store information. Once the data was there, you could always return at your convenience and query the database further to determine what was of interest.

But with the advent of real-time systems, databases are an albatross. The entire point of real-time systems is to analyze and react to an event in the moment. If you can’t analyze the data in real-time, you’re severely constrained — particularly with security or safety applications.

Most application developers are more accustomed to situations where they input data into a database and then run their queries. But the input/run model doesn’t work when the applications stream tons of data per second that require an immediate response.

A further challenge: How to display real-time data in some sort of a dashboard.

The standard approach is to run queries against the database to get the data. But you burn through resources when you try to display real-time information by running big queries against a high-volume database every second.
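
One alternative, sketched below with made-up names rather than any particular product, is to maintain a running aggregate in memory and let the dashboard read that snapshot instead of hammering the database once a second:

```python
# Keep a rolling in-memory aggregate that a dashboard can read instantly,
# instead of issuing a heavy database query once a second.
import threading

class LiveStats:
    """Running count/min/max/mean over a stream; O(1) work per event."""
    def __init__(self):
        self._lock = threading.Lock()
        self.count = 0
        self.total = 0.0
        self.minimum = float("inf")
        self.maximum = float("-inf")

    def record(self, value: float) -> None:
        with self._lock:  # events may arrive from many ingest threads
            self.count += 1
            self.total += value
            self.minimum = min(self.minimum, value)
            self.maximum = max(self.maximum, value)

    def snapshot(self) -> dict:
        with self._lock:  # what the dashboard endpoint would serve
            mean = self.total / self.count if self.count else 0.0
            return {"count": self.count, "mean": mean,
                    "min": self.minimum, "max": self.maximum}

stats = LiveStats()
for reading in [21.0, 22.4, 20.9]:
    stats.record(reading)
print(stats.snapshot())  # count=3, mean≈21.43, min=20.9, max=22.4
```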

Except for a handful of specialists steeped in this technology, most of us aren’t prepared to handle high volumes of streaming data.

Consider a sensor tracking ambient temperature that generates a new reading once every second. Ambient temperatures don’t change that rapidly, so a few such sensors may be manageable. Now imagine the massive amount of data generated by 10,000 sensors reporting simultaneously.

Similarly, consider the example of a power company gathering billions of data points that get fed directly into a database. It’s just not possible to dump all of that data into a system at one time and expect to process everything instantly. You can’t update a database 100,000 times a second.

Nor is it cost-effective or efficient to throw all this data into a database at once and then do nothing for a day until the next batch arrives.

Imagine the hardware you’d need to handle the spike. The situation invites trouble. In fact, most developers haven’t ever built these kinds of applications before. And when they do try, they’re likely to encounter errors or get frustrated by slow speeds.

Handling spikes like these requires finding ways to process the data in memory rather than trying to do it all in the database.
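
As a rough sketch of that in-memory approach, events can be aggregated over a short window so the database sees one summary row per window instead of every raw event. The `db_write()` sink below is hypothetical:

```python
# In-memory windowed aggregation: absorb a high event rate in memory and
# write one summary row per window, instead of one database row per event.
from collections import defaultdict

WINDOW_SECONDS = 10

def db_write(row: dict) -> None:
    print("db insert:", row)  # hypothetical stand-in for a real INSERT

class WindowAggregator:
    def __init__(self):
        self.windows = defaultdict(lambda: {"count": 0, "total": 0.0})

    def on_event(self, sensor_id: str, ts: float, value: float) -> None:
        # Bucket the event by sensor and window start time; O(1) memory per
        # open window, and no database work on the hot path.
        window_start = int(ts // WINDOW_SECONDS) * WINDOW_SECONDS
        w = self.windows[(sensor_id, window_start)]
        w["count"] += 1
        w["total"] += value

    def flush(self) -> None:
        # One write per (sensor, window) pair, e.g. triggered on a timer.
        for (sensor_id, start), w in self.windows.items():
            db_write({"sensor": sensor_id, "window_start": start,
                      "count": w["count"], "mean": w["total"] / w["count"]})
        self.windows.clear()

agg = WindowAggregator()
for i in range(100):                 # 100 raw events...
    agg.on_event("s1", ts=i * 0.1, value=20 + i * 0.01)
agg.flush()                          # ...become a single summary row
```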

New Times, New Development Model

These spikes, and the hardware they would demand, explain why we’re still struggling to put in place a workable, scalable architecture that can support the promise of IoT.

Think about the challenges that municipalities encounter trying to manage “smart roads.” If you’re going to avoid accidents, you need data instantaneously. But when the data streams that measure traffic are slow to arrive at central headquarters, that’s a big roadblock (pardon the pun).

What about systems based on event-driven architecture?

With the adoption of systems based on an event-driven architecture (EDA), that future need not happen. While EDA is relatively new, many industries already use this approach.

It’s common on assembly lines and in financial transactions, where operations would suffer from delays in getting crucial data for decision-making.

Until now, the software development model has relied on storing large volumes of information in databases for subsequent processing and analysis. But with EDA apps, systems analyze data as events occur across a distributed event mesh.
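
To make the contrast concrete, here is a minimal, self-contained sketch of the event-driven style, using a simple in-process event bus as a stand-in for a distributed event mesh:

```python
# Minimal event-driven sketch: handlers react to events as they arrive,
# instead of the data sitting in a database waiting to be queried.
from collections import defaultdict
from typing import Callable

class EventBus:
    """In-process stand-in for a distributed event mesh."""
    def __init__(self):
        self.handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.handlers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscriber reacts the moment the event occurs.
        for handler in self.handlers[topic]:
            handler(event)

bus = EventBus()

def alert_on_overheat(event: dict) -> None:
    if event["celsius"] > 80.0:
        print(f"ALERT: sensor {event['sensor']} at {event['celsius']} C")

bus.subscribe("temperature", alert_on_overheat)
bus.publish("temperature", {"sensor": "s1", "celsius": 75.0})  # no reaction
bus.publish("temperature", {"sensor": "s1", "celsius": 85.0})  # fires alert
```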

Crucial data, delivered.

In these scenarios, the processing and analysis of data moves closer to, or even onto, the sensors and devices that actually generate the data.

High-volume data must be analyzed in memory to achieve the rapid response times required. The upshot: applications that act in real-time and respond to tens of thousands, or even millions, of events per second when required.

Instead of relying upon traditional database-centric techniques, we must apply an event-driven architecture.

When we apply an event-driven architecture, data can be analyzed by real-time systems, and we can process high-volume event streams faster and more efficiently than traditional databases can.

Rarely have the contours of where technology is heading been clearer.
