
Why the Edge is Key to Unlocking IoT’s Full Potential


To IoT’s great benefit, edge computing is about to take the spotlight. Billions of devices connected to the Internet of Things are online, with more coming online each day, and they generate mountains of information. One estimate predicts the amount of data will soar to 79.4 zettabytes within five years. Imagine storing 80 zettabytes on DVDs: all those DVDs would circle the Earth more than 100 times.

In other words, a whole lot of data.

Indeed, thanks to the IoT, a dramatic shift is underway. More enterprise-generated data is being created and processed outside of traditional, centralized data centers and clouds. And unless we make a course correction, our infrastructure will struggle to keep up. We must make better use of edge computing to deal more effectively with this ocean of data.

Network Latency

If we do this right, our infrastructure should be able to handle this data flow in a way that maximizes efficiency and security. The system would let organizations benefit from instantaneous response times. It would allow them to use the new data at their disposal to make smarter decisions and — most importantly — make them in real time.

That’s not what we have nowadays.

In fact, when IoT devices ship their data back to the cloud for processing, transmissions are both slow and expensive. Too few devices are taking advantage of the edge.

Traffic Jam: The Cloud

Instead, many devices route their data to the cloud. In that case, you’re going to encounter network latency of around 25 milliseconds, and that’s in best-case scenarios. Often, the lag time is a lot worse. If you have to feed data through a server network and the cloud to get anything done, that’s going to take a long time and a ton of bandwidth.

An IP network can’t guarantee delivery in any particular time frame. Minutes might pass before you realize that something has gone wrong. At that point, you’re at the mercy of the system.
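To put the 25-millisecond figure in perspective, here is a rough back-of-the-envelope sketch (all numbers are illustrative, not measured) of how quickly cloud round trips eat into a real-time control budget:

```python
# Illustrative latency budget for a control loop that must react within 50 ms.
BUDGET_MS = 50
CLOUD_RTT_MS = 25   # best-case cloud round trip cited above
EDGE_RTT_MS = 2     # assumed LAN hop to a nearby edge node
PROCESSING_MS = 10  # assumed time to actually analyze the data

def loops_possible(budget_ms, rtt_ms, processing_ms):
    """How many sense-analyze-act cycles fit inside the budget?"""
    return budget_ms // (rtt_ms + processing_ms)

print(loops_possible(BUDGET_MS, CLOUD_RTT_MS, PROCESSING_MS))  # 1
print(loops_possible(BUDGET_MS, EDGE_RTT_MS, PROCESSING_MS))   # 4
```

Even under best-case assumptions, the cloud path leaves room for a single reaction per budget window, while the edge path fits several.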

Data Hoarding 

Until now, technologists have approached Big Data from the perspective that collecting and storing tons of it is a good thing. No surprise, given that the cloud computing model is heavily oriented toward large data sets.

The default behavior is to want to keep all that data. But think about how you collect and store all that information. There is simply too much data to push it all around the cloud. So why not work at the edge instead?

Cameras Drive Tons of Data – Not All of Which We Need

Consider, for example, the imagery collected by the millions of cameras in public and private spaces. Does all of that footage really need to travel anywhere? In many, and perhaps most, instances, we don’t need to store those images in the cloud.

Let’s say that you measure ambient temperature with a sensor that produces a reading once a second. The temperature in a house or office doesn’t usually change on a second-by-second basis. So why keep every reading? And why spend all the money to move it somewhere else?
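A minimal sketch of that idea, assuming a hypothetical stream of once-per-second readings: an edge node can apply a simple deadband filter and forward only values that change meaningfully.

```python
def deadband_filter(readings, threshold=0.5):
    """Yield only readings that differ from the last forwarded value
    by at least `threshold` degrees; everything else stays at the edge."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= threshold:
            last_sent = value
            yield value

# A stable room produces very few uploads out of many raw readings.
readings = [21.0, 21.1, 21.0, 21.6, 21.7, 22.3, 22.3]
print(list(deadband_filter(readings)))  # [21.0, 21.6, 22.3]
```

Seven raw readings collapse to three forwarded values; over a day of per-second data, the savings compound dramatically.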

Obviously, there are cases where it will be practical and valuable to store massive amounts of data. A manufacturer might want to retain all the data it collects to tune plant processes. But in the majority of instances where organizations collect tons of data, they actually need very little of it. And that’s where the edge comes in handy.

Use the Edge to Avoid Costly Cloud Bills

The edge also can save you tons of money. We used to work with a company that collected consumption data for power management sites and office buildings. They kept all that data in the cloud. That worked well until they got a bill for hundreds of thousands of dollars from Amazon.

Edge computing and the broader concept of distributed architecture offer a far better solution.

Edge Helps IoT Flourish in the Era of Big Data

Some people treat the edge as if it were a foreign, mystical environment. It’s not.

Think of the edge as a commodity compute resource, one located relatively close to the IoT and its devices. Its usefulness comes precisely from its being a “commodity” resource rather than some specialized compute resource. In practice, that most likely means a resource that supports containerized applications, which hide the specific details of the edge environment.

The Edge Environment and Its Benefits

In that sort of edge environment, we can easily imagine a distributed systems architecture where some parts of the system are deployed to the edge. At the edge, they can provide real-time, local data analysis.

Systems architects can dynamically decide which components of the system should run at the edge. Other components would remain deployed in regional or centralized processing locations. Dynamic configuration lets the system be optimized for edge environments with different topologies.
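One way to picture this kind of dynamic placement decision is a short sketch; the component names, resource figures, and thresholds below are all illustrative, not any particular orchestrator’s API.

```python
# Decide where each component runs based on its latency needs and the
# resources the edge site actually has. All names and numbers are made up.
COMPONENTS = {
    "anomaly-detector": {"max_latency_ms": 10,   "cpu_cores": 2},
    "dashboard":        {"max_latency_ms": 500,  "cpu_cores": 1},
    "ml-training":      {"max_latency_ms": 5000, "cpu_cores": 16},
}

def place(components, edge_cores_free, cloud_rtt_ms=50):
    """Run a component at the edge only if it both needs edge-level
    latency and fits within the edge site's spare capacity."""
    placement = {}
    for name, req in components.items():
        needs_edge = req["max_latency_ms"] < cloud_rtt_ms
        fits_edge = req["cpu_cores"] <= edge_cores_free
        if needs_edge and fits_edge:
            placement[name] = "edge"
            edge_cores_free -= req["cpu_cores"]
        else:
            placement[name] = "cloud"
    return placement

print(place(COMPONENTS, edge_cores_free=4))
# the anomaly detector lands at the edge; the rest go to the cloud
```

The same logic, re-run with a different topology (more edge cores, a slower backhaul), yields a different placement without changing any component.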

With this kind of edge environment, we can expect lower latencies. We also achieve better security and privacy with local processing.

Some of this is already being done on a one-off basis, but it hasn’t yet been systematized. That leaves organizations to figure it out on their own, each assuming the role of a systems integrator. Instead, the industry must embrace the edge and help make IoT hum.

The post Why the Edge is Key to Unlocking IoT’s Full Potential appeared first on ReadWrite.


Fog Computing and Its Role in the Internet of Things

Fog computing refers to a decentralized computing structure. The resources, including the data and applications, get placed in logical locations between the data source and the cloud. One of the advantages of fog computing is that it keeps many users connected to the internet at the same time. In essence, it offers the same network and services that cloud-based solutions provide, but with the added security of a decentralized network.

Difference Between Cloud Computing and Fog Computing

Cloud Computing

Cloud computing refers to the provision of geographically distributed computing and storage resources. Computing can occur over a variety of platforms, including public cloud and private cloud.

Cloud-computing platforms offer the opportunity to share and mix workloads among users over a scalable system. Cloud computing is essentially the ability to store and retrieve data from an off-site location.

Cloud computing is one of the main reasons conventional phones got “smart.” Phones don’t have sufficient built-in space to store the data necessary to access apps and services. All the data is transmitted from and to the cloud to provide the services we need. Still, cloud computing technology has a challenge: bandwidth constraints.

Fog Computing

Fog computing is poised to dominate the industry in the near future. Its rise will be driven by the need to gather and process data closer to its source (the user device). Devices often cannot wait on round trips to the cloud for processing, and the devices themselves are physically constrained (low power and small size).

The ability to process data locally matters more than ever, in part because local processing increases the data’s security. With the evolution of the Internet of Things, more and more devices are being added to the network, each wirelessly connected for data transmission and reception.

Fog computing is about how efficiently data is stored and accessed. It refers to the networking of edge computing nodes dispersed across a network, so that they can be geographically distributed while still communicating with one another in an organized way.

The use of fog computing involves a complex process of interconnected edge devices. The edge devices include sensors, storage systems, and networking infrastructure that work together to capture and distribute data.

However, the flexibility of fog computing and its ability to gather and process data from both the centralized cloud and the edge devices of a network make it one of the most useful ways of dealing with the information overload we face today.

Image Credit: nikhomk panumas; Pexels

Are Fog Computing and Edge Computing the Same Thing?

Fog computing is often referred to as “edge computing.” Edge computing is designed to solve issues by storing data closer to the “ground.” In other words, edge computing stores data in storage devices and local computers, rather than running all of it through a centralized data center in the cloud.

In essence, fog computing is responsible for allowing fast response times, reducing network latency and traffic, and supporting backbone bandwidth savings in order to achieve better quality of service (QoS). It is also intended to transmit only relevant data to the cloud.

IDC estimates that about 45 percent of the world’s data will be moved closer to the network edge by the end of 2025. Fog computing is claimed to be the only technology that will be able to keep pace with artificial intelligence, 5G, and IoT in the coming years.

Another IDC study predicted that edge devices would generate 10 percent of the world’s data in 2020 alone. Edge devices will fuel the need for more effective fog computing solutions, resulting in reduced latency.

Edge Computing

Edge computing is, basically, a subset of fog computing. It refers to data being processed close to where it originated. Processing data near its source makes that processing more effective and reduces the possibility of latency.

Consider fog computing as the way data is handled from where it is generated to where it is stored. Edge computing refers only to processing data close to where it is generated. Fog computing encapsulates both the edge processing and the network connections required to transfer the data from the edge to its destination.

With edge computing, IoT devices are connected to devices such as programmable automation controllers, which handle data processing, communication, and other tasks. With fog computing, data is transferred from endpoints to a gateway, then on to processing resources, with results transmitted back. This geographically distributed infrastructure is aligned with cloud services to enable data analytics with minimal latency.
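The endpoint-to-gateway flow described above can be sketched as a gateway that aggregates raw sensor messages locally and forwards only compact summaries upstream. This is illustrative code under assumed batch sizes, not any specific fog framework.

```python
from statistics import mean

class FogGateway:
    """Collects raw readings from local endpoints and forwards only a
    compact summary upstream, keeping bulk traffic off the backbone."""

    def __init__(self, batch_size=60):
        self.batch_size = batch_size
        self.buffer = []
        self.uploaded = []  # stands in for the cloud uplink

    def ingest(self, reading):
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            summary = {
                "count": len(self.buffer),
                "min": min(self.buffer),
                "max": max(self.buffer),
                "mean": round(mean(self.buffer), 2),
            }
            self.uploaded.append(summary)  # one message instead of many
            self.buffer = []

gw = FogGateway(batch_size=3)
for r in [20.0, 21.0, 22.0, 30.0, 31.0, 32.0]:
    gw.ingest(r)
print(gw.uploaded)
```

Six raw readings become two summary messages; with a realistic batch size of 60, the uplink carries roughly 1/60th of the raw traffic.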

Both fog and edge computing help turn data into actionable insights more quickly, so that users can make faster and better-informed decisions. Both also allow companies to use bandwidth more effectively while enhancing security and addressing privacy concerns. Since fog nodes can be installed anywhere there’s a network connection, fog computing is growing in popularity in industrial IoT applications.

The Role of Fog Computing in IoT

When a device or application generates or collects huge amounts of information, data storage becomes increasingly complex and expensive. When handling this data, network bandwidth also becomes expensive, requiring large data centers to store and share the information.

Fog computing has emerged as an alternative to the traditional method of handling data. Fog computing gathers and distributes resources and services of computing, storage, and network connectivity. It significantly reduces energy consumption, minimizes storage and processing overhead, and maximizes the data’s utility and performance.

The “Smart City”

Let’s take a smart city as an example. Data centers are not built to handle the demands of smart city applications. The ever-increasing amount of data transmitted, stored, and accessed from all IoT devices in a city will require a new kind of infrastructure to handle this volume. It is these applications that need fog computing to deliver the full value that IoT will bring to them.

Utilities

Water utilities, hospitals, law enforcement, transportation, and emergency management applications in smart cities need the latest data and technology to deliver information and services to support their operations.

Information about water leakages, carbon emissions, potholes, or damage can be used to update billing information, improve operations, save lives, and increase efficiencies. The benefits of capturing and analyzing this data can be directly applied to smart city applications.

Fog computing is not a destination in itself. Instead, fog is a method for deploying Internet of Things networks where they provide the best return on investment.

Benefits of Using Fog Computing

Fog computing can be used in applications that deal with large volumes of data, network transactions, and fast processing. The benefits of using fog computing include real-time, hybrid, and autonomous data centers that improve operational efficiency and security. Additionally, fog computing can help ensure your systems stay available and optimized without the need to invest in power, data center security, and reliability.

Fog computing reduces overhead costs by distributing computing resources across many nodes. The location of the fog nodes is chosen based on their availability, efficiency, and use. It also reduces the load on organizations’ data centers. The reduction in data traffic is another major advantage of fog computing.

Many companies are using fog computing to deploy software applications distributed in many places. Companies deploy many systems over a network to achieve better efficiency and reachability.

Fundamentally, fog computing gives organizations more flexibility to process data wherever it is most necessary to do so. For some applications, data processing should be as quick as possible: in manufacturing, for instance, connected machines should respond to an accident as soon as possible.

Fog computing can also provide companies with an easy way to know what their customers or employees are up to in real-time. With the implementation of fog computing, companies can expect to take on new opportunities and increase their profit with IoT technology. But more than that, this technology has the potential to save a lot of money for governments, companies, and even individual users.

Bottom Line

As cloud technologies continue to penetrate into the enterprise environment, fog computing usage will also continue to increase. Cloud computing distributes computing workloads through an elastic computing infrastructure, enabling the real-time processing of data in the cloud.

Edge computing is a major focus area of the IoT fog computing segment. Edge computing deploys computing resources at the edge of the network, outside of the cloud. Data can then be captured and analyzed at the network’s edge, with results sent back as needed, allowing for real-time processing.

Fog computing solutions will enable companies to implement real-time computing in the Internet of Things. As a result, the IoT fog computing market will be a major contributor to the cloud computing market.

Image Credit: riccardo bertolo; pexels

The post Fog Computing and Its Role in the Internet of Things appeared first on ReadWrite.


Smart Technology at The Edge: Five Reasons this Transformation is Here to Stay


The COVID-19 pandemic has sped up much of the technology change that was already underway. People have found themselves working and learning at home, which is usually at “the edge” of computing networks operated by their employers and schools.

While at home, many employees are also substantially increasing their consumption of entertainment content. The changes in working conditions and the transformation that comes with it will continue, complemented by innovations in AI, processing, and connectivity, empowering new capabilities and services at such edges.

What is an edge network?

In a sentence: the “edges” of networks will continue to evolve into collections of devices that work faster, and more often locally, rather than relying on cloud connectivity. Relying less on constant connectivity will also make devices safer, more reliable, and more energy-efficient.

The transformation has immense implications for the world, some of which we’ve already witnessed.

Here are five reasons smart technology at the edge is here to stay.

First, technology at the edge gives us participation.

Technology has proven itself to be an indispensable tool for getting people together at times and in ways that may once have seemed impossible.

Work teams and families no longer have to physically share the same room to exchange their thoughts and feelings. Even more profoundly, people who were once locked out of the job market because they couldn’t physically show up for a 9-to-5 job can use tech to work where and when it’s best for them.

Access to this wider range of people and their skills has brought, and will continue to bring, many new people into the workforce, such as those who spend much of their time caring for children or the elderly, or those for whom a commute to work is not an option.

Second, it enhances productivity.

Stay-at-home orders have shown people just how much time they spent commuting to work, time they can now put to more productive use.

The time once lost to being “in transit” can be put to productive use thanks to edge technologies, which replicate many of the functions (access to data and one another, for instance) that were once available only at work sites.

It also can deliver greater productivity without a commensurate increase in harmful emissions, not to mention the emotional wear-and-tear that comes with sitting in traffic.

Third, edge computing is more protective of our privacy.

Privacy is better protected when more of the data collected on our devices is processed and used locally; this also reduces the risk of hackers accessing personal data.

Additionally, data distributed this way, in homes and offices rather than collected on giant shared servers or other platforms, is a less attractive target for criminals.

The effort required to hack a single home will likely outweigh the benefit of whatever might be uncovered. Further, there are significant security protocols (both hardware and software) that make these events very unlikely.

Fourth, it provides us with safe personal spaces.

It has been amazing how many people have found refuge in driving their cars during the pandemic (as “safe spaces” to be while experiencing the outside world).

Treating physical movement from Point A to Point B as useful personal or professional time creates a greater need for automation and security.

People are treating these spaces and “moments” as safe personal time, and the desire to use them productively creates greater urgency for offloading active driving tasks to smart technologies such as advanced driver-assistance systems (ADAS).

The purposes for our use of vehicles may well be changing, as should our expectations for them. We should be able to expect that our time in our vehicles is safe from both viruses and traffic incidents, just as our time at home is both personal and protected.

Finally, the edge creates more possibilities.

Putting computing intelligence and faster processing at the point where data is sensed and collected suggests a variety of new uses, such as smarter security and environmental controls or, in vehicles, more tools to make driving safer.

Doing so more securely makes it possible to consider novel applications for user data, especially in the areas of Machine Learning and lifestyle services.

Imagine digital home assistants that really and truly understood the needs and expectations of their users. The possibilities are endless.

Conclusion

As the pandemic has focused all aspects of our lives into our homes, the edge has broken down barriers that previously segmented or limited what we could experience in our homes.

Now we can do things faster, more conveniently, more securely, and therefore more often than ever before. Experiencing smart technology at the edge has only begun to show us the possibilities of a world that anticipates and automates our needs.

Image Credit: fauxels; Pexels

The post Smart Technology at The Edge: Five Reasons this Transformation is Here to Stay appeared first on ReadWrite.


Why IoT Needs an Open Ecosystem to Succeed


Imagine if the internet had been built as a closed ecosystem controlled by a small set of organizations. It would look very different from the internet we know and rely on today. Perhaps this alternate version would run on a pay-per-use model, or lack tools and services that have been developed over the years by independent contributors and scrappy startups. Here is why IoT needs an open ecosystem to succeed.

The Open Internet

Instead of a closed internet, we mostly enjoy an open one. This is in part due to its origins: the internet was built to be fundamentally open, and this is what has allowed it to grow, change, and be adopted as quickly as it has been. In fact, the trend of an open approach propelling innovation is one that we see repeatedly with emerging technologies.

When it comes to the Internet of Things (IoT), we’re on the cusp of an innovation boom similar to the one we witnessed with the internet.

IoT is slated for explosive growth: by 2021, Gartner expects that 25 billion connected things will be in use, enabling our smart homes, factories, vehicles, and more.

As more and more IoT devices come online, edge computing will become a necessity. Edge computing enables data to be processed and analyzed in real-time for business-critical use cases, such as self-driving cars, safety and security, and industrial automation.

As with the internet, we need an open, consistent infrastructure foundation for IoT and edge computing in order for these technologies to reach their full potential. While the challenges of building an open IoT are different from those we faced in building an open internet, this is an important problem for our industry to solve now, before we witness further fragmentation and vendor lock-in.

Where we are today with IoT

We’re currently in what I like to call the “AOL stage” of IoT—the phase of getting devices connected at scale, and working through the balance of proprietary vs. open approaches.

Back in the 1990s, America Online opened up access to the internet to the masses with an easy-to-use CD; by popping it in, anyone could easily sign up and get connected. However, the tradeoff for this simplicity was getting locked into the AOL ecosystem as the conduit for communication and search.

Over time, users became savvier, realizing they could connect to the internet directly through their ISPs and access more powerful search capabilities (Google, for example). As more people came online through their medium of choice, innovation picked up speed, giving birth to the internet boom and the ecosystem we know today.

IoT is inherently heterogeneous and diverse, made up of a wide variety of technologies and domain-specific use cases.

To date, the market has created a dizzying landscape of proprietary IoT platforms to connect people and operations, each with wildly different methods for data collection, security, and management. It’s like having many different “AOLs” trying to connect devices to the internet—needless to say, this fragmentation has resulted in unnecessary complications.

Companies beginning their IoT journeys are locked in with the vendor they start with, and will be subject to additional costs or integration issues when they look to scale deployments and take on new use cases. Simply put, IoT’s diversity has become a hindrance to its own growth. 

To avoid going down this path, we must build an open ecosystem as our foundation for IoT and edge computing. It’s only when open standards are set that we can scale the commercialization of offerings and services, and focus on realizing ROI.

Open ecosystems facilitate scale

What would an open ecosystem for IoT look like? When creating an ecosystem, there’s a spectrum of approaches you can take, ranging from closed to open philosophies. Closed ecosystems are based on closely governed relationships, proprietary designs, and, in the case of software, proprietary APIs.

The tight control of closed ecosystems, sometimes referred to as “walled gardens,” can provide a great customer experience, but comes with a premium cost and less choice. Apple is a widely cited example of this approach.

At the other end of the spectrum, open approaches offer APIs and tools that anyone can program against. These enable an ecosystem of products and services whose value derives from the sum of its parts.

Open-source software like Android is an example; it’s a key driver of a truly open, vendor-neutral ecosystem because of how it empowers developers. Having an open standard like Android’s operating system for developers to build upon not only promotes further innovation but also bolsters a network effect. 

To fully grasp the business trade-offs of closed vs. open ecosystems, let’s compare Android and Apple’s iOS. While Apple provides a curated experience through deep software/hardware integration, Android device makers have less control over the overall experience and therefore need to find other differentiators.

Nevertheless, openness facilitates choice and scale—Android has over 70 percent of the global mobile OS market share. Even with Android’s openness, providers like Samsung have still been able to carve out market share by investing in innovation and a broader device ecosystem strategy.

An open future for the IoT

The IoT can have as great of an impact as the internet has had, but generating hundreds of closed, siloed ecosystems dictated by vendor choice is not the path to scale. A bright future for IoT is dependent upon our ability to come together as an industry to build an open ecosystem as our foundation.

Across hardware, operating systems, connectivity, applications, and cloud, we must bridge key elements and unify, rather than reinvent, standards in order to empower developers to focus on value creation.

Commercial offerings built on top of that open foundation may very well take a more “closed” approach; however, starting development with an open foundation will always provide the most scalability, flexibility, and transparency to maximize options for the long term.

Open-source collaboration is an excellent accelerator for this open foundation. The Linux Foundation’s LF Edge and Kubernetes IoT Edge Working Group, and the Eclipse Foundation’s IoT and Edge Native Working Groups are just a few of the initiatives exploring architectures and building frameworks to unite industry efforts and enable IoT and edge computing ecosystems to scale.

As they say, the whole can be greater than the sum of its parts, and I look forward to seeing this immense potential become a reality when we have a common foundation to innovate on.

The post Why IoT Needs an Open Ecosystem to Succeed appeared first on ReadWrite.