
Beyond Cloud Computing: The Advent Of GDN

Cloud is mainstream today, and its evolution ushered in new businesses, business models, and jobs such as SRE, DevOps, and CloudOps. Conceptually, though, the model is still the client-server model of the 1980s, with the distance between client and server growing each decade: first Client → Server (servers a few feet from the user), then Client → Data Center (servers tens or hundreds of miles away), and finally Client → Cloud (servers thousands of miles away).

On the other hand, the demands of enterprise businesses today are very different from what they were in the '80s and '90s. In the '80s, software was a niche area. Today, it is the main driver for companies, and survival or failure depends on how well a company can see the shifts, adopt the right technology, and innovate.

Let’s understand this with a few examples:

  • In the '90s, 100 MB of data was a big deal. I still recall the excitement of getting 5.25-inch high-density floppy disks that could store 1.2 MB of data. Today, 100 TB of data is trivial.
  • Similarly, in the '90s, the concept of digital data privacy or data regulation was insignificant. Today, we have GDPR, the California Consumer Privacy Act (CCPA), and many more regulations to secure data.
  • Also, can you recall how many enterprises in the '80s and '90s provided services globally and dealt with millions of users? Not many, right? Today, pretty much every company serves global users.

Companies that saw the shifts early in the 2000s and 2010s adopted the cloud in its infancy and benefited handsomely (e.g., Netflix), while others found seemingly rational reasons not to and got decimated (e.g., Blockbuster). In my opinion, enterprises today have a choice: see the seismic shifts happening now and embrace the next evolution, or get left behind out of fear.

This blog post focuses on how these shifts translate into what is needed beyond current cloud computing.

Moving Data vs Moving Compute

From the client → server days of the '80s to current cloud computing, the default approach has been to move data from the source to where the services are running, whether that means moving it 100 miles or 10,000 miles.

The challenge is that new-age enterprises generate data in terabytes, petabytes, and more. It’s not just IoT / IIoT / smart manufacturing; it also includes retail, gaming, and other industries where millions of users request or generate data via forms, clickstreams, logs, etc. Moving this data thousands of miles to a distant cloud location is a recipe for a bad user experience, a slower time to act, and a host of other issues: low reliability, network congestion, high compute costs, latency, and so on. It does not matter whether the workload is event-driven or request-response.

A better alternative is to move compute closer to the source of the data, especially for use cases that deal with interaction, personalization, real-time analytics and actuation, complex event processing, real-time log processing, and the like. Processing data closer to the source turns data into dollars more quickly through a better user experience, personalization, faster actuation, and reduced egress and other cloud costs. There are still use cases where moving data to compute makes more sense and the current cloud model is a good fit, for example batch analytics and long-term storage.
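As a rough illustration of the difference, here is a minimal sketch in plain Python (the event shape and the aggregation logic are hypothetical, not any vendor's API) contrasting shipping raw clickstream events to a distant region with summarizing them at an edge location and shipping only the aggregate:

```python
from collections import Counter

# A batch of raw clickstream events generated near the user
# (in practice this could be millions of events per minute).
events = [
    {"user": "u1", "page": "/cart", "bytes": 512},
    {"user": "u2", "page": "/home", "bytes": 498},
    {"user": "u1", "page": "/checkout", "bytes": 530},
]

def ship_raw_to_cloud(events):
    """'Move the data': send every raw event thousands of miles
    to a central region, paying latency and egress on each byte."""
    payload_bytes = sum(e["bytes"] for e in events)
    return payload_bytes  # everything crosses the WAN

def aggregate_at_edge(events):
    """'Move the compute': summarize close to the source and send
    only the small aggregate; raw events never leave the edge."""
    page_views = Counter(e["page"] for e in events)
    summary = {"page_views": dict(page_views), "event_count": len(events)}
    return summary  # a few hundred bytes instead of the full stream

print("raw bytes over the WAN:", ship_raw_to_cloud(events))
print("edge summary sent instead:", aggregate_at_edge(events))
```

The point is not the code itself but the shape of what crosses the WAN: a small summary instead of the full raw stream.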

Privacy First vs Privacy Last

Currently, enormous regulatory activity is happening around the globe as governments move to protect their citizens (or monitor them, where it is not a democracy) from cybercrime and attacks. It is safe to assume that we are just starting down this regulation-and-compliance path and that it will only become more taxing in the coming years.

On the other hand, today's data platforms do not deal with privacy natively. Similarly, most enterprise services treat privacy as an add-on bolted on at the end to comply with regulations. The challenges that come with doing it this way are poor architecture, enormous development and operational costs, technical complexity, and many more.

So, a better alternative is for data platforms (databases, search, file systems) to natively support privacy capabilities so that architects and developers can leverage them right from the beginning. This lets developers geo-fence data and compute to designated locations, route requests intelligently, tokenize data in real time, pseudonymize on demand, and so on.
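To make that concrete, here is a minimal sketch, assuming a hypothetical residency-policy table and a simple hash-based tokenizer (a real platform would use vaulted or format-preserving tokenization), of what "privacy first" could look like when these are primitives rather than add-ons:

```python
import hashlib

# Hypothetical residency policy: which region a record may be stored in.
RESIDENCY_POLICY = {"DE": "eu-central", "FR": "eu-central", "US": "us-east"}

def tokenize(value: str, salt: str = "demo-salt") -> str:
    """Replace a direct identifier with a stable, non-reversible token
    (illustrative only; production systems need proper key management)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def write_order(order: dict) -> dict:
    """Pseudonymize PII and geo-fence the record before it is stored."""
    region = RESIDENCY_POLICY[order["country"]]        # geo-fence by policy
    return {
        "order_id": order["order_id"],
        "customer_token": tokenize(order["email"]),    # PII never stored raw
        "amount": order["amount"],
        "region": region,                              # storage/compute pinned here
    }

print(write_order({"order_id": 42, "email": "anna@example.com",
                   "country": "DE", "amount": 99.0}))
```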

Point Services vs Converged Services

The best way to understand this is with an example.

Say we’re at a coffee shop. You go ahead and order, “One espresso macchiato in a tall cup, please.” I, with a straight face, order like this: “One cup of 2% milk, one shot of espresso, one tablespoon of sugar, and one dollop of foam, mixed together and heated for 3 minutes, then poured into a tall cup, please.”

Interestingly, the latter approach is what most people take when building services to achieve business outcomes. The following is a sample reference architecture for an e-commerce application built from AWS services:

Reference - e-commerce application architecture on AWS cloud

Building apps from point services like the above is akin to how I ordered coffee at the coffee shop. Unfortunately, the cloud providers have done a great job convincing many folks that this is the right technical approach. The folks who benefit from this model are cloud providers, consultants selling expertise, and point-solution vendors. It does not benefit the CTO/CIO/VP/Architect who needs to deliver outcomes with the least complexity and the highest developer velocity.

Again, a better alternative here is to use converged services that provide multiple capabilities (database, search, graphs, streams, and stream processing) from one interface on a single copy of data. Cloud providers cannot offer this on a single copy of data without re-architecting their services.
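As a purely illustrative sketch of the idea (the ConvergedStore class below is hypothetical, not Macrometa's actual SDK), here is what document writes, search, and a change stream look like when they operate on one copy of the data behind one interface:

```python
class ConvergedStore:
    """Hypothetical converged interface: documents, search, and streams
    all operate on the same single copy of data (illustrative only)."""

    def __init__(self):
        self._docs = {}          # the one copy of the data
        self._subscribers = []   # change-stream consumers

    def put(self, key, doc):
        self._docs[key] = doc
        for notify in self._subscribers:   # the stream is a view, not a second copy
            notify({"key": key, "doc": doc})

    def search(self, text):
        return [k for k, d in self._docs.items()
                if text.lower() in str(d).lower()]   # search over the same copy

    def on_change(self, callback):
        self._subscribers.append(callback)

store = ConvergedStore()
store.on_change(lambda event: print("stream event:", event))
store.put("sku-1", {"name": "espresso machine", "price": 249})
print("search hit:", store.search("espresso"))
```

Contrast this with the point-service model, where the database, search index, and message queue each hold and synchronize their own copy of the same records.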

Single vs Multi

Today, the first thing a developer has to do to run a service is select a cloud and, within that cloud, a specific region in which to deploy and run it. There are two big reasons for this:

  • Cloud services are proprietary, which makes multi-cloud very hard.
  • Data and compute services are centralized architectures, which makes multi-region effectively impossible. Centralized architectures use consensus protocols like Raft and Paxos, which rely on synchronous communication between nodes, so don’t be misled by the multi-region claims of centralized architectures. They do not scale well across regions, and certainly not across the globe (see the latency sketch after this list).
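A back-of-the-envelope calculation shows why. The round-trip times below are rough, illustrative numbers, not measurements of any particular cloud; the point is that a majority-quorum write pays a cross-continent round trip on every commit:

```python
# Rough, illustrative round-trip times (ms) from a leader in us-east
# to follower regions -- not measurements of any particular cloud.
rtt_ms = {"us-west": 70, "eu-central": 90, "ap-south": 190, "ap-southeast": 230}

def quorum_commit_latency(rtt_ms, cluster_size=5):
    """A Raft/Paxos-style write commits once a majority acknowledges it,
    so latency is roughly the RTT to the slowest follower needed for quorum."""
    majority_needed = cluster_size // 2 + 1          # leader counts as one vote
    acks_needed_from_followers = majority_needed - 1
    latencies = sorted(rtt_ms.values())
    return latencies[acks_needed_from_followers - 1]

print("per-write commit latency ≈", quorum_commit_latency(rtt_ms), "ms")
# Every write waits on a cross-continent round trip before it is durable,
# versus roughly a millisecond inside a single region.
```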

These characteristics certainly benefit the cloud providers: the more lock-in, the better. The problems that architects and developers have, on the other hand, are things like how to provide disaster recovery with the least effort, how to route requests to the right location or cloud, how to minimize egress costs, and how to maximize developer velocity so more time goes to things that move the needle for the business instead of unnecessary ops complexity.

A better alternative is to free developers from the underlying complexity and from the need to choose a cloud or location. This is not radical: CDNs have been doing it for the last two decades, albeit for static data. What is needed is CDN-like capabilities for more complex services like databases, search, streams, stream processing, and compute. We call that a GDN, i.e., a Global Data Network :-)

Closing Thoughts: Theory vs Practice

I’m, first and foremost, a practitioner, not a writer or analyst. So, as any practitioner does, I spent the last five years building a Global Data Network to address the challenges above. If you want to see how that turned out, I selfishly and highly recommend learning more about Macrometa GDN.
