Microsoft has lately been working on enhancing its cloud computing services. With the 2018 new year approaching in about a month, we can say that 2017 was a great year for Azure: Microsoft has been introducing new capabilities and improving the existing ones.
At the Structure conference in San Francisco last week, Tom Krazit from GeekWire was able to talk to Mark Russinovich, the CTO of Microsoft Azure, about the intersection of serverless computing techniques and edge computing. For your reading convenience, we have embedded the interview conversation below:
GeekWire: One thing I'm having a little bit of trouble understanding is that serverless is supposed to offer a degree of portability to other cloud providers (according to some backers). But it seems like it's almost the opposite: if you build around a certain provider's functions, it's not going to be that easy to migrate what you've just done to another function provider.

Mark Russinovich: I think that if you take a look at the serverless offerings - ours, Azure Functions, (Amazon Web Services') Lambda, Google Cloud Functions, OpenWhisk - they're all different. Basically, they're application environments, so you need to write code and operations that work around those specific offerings. As far as portability goes, I think when you talk about serverless, you're talking about the "as a service" aspect as a key part of it. That also feeds into the operational requirements and design of an application that's leveraging serverless: what capabilities the platform offers, what SLAs it has, what kind of performance it delivers.

When it comes to portability, meaning, "can I run this Azure function outside of Azure?" - this has been an explicit goal of ours. It's open source, so I can actually take that runtime and put it on anything. I can run it on other clouds, for example. But you don't have the "as-a-service" aspect of it at that point. If you take a look at serverless - the pure, canonical definition that we've seen the industry adopt - it is event-driven, meaning it auto-scales, and then it's micro-billed. But if you're going to run it on your own infrastructure, micro-billing doesn't apply, and serverless doesn't necessarily apply, because you're going to have to manage the infrastructure underneath it and write the scaling logic. We actually believe in a broader definition, and that is that serverless is typically micro-billed, auto-scaled (apps).
But auto-scaled and event-driven generally apply only to short-lived pieces of code. Functions typically run for less than a second. We believe in serverless expanding to include long-running code - long-running code where the platform is doing the auto-scaling, turning things off and turning things on, as it learns and understands the application's requirements. If you take a look at Azure Container Instances, that's an example of us stretching the definition of serverless. You can take a container, a Docker container, and deploy it into Azure. You don't specify any virtual machines. You pay only for the time that that container is running, and it could be a long-running container, not spinning up in response to some event.

GeekWire: What do you consider long-running?

Russinovich: Minutes, hours.

GeekWire: In what cases would you want to do something built around events or functions that runs for hours?

Russinovich: Well, if you've got background aspects of the application, you don't want to specifically dedicate virtual machines to it; you want to spin these things up really quickly and have them run where you're only paying for the resources as you need them. And you're not worried about the infrastructure underneath.

GeekWire: Does Azure support an open serverless model that could run across different clouds, with the understanding that I'm going to manage the hardware?

Russinovich: For Azure Functions, that runtime is on GitHub. Somebody can take it and run it wherever, and then run their own thing.

GeekWire: Do you think serverless is the model for edge? Is it the way that edge would be deployed?

Russinovich: No, not necessarily. The definition of serverless is a little bit [pauses] not well-defined.
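To make the "canonical" definition Russinovich mentions concrete - event-driven, auto-scaled, micro-billed - here is a toy sketch of the model. This is purely illustrative: it is not the Azure Functions runtime, and every name in it is hypothetical.

```python
import time

class ToyServerlessPlatform:
    """Toy sketch of the 'canonical' serverless model: code runs only in
    response to events, and billing is metered per invocation.
    Hypothetical illustration only - not the Azure Functions runtime."""

    def __init__(self, price_per_ms=1e-6):
        self.handlers = {}        # event type -> registered function
        self.price_per_ms = price_per_ms
        self.billed = 0.0         # total charged so far ("micro-billing")

    def on(self, event_type, handler):
        # Registering a handler costs nothing: no code runs until an event fires.
        self.handlers[event_type] = handler

    def fire(self, event_type, payload):
        # The function runs (and is billed) only for the event's duration.
        start = time.perf_counter()
        result = self.handlers[event_type](payload)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        self.billed += elapsed_ms * self.price_per_ms
        return result

platform = ToyServerlessPlatform()
platform.on("blob_uploaded", lambda name: f"thumbnail generated for {name}")
print(platform.fire("blob_uploaded", "photo.jpg"))
print(f"billed so far: ${platform.billed:.12f}")
```

As Russinovich notes, if you run the open-sourced runtime on your own infrastructure, the platform-managed pieces sketched here (the scaling and the metering) become your problem, which is why "serverless" stops fully applying.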
When I take a look at the edge platform that we see developing, there's a spectrum of devices and capabilities - from the tiny little device that is capable of running just a small piece of code, to PC-class devices, to edge deployments like Azure Stack, which is a very rich, highly available, functional system with PaaS (platform-as-a-service) services. If you take a look at the computation that people will be pushing out, it's not just serverless. There are going to be long-running machine-learning modules, there are going to be Docker containers executing long-running code that's part of the application, and there are going to be parts of the application that are functions. If you take a look at where functions fit into cloud application architecture today, a lot of times they serve as glue between the application and the outside world, or glue between different components in the application. But there are long-running aspects to the application that functions are supporting.

GeekWire: (Long, rambling and, in hindsight, quite-convoluted question.)

Russinovich: So you're saying, why (do) serverless on the edge?

GeekWire: Yeah.

Russinovich: If you take a look at the edge, there are events that you want to respond to. Having custom pieces of code to respond to those events - they're effectively very specialized microservices, event-driven microservices. So it's a nice way to decompose an architecture into, "Here's the long-running stuff that's just going to be continuously doing things," and "Here's the stuff that occasionally will get executed, and here's a tiny piece of code responsible for just that event." But I think (serverless is) more of a developer-architecture, or application-architecture, benefit - especially when you're out on the edge and you're not taking advantage of the micro-billing and the elasticity that you get in the cloud, because out on the edge, you've got a fixed set of hardware resources.
There, it's just the convenience of event-driven architectures.

GeekWire: With the fixed set of resources, does serverless give you more bang for your buck that way than if you ran a more traditional application architecture?

Russinovich: It definitely can. There are considerations as you design your architecture, given there's a certain amount of RAM, for example, on the device. I could deploy a bunch of Docker containers and (for) each one specify RAM, and they're going to be basically assigned that RAM regardless of whether they need to use it or not. With functions, I can have them spin up and spin down and then share the resources: when one's not active, another one can use it. I still have to be careful as I design my application to make sure that, for the throughput and latency requirements of the application on the edge device, I don't run into a bottleneck where lots of functions want to or need to run at the same time, and now I don't have enough resources to run them.

I think that we're at the first steps of this. Right now, if you take a look at edge computing, we're going from a world where "it's a fixed function: I wrote this piece of code, it is burned into the thing basically, and I never touch it," to one that's now, "I've decomposed your application, I'm pushing it out, and it's a dynamically updatable piece of hardware." (And then in the future) to, "I want to actually take a modern microservices application architecture and run it, and I want to do so reliably, and I want to do so across a fleet of devices that might have different levels of resources, but I want to have a high degree of confidence that when I take this application and push it out there, it's actually going to behave OK." We're starting the journey down that (path).
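The RAM trade-off Russinovich describes - containers reserving their peak memory for their whole lifetime, versus functions sharing a pool sized for the concurrently-active set - can be sketched with some back-of-envelope arithmetic. All the numbers below are made up for illustration.

```python
# Hypothetical back-of-envelope comparison of the RAM trade-off on a
# fixed-resource edge device. Numbers are invented for illustration.

DEVICE_RAM_MB = 1024

# Container model: each workload reserves its peak RAM up front and holds it
# for its whole lifetime, whether or not it is active at the moment.
container_reservations = [256, 256, 256, 256]   # four containers, peak RAM each
containers_fit = sum(container_reservations) <= DEVICE_RAM_MB

# Function model: workloads spin up and down, so only the concurrently-active
# set needs RAM at once. Suppose at most two of the four ever run together.
peak_concurrent = 2
function_peak_ram = peak_concurrent * 256
functions_headroom = DEVICE_RAM_MB - function_peak_ram

print(f"containers fit: {containers_fit}, headroom: "
      f"{DEVICE_RAM_MB - sum(container_reservations)} MB")
print(f"functions peak usage: {function_peak_ram} MB, "
      f"headroom: {functions_headroom} MB")

# The caveat from the interview: if all four functions ever need to run at
# once, peak usage jumps back to 1024 MB and the headroom disappears - which
# is exactly the bottleneck Russinovich says you have to design around.
```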
It's going to take a while for that level of sophistication to show up, but I think there's definitely a promising future there as edge becomes more and more strategic, in the ability to have rich applications out on the edge.

GeekWire: I was going to ask: is anybody actually doing this?

Russinovich: We've seen a tremendous amount of interest in this as devices are getting more and more intelligent. A big driver of this, actually, is machine learning on the edge - AI on the edge. We're seeing a lot of interest in that. This is why you're seeing us being able to train a model in the cloud, take that model, and push it out to the edge. Or push it out to a mobile device and run it there locally. The model is typically part of a larger application. And I want to compose that model as a self-contained unit, but then have it interact with other things. For example, in an event-driven architecture, the model could be doing something like, "I'm detecting objects. And there are certain objects that, when I see them, I want to take certain actions on." Now that becomes an event: "I see this object, I signal a generated event, and then that causes a function that's responsible. Hey, you see this object? Execute this piece of code."

GeekWire: What's a real-world application for this kind of scenario?

Russinovich: There are lots of retail scenarios where you want actions to be triggered based on things that are happening in the warehouse environment and the store environment, at the checkout line; those are definitely places where we've seen customers going and taking advantage of this, or wanting to.

GeekWire: Will edge computing mostly be event-driven, in your opinion, or would there be a reason to run a more traditional approach?

Russinovich: I think there'll be both.
Take image detection: there's object detection that I want on the edge. There will be something continuously running that will be capturing images and processing them. And then there will be some events generated off of what the environment is doing. So there will be both long-running things and event-driven pieces of the architecture.

GeekWire: Do you think there will be (an industry-standards approach to serverless)?

Russinovich: I don't know. People have talked about (the fact that) at some point it might get standardized. I think we're so early in this. You don't want to standardize before everybody has finished evolving and innovating, because that just stifles it. Nobody knows yet what the core is that we should be standardizing on. It's hard to say how that's going to evolve, because if you take a look at the way that we've evolved serverless with Azure Functions, there are big differences between it and (Amazon Web Services') Lambda. The bindings that we've got are something you don't see in other functions runtimes. Now, will everybody decide that they agree with that and that they all want that, and we standardize on something like that? That remains to be seen. And I don't think (Microsoft is) done with those kinds of innovations yet, either.
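The edge pattern Russinovich sketches - a long-running detector raising events, with small event-driven functions attached to specific objects - might look like the following. This is a hypothetical sketch: the detector is a stub standing in for an ML model that would, in practice, be trained in the cloud and pushed to the edge device.

```python
# Hypothetical sketch of Russinovich's edge pattern: a long-running detection
# loop raises events for objects of interest, and small event-driven functions
# ("specialized microservices") react to them. The detector is a stub here.

actions = {}   # object label -> handler function for that event

def on_object(label, handler):
    # Attach a tiny piece of code responsible for just this event.
    actions[label] = handler

def process_frame(detected_labels, log):
    # Long-running part: this runs on every captured frame.
    for label in detected_labels:
        if label in actions:
            # Event part: fires only when a watched object appears.
            log.append(actions[label](label))

# Retail-style examples, as in the interview's warehouse/store scenarios.
on_object("forklift", lambda l: f"alert: {l} near checkout")
on_object("spill",    lambda l: f"dispatch cleanup for {l}")

log = []
for frame in [["person", "cart"], ["forklift"], ["spill", "person"]]:
    process_frame(frame, log)

print(log)   # only the watched objects produced events
```

The split matches the interview's framing: the frame loop is the "continuously running" piece, while each handler is a short-lived function that exists only for its event.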
Throughout the interview, Russinovich highlighted how Microsoft is open-sourcing its technology to make it more adaptable and portable, and how Azure stacks up against competitors such as Amazon Web Services and Google Cloud. It seems Microsoft wants to fit into how developers already work, by supporting the services they favor, such as GitHub.