- Serverless Computing 101
- Architectural Comparison with SaaS, PaaS, IaaS, and VPS/CoLo servers
- Benefits and Issues
- Why: Cloud Deployment is Easier
- Why: Lower Cost
- Why: Elasticity versus scalability
- Why: DevOps - Easier to manage - Git Hooks
- Why: Productivity of development - "exposed as systems"
- Why Not: Performance can Lag in Startup
- Why Not: Resource limits
- Why Not: Monitoring and debugging
- Why Not: Security is harder
- Why Not: Privacy
- Why Not: Lack of Standards for Business Logic and Portability
- Why Not: Vendor lock-in and Problems in Migration
- Open Source Serverless
- Serverless Functions - Platforms and Majors
- Serverless databases
Serverless Computing 101
What: most are Function-as-a-Service (FaaS) platforms
Serverless computing is a cloud-computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources.
Architectural Comparison with SaaS, PaaS, IaaS, and VPS/CoLo servers
Overview of Cloud Hosting
On-Premises Facilities. On-prem means equipment hosted in a business's own facilities and managed by the business itself, much like traditional internal IT or mainframe/minicomputer resources.
Co-Located, or cage-based self-managed servers. These are similar to on-prem, but instead of sitting on a company's own premises, the machines live in cages at a facility closer to the internet backbone, still self-managed. They can, however, be switched on or off through remote management rather than physical access.
VPS (Virtual Private Server). Virtualization lets a physical machine be shared out in fixed slices; a slice such as 1 CPU, 1 GB memory, 100 Mbps network, and 10 GB disk is common for servers.
IaaS (Infrastructure as a Service) is essentially the cloud version of "iron": CPUs, storage, networking, etc. are provided as packaged resources. This is the basic concept of AWS, where scaling means provisioning more instances on demand.
Platform as a Service (PaaS) application hosting services also hide "servers" from developers. However, such hosting services typically keep at least one server process running to receive external requests. Scaling is achieved by booting more server processes, which the developer is charged for directly. Consequently, scalability remains visible to the developer.
API-as-a-Service, Web Services, or Microservices.
FaaS, or Function as a Service, is a finer-sliced version of PaaS. FaaS, by contrast, does not require any server process to be constantly running. While an initial request may take longer to handle than on an application hosting platform (up to several seconds), caching may enable subsequent requests to be handled within milliseconds. As developers pay only for function execution time (and not for process idle time), lower costs at higher scalability can be achieved (at the cost of latency).
SaaS - Software as a Service. Like Shopify, Salesforce, etc., these are ready-to-go services. Business customers are neither aware of nor concerned with the inner workings. The software may or may not be customized.
B2B, Supply Chain Mega-Meta Networks, Enterprise Service Buses
Benefits and Issues
Why: Cloud Deployment is Easier
Serverless computing can simplify the process of deploying code into production. Scaling, capacity planning and maintenance operations may be hidden from the developer or operator.
Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.
Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It can be a form of utility computing.
Why: Lower Cost
Serverless can be more cost-effective than renting or purchasing a fixed quantity of servers, which generally involves significant periods of underutilization or idle time.
It can even be more cost-efficient than provisioning an autoscaling group, due to more efficient bin-packing of the underlying machine resources.
This can be described as pay-as-you-go or "bare-code" computing, as you are charged solely for the time and memory allocated to run your code, with no fees for idle time.
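As a back-of-the-envelope sketch of this billing model (the per-GB-second and per-request rates below are illustrative assumptions, not any provider's actual price list):

```python
def faas_cost(invocations, avg_duration_s, memory_gb,
              price_per_gb_s=0.0000166667,      # assumed rate per GB-second
              price_per_million_req=0.20):       # assumed rate per 1M requests
    # Charged only for GB-seconds actually consumed plus a per-request fee;
    # idle time between invocations costs nothing.
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * price_per_gb_s + invocations / 1_000_000 * price_per_million_req

# 1M invocations/month at 200 ms each with 128 MB of memory:
monthly = faas_cost(1_000_000, 0.2, 0.125)  # roughly $0.62 at the assumed rates
```

Contrast this with a fixed server billed 24/7 regardless of traffic: the function bill scales to zero when invocations do.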
Immediate cost benefits come from the absence of operating-system costs, including licences, installation, dependencies, maintenance, support, and patching.
Why: Elasticity versus scalability
A serverless architecture means that developers and operators do not need to spend time setting up and tuning autoscaling policies or systems; the cloud provider is responsible for scaling the capacity to the demand. As Google puts it: ‘from prototype to production to planet-scale.’
Because cloud-native systems inherently scale down as well as up, they are described as elastic rather than merely scalable.
Why: DevOps - Easier to manage - Git Hooks
Small teams of developers are able to run code themselves without depending on teams of infrastructure and support engineers; more developers are becoming DevOps-skilled, and the distinction between software developer and hardware engineer is blurring.
Why: Productivity of development - "exposed as systems"
With function as a service, the units of code exposed to the outside world are simple functions. This means that typically, the programmer does not have to worry about multithreading or directly handling HTTP requests in their code, simplifying the task of back-end software development.
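As a sketch of this model: an AWS Lambda-style Python handler is just a plain function receiving an event and a context; the platform does the HTTP parsing, threading, and scaling. The event shape below (API Gateway-style `queryStringParameters`) is an assumption for illustration.

```python
import json

def handler(event, context):
    # The platform invokes this plain function once per request; the
    # developer writes no server loop, thread pool, or HTTP parsing code.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```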
Why Not: Performance can Lag in Startup
Infrequently used serverless code may suffer from greater response latency than code that is continuously running on a dedicated server, virtual machine, or container. This is because, unlike with autoscaling, the cloud provider typically "spins down" the serverless code completely when not in use. This means that if the runtime (for example, the Java runtime) requires a significant amount of time to start up, it will create additional latency.
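A common mitigation is to do expensive initialization at module scope, so it runs once per cold start and is reused across warm invocations. In the sketch below, `heavy_client` is a hypothetical stand-in for an SDK client or database connection pool.

```python
import time

# Module scope runs once per cold start, when the platform loads the code;
# warm invocations reuse whatever was built here.
_t0 = time.perf_counter()
heavy_client = {"connected": True}        # hypothetical stand-in for an SDK
INIT_SECONDS = time.perf_counter() - _t0  # client or DB connection setup

def handler(event, context):
    # Only per-invocation work happens here; the initialization cost above
    # is not paid again while the container stays warm.
    return {"client_ready": heavy_client["connected"],
            "init_seconds": INIT_SECONDS}
```

Providers also offer keep-warm options (scheduled pings, provisioned concurrency) when even the first-request latency is unacceptable.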
Why Not: Resource limits
Serverless computing is not suited to some computing workloads, such as high-performance computing, because of the resource limits imposed by cloud providers, and also because it would likely be cheaper to bulk-provision the number of servers believed to be required at any given point in time.
Why Not: Monitoring and debugging
Diagnosing performance or excessive resource usage problems with serverless code may be more difficult than with traditional server code, because although entire functions can be timed, there is typically no ability to dig into more detail by attaching profilers, debuggers or APM tools. Furthermore, the environment in which the code runs is typically not open source, so its performance characteristics cannot be precisely replicated in a local environment.
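Because profilers and APM agents usually cannot be attached, teams often compensate by emitting structured timing logs from inside the function itself. A minimal decorator sketch, assuming the provider captures stdout/stderr into its log service (e.g. CloudWatch):

```python
import functools
import json
import sys
import time

def timed(fn):
    # Emits one structured timing record per invocation; in practice these
    # lines are shipped to the provider's log service for later analysis.
    @functools.wraps(fn)
    def wrapper(event, context):
        start = time.perf_counter()
        try:
            return fn(event, context)
        finally:
            record = {"function": fn.__name__,
                      "duration_ms": round((time.perf_counter() - start) * 1000, 2)}
            print(json.dumps(record), file=sys.stderr)
    return wrapper

@timed
def handler(event, context):
    return {"ok": True}
```

This gives per-invocation timings without any agent, at the cost of instrumenting the code by hand.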
Why Not: Security is harder
Serverless is sometimes mistakenly considered more secure than traditional architectures. While this is true to an extent, because OS vulnerabilities are handled by the cloud provider, the total attack surface is significantly larger: a serverless application has many more components than a traditional one, and each component is an entry point. Moreover, the security solutions customers previously used to protect their cloud workloads become irrelevant, as customers cannot control or install anything at the endpoint and network level, such as an intrusion detection/prevention system (IDS/IPS).
This is intensified by the monoculture of the provider's server fleet: a single flaw can be exploited globally.
The "solution to secure serverless apps is close partnership between developers, DevOps, and AppSec, also known as DevSecOps. Find the balance where developers don’t own security, but they aren’t absolved from responsibility either. Take steps to make it everyone’s problem. Create cross-functional teams and work towards tight integration between security specialists and development teams. Collaborate so your organization can resolve security risks at the speed of serverless. - Protego
Why Not: Privacy
Many serverless function environments are based on proprietary public clouds, so privacy implications such as shared resources and access by external employees have to be considered. However, serverless computing can also be done in a private cloud environment or even on-premises, for example using the Kubernetes platform. This gives companies full control over privacy mechanisms, just as with hosting in traditional server setups.
Why Not: Lack of Standards for Business Logic and Portability
Serverless computing is covered by International Data Center Authority (IDCA) in their Framework AE360.
- However, the portability part can be an issue when moving business logic from one public cloud to another; this is the problem Docker was created to address. The Cloud Native Computing Foundation (CNCF) is also working with Oracle on developing a serverless specification.
Why Not: Vendor lock-in and Problems in Migration
Serverless computing is provided as a third-party service. Applications and software that run in the serverless environment are by default locked to a specific cloud vendor.
Therefore, serverless can cause multiple issues during migration.
- SRC Serverless computing - Wikipedia
- FaaS, PaaS, and the Benefits of the Serverless Architecture
- Serverless Architectures
Open Source Serverless
Project Riff - used in Pivotal Function Service
Project Riff is an open source serverless platform implementation built on Kubernetes by Pivotal Software. Project Riff is the foundation of Pivotal Function Service.
OpenWhisk - Open Source
OpenWhisk was initially developed by IBM with contributions from Red Hat, Adobe, and others. OpenWhisk is the core technology in IBM Cloud Functions.
Serverless Framework - Nodejs
The Serverless Framework is a free and open-source web framework written in Node.js. A Serverless app can be as simple as a couple of Lambda functions that accomplish some tasks, or an entire back-end composed of hundreds of Lambda functions. Serverless supports all runtimes offered by the chosen cloud provider.
Serverless was the first framework developed for building applications on AWS Lambda, a serverless computing platform provided by Amazon as a part of Amazon Web Services. Now, applications developed with Serverless can be deployed to other function as a service providers, including Microsoft Azure with Azure Functions, IBM Bluemix with IBM Cloud Functions based on Apache OpenWhisk, Google Cloud using Google Cloud Functions, Oracle Cloud using Oracle Fn, Kubeless based on Kubernetes, Spotinst and Webtask by Auth0.
It was first introduced in October 2015 under the name JAWS. Serverless is developed by Austen Collins and maintained by a full-time team.
- Serverless Framework - Wikipedia
- Github Serverless Framework – Build web, mobile and IoT applications using AWS Lambda, Azure Functions, Google CloudFunctions
Serverless Functions - Platforms and Majors
Google App Engine and Google Cloud
In 2008, Google released Google App Engine, which featured metered billing for applications that used a custom Python framework, but could not execute arbitrary code.
Google Cloud Platform offers Google Cloud Functions since 2016.
PiCloud - seems inactive??
PiCloud, released in 2010, offered FaaS support for Python.
AWS Lambda
Introduced by Amazon in 2014, AWS Lambda was the first abstract serverless computing offering from a major public cloud infrastructure vendor.
IBM Cloud Functions
IBM has offered IBM Cloud Functions in the public IBM Cloud since 2016.
Azure Functions
Microsoft Azure offers Azure Functions both in the Azure public cloud and on-premises via Azure Stack.
Oracle Fn Project
Oracle introduced Fn Project, an open source serverless computing framework offered on Oracle Cloud Platform and available on GitHub for deployment on other platforms.
Serverless databases
Several serverless databases have emerged in the last few years. These systems extend the serverless execution model to the RDBMS, eliminating the need to provision or scale virtualized or physical database hardware.
Amazon Aurora
Amazon Aurora offers a serverless version of its MySQL- and PostgreSQL-compatible databases, providing on-demand, auto-scaling configurations.
Azure Data Lake
This is a highly scalable data storage and analytics service. The service is hosted in Azure, Microsoft's public cloud. Azure Data Lake Analytics provides a distributed infrastructure that can dynamically allocate or de-allocate resources so customers pay for only the services they use.
Google Cloud Datastore
Google Cloud Datastore is an eventually-consistent document store. It offers the database component of Google App Engine as a standalone service. Firebase, also owned by Google, includes a hierarchical database and is available via fixed and pay-as-you-go plans.
FaunaDB - GraphQL API