Faasm: Lightweight Isolation for Efficient Stateful Serverless Computing

Abstract

Serverless computing is an excellent fit for big data processing because it can scale quickly and cheaply to thousands of parallel functions. Existing serverless platforms isolate functions in ephemeral, stateless containers, preventing them from directly sharing memory. This forces users to duplicate and serialize data repeatedly, adding unnecessary performance and resource costs. We believe that a new lightweight isolation approach is needed, which supports sharing memory directly between functions and reduces resource overheads.

We introduce Faaslets, a new isolation abstraction for high-performance serverless computing. Faaslets isolate the memory of executed functions using software-fault isolation (SFI), as provided by WebAssembly, while allowing memory regions to be shared between functions in the same address space. Faaslets can thus avoid expensive data movement when functions are co-located on the same machine. Our runtime for Faaslets, FAASM, isolates other resources, e.g. CPU and network, using standard Linux cgroups, and provides a low-level POSIX host interface for networking, file system access, and dynamic loading. To reduce initialization times, FAASM restores Faaslets from already-initialized snapshots. We compare FAASM to a standard container-based platform and show that, when training a machine learning model, it achieves a 2× speed-up with 10× less memory; for serving machine learning inference, FAASM doubles the throughput and reduces tail latency by 90%.

1 Introduction

Serverless computing is becoming a popular way to deploy data-intensive applications. A function-as-a-service (FaaS) model decomposes computation into many functions, which can effectively exploit the massive parallelism of clouds. Prior work has shown how serverless can support map/reduce-style jobs [42, 69], machine learning training [17, 18] and inference [40], and linear algebra computation [73, 88]. As a result, an increasing number of applications, implemented in diverse programming languages, are being migrated to serverless platforms.

Existing platforms such as Google Cloud Functions [32], IBM Cloud Functions [39], Azure Functions [50], and AWS Lambda [5] isolate functions in ephemeral, stateless containers. The use of containers as an isolation mechanism introduces two challenges for data-intensive applications: data access overheads and the container resource footprint.

Data access overheads are caused by the stateless nature of the container-based approach, which forces state to be maintained externally, e.g. in object stores such as Amazon S3 [6], or passed between function invocations. Both options incur costs due to duplicate data in each function, repeated serialization, and regular network transfers. This results in current applications adopting an inefficient “data-shipping architecture”, i.e. moving data to the computation and not vice versa; such architectures were abandoned by the data management community decades ago [36]. These overheads are compounded as the number of functions increases, reducing the benefit of unlimited parallelism, which is what makes serverless computing attractive in the first place.

The container resource footprint is particularly relevant because of the high-volume and short-lived nature of serverless workloads. Despite containers having a smaller memory and CPU overhead than other mechanisms such as virtual machines (VMs), there remains an impedance mismatch between the execution of individual short-running functions and the process-based isolation of containers. Containers have start-up latencies in the hundreds of milliseconds to several seconds, leading to the cold-start problem in today’s serverless platforms [36, 83]. The large memory footprint of containers limits scalability—while technically capped at the process limit of a machine, the maximum number of containers is usually limited by the amount of available memory, with only a few thousand containers supported on a machine with 16 GB of RAM [51].

Current data-intensive serverless applications have addressed these problems individually, but never solved both; instead, they either exacerbate the container resource overhead or break the serverless model. Some systems avoid data movement costs by maintaining state in long-lived VMs or services, such as ExCamera [30], Shredder [92], and Cirrus [18], thus introducing non-serverless components. To address the performance overhead of containers, systems typically increase the level of trust in users’ code and weaken isolation guarantees. PyWren [42] reuses containers to execute multiple functions; Crucial [12] shares a single instance of the Java virtual machine (JVM) between functions; SAND [1] executes multiple functions in long-lived containers, which also run an additional message-passing service; and Cloudburst [75] takes a similar approach, introducing a local key-value-store cache. Provisioning containers to execute multiple functions and extra services amplifies resource overheads and breaks the fine-grained elastic scaling inherent to serverless. While several of these systems reduce data access overheads with local storage, none provide shared memory between functions, thus still requiring duplication of data in separate process memories.

Other systems reduce the container resource footprint by moving away from containers and VMs. Terrarium [28] and Cloudflare Workers [22] employ software-based isolation using WebAssembly and V8 Isolates, respectively; Krustlet [54] replicates containers using WebAssembly for memory safety; and SEUSS [16] demonstrates serverless unikernels. While these approaches have a reduced resource footprint, they do not address data access overheads, and the use of software-based isolation alone does not isolate resources.

We make the observation that serverless computing can better support data-intensive applications with a new isolation abstraction that (i) provides strong memory and resource isolation between functions, yet (ii) supports efficient state sharing. Data should be co-located with functions and accessed directly, minimizing data-shipping. Furthermore, this new isolation abstraction must (iii) allow scaling state across multiple hosts; (iv) have a low memory footprint, permitting many instances on one machine; (v) exhibit fast instantiation times; and (vi) support multiple programming languages to facilitate the porting of existing applications.

In this paper, we describe Faaslets, a new lightweight isolation abstraction for data-intensive serverless computing. Faaslets support stateful functions with efficient shared memory access and are executed by our FAASM distributed serverless runtime. Faaslets have the following properties, summarizing our contributions:

(1) Faaslets achieve lightweight isolation. Faaslets rely on software fault isolation (SFI) [82], which restricts functions to accessing their own memory. A function associated with a Faaslet, together with its library and language runtime dependencies, is compiled to WebAssembly [35]. The FAASM runtime then executes multiple Faaslets, each with a dedicated thread, within a single address space. For resource isolation, the CPU cycles of each thread are constrained using Linux cgroups [79] and network access is limited using network namespaces [79] and traffic shaping. Many Faaslets can be executed efficiently and safely on a single machine.
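
To make the resource isolation side concrete, the following sketch shows how a runtime could constrain the CPU share of the thread executing a Faaslet by writing to the cgroup filesystem. It is a minimal illustration only, assuming the cgroup v1 cpu controller is mounted and the group directory already exists; the group name, limits, and code are hypothetical and not FAASM’s implementation.

    // Sketch: limit the calling thread to ~10% of one core via cgroup v1.
    // Assumes /sys/fs/cgroup/cpu/faaslet0 has been created by the runtime.
    #include <fstream>
    #include <string>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void writeFile(const std::string &path, const std::string &value) {
        std::ofstream f(path);
        f << value;  // error handling omitted for brevity
    }

    void constrainCurrentThread() {
        const std::string cg = "/sys/fs/cgroup/cpu/faaslet0";
        writeFile(cg + "/cpu.cfs_period_us", "100000");  // 100 ms period
        writeFile(cg + "/cpu.cfs_quota_us", "10000");    // 10 ms quota
        // Attach only this thread (not the whole process), so each
        // Faaslet thread can be constrained individually.
        pid_t tid = static_cast<pid_t>(syscall(SYS_gettid));
        writeFile(cg + "/tasks", std::to_string(tid));
    }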

(2) Faaslets support efficient local/global state access. Since Faaslets share the same address space, they can access shared memory regions with local states efficiently. This allows the co-location of data and functions and avoids serialization overheads. Faaslets use a two-tier state architecture, a local tier provides in-memory sharing, and a global tier supports distributed access to states across hosts. The FAASM runtime provides a state management API to Faaslets that gives fine-grained control over the state in both tiers. Faaslets also support stateful applications with different consistency requirements between the two tiers.
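
The listing below sketches how a function might use such a two-tier state API. The entry point and the functions state_read, state_write, and state_push are hypothetical names introduced for illustration; they are not necessarily the API exposed by FAASM.

    // Sketch of a stateful function using an assumed two-tier state API.
    #include <cstddef>
    #include <cstdint>

    extern "C" {
        // Assumed host-interface declarations: read/write a named value in
        // the local tier, and push it to the global tier across hosts.
        void state_read(const char *key, uint8_t *buffer, size_t len);
        void state_write(const char *key, const uint8_t *buffer, size_t len);
        void state_push(const char *key);
    }

    extern "C" int faaslet_main() {
        const size_t n = 1024;
        uint8_t weights[n];

        // Served from shared memory if another Faaslet on this host already
        // holds "weights"; otherwise fetched from the global tier.
        state_read("weights", weights, n);

        weights[0] += 1;                     // some local update
        state_write("weights", weights, n);  // visible to co-located Faaslets
        state_push("weights");               // make visible on other hosts
        return 0;
    }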

(3) Faaslets have fast initialization times. To reduce cold-start time when a Faaslet executes for the first time, it is launched from a suspended state. The FAASM runtime pre-initializes a Faaslet ahead of time and snapshots its memory to obtain a Proto-Faaslet, which can be restored in hundreds of microseconds. Proto-Faaslets are used to create fresh Faaslet instances quickly, e.g. avoiding the time to initialize a language runtime. While existing work on snapshots for serverless takes a single-machine approach [1, 16, 25, 61], Proto-Faaslets support cross-host restores and are OS-independent.
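
One plausible way to realise such restores is to capture the initialized memory once and give each new instance a copy-on-write view of it. The sketch below illustrates this idea under that assumption, using a Linux memfd; it is not FAASM’s actual implementation, and error handling is omitted.

    // Sketch: snapshot an initialized linear memory and restore new
    // instances with copy-on-write mappings instead of re-initializing.
    // Linux-specific (memfd_create).
    #include <cstddef>
    #include <cstring>
    #include <sys/mman.h>
    #include <unistd.h>

    struct Snapshot {
        int fd;       // anonymous memory file holding the captured image
        size_t size;  // size of the captured linear memory
    };

    Snapshot capture(const void *linearMemory, size_t size) {
        int fd = memfd_create("proto-faaslet", 0);
        ftruncate(fd, static_cast<off_t>(size));
        void *dst = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        std::memcpy(dst, linearMemory, size);  // one-off copy at snapshot time
        munmap(dst, size);
        return {fd, size};
    }

    void *restore(const Snapshot &snap) {
        // MAP_PRIVATE gives copy-on-write semantics: pages are shared with
        // the snapshot until the new instance writes to them.
        return mmap(nullptr, snap.size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE, snap.fd, 0);
    }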

(4) Faaslets support a flexible host interface. Faaslets interact with the host environment through a set of POSIX-like calls for networking, file I/O, global state access, and library loading/linking. This allows them to support dynamic language runtimes and facilitates the porting of existing applications, such as CPython, by changing fewer than 10 lines of code. The host interface provides just enough virtualization to ensure isolation while adding a negligible overhead.
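
As an illustration of such a host interface, the sketch below shows how an open-like call from WebAssembly code might be handled: the runtime resolves the guest’s string, confines the path to a per-function virtual root, and only then delegates to the real POSIX call. The function names and the virtual root are assumptions for illustration, not FAASM’s actual interface.

    // Sketch of a host-interface function backing a POSIX-like "open" call.
    #include <cstdint>
    #include <fcntl.h>
    #include <string>

    // Hypothetical helper: copy a NUL-terminated string out of the
    // Faaslet's linear memory (bounds-checked by the runtime).
    std::string getStringFromGuest(uint32_t guestPtr);

    int32_t host_open(uint32_t pathPtr, int32_t flags) {
        std::string path = getStringFromGuest(pathPtr);

        // Confine the function to its own virtual filesystem root.
        if (path.find("..") != std::string::npos) {
            return -1;  // reject path traversal
        }
        std::string hostPath = "/faaslet-fs/" + path;

        // Delegate to the real POSIX call; in a full runtime the returned
        // descriptor would be mapped into a per-Faaslet descriptor table.
        return ::open(hostPath.c_str(), flags);
    }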

The FAASM runtime uses the LLVM compiler toolchain to translate applications to WebAssembly and supports functions written in a range of programming languages, including C/C++, Python, TypeScript, and JavaScript. It integrates with existing serverless platforms, and we describe its use with Knative [33], a state-of-the-art platform based on Kubernetes.

To evaluate FAASM’s performance, we consider a number of workloads and compare them to a container-based serverless deployment. When training a machine learning model with SGD [68], we show that FAASM achieves a 60% improvement in run time, a 70% reduction in network transfers, and a 90% reduction in memory usage; for machine learning inference using TensorFlow Lite [78] and MobileNet [37], FAASM achieves over a 200% increase in maximum throughput and a 90% reduction in tail latency. We also show that FAASM executes a distributed linear algebra job for matrix multiplication using Python/Numpy with negligible performance overhead and a 13% reduction in network transfers.

2 Isolation vs. Sharing in Serverless

Sharing memory is fundamentally at odds with the goal of isolation, hence providing shared access to in-memory state in a multi-tenant serverless environment is a challenge.

Table 1 contrasts containers and VMs with other potential serverless isolation options, namely unikernels [16], in which minimal VM images are used to pack tasks densely on a hypervisor, and software-fault isolation (SFI) [82], which provides lightweight memory safety through static analysis, instrumentation, and runtime traps. The table lists whether each fulfills three key functional requirements: memory safety, resource isolation, and sharing of in-memory state. A fourth requirement is the ability to share a filesystem between functions, which is important for legacy code and to reduce duplication with shared files.

The table also compares these options on a set of non-functional requirements: low initialization time for fast elasticity; small memory footprint for scalability and efficiency; and support for a range of programming languages.

Containers offer an acceptable balance of features if one sacrifices efficient state sharing—as such they are used by many serverless platforms [32, 39, 50]. Amazon uses Firecracker [4], a “micro VM” based on KVM with similar properties to containers, e.g. initialization times in the hundreds of milliseconds and memory overheads of megabytes.

Containers and VMs compare poorly to unikernels and SFI on initialization times and memory footprint because of their level of virtualization. They both provide complete virtualized POSIX environments, and VMs also virtualize hardware. Unikernels minimize their levels of virtualization, while SFI provides none. Many unikernel implementations, however, lack the maturity required for production serverless platforms, e.g. missing the required tooling and a way for non-expert users to deploy custom images. SFI alone cannot provide resource isolation, as it purely focuses on memory safety. It also does not define a way to perform isolated interactions with the underlying host. Crucially, as with containers and VMs, neither unikernels nor SFI can share state efficiently, with no way to express shared memory regions between compartments.

2.1 Improving on Containers

Serverless functions in containers typically share state via external storage and duplicate data across function instances. Data access and serialization introduce network and compute overheads; duplication bloats the memory footprint of containers, already of the order of megabytes [51]. Containers contribute hundreds of milliseconds up to seconds of cold-start latency [83], incurred on initial requests and when scaling. Existing work has tried to mitigate these drawbacks by recycling containers between functions, introducing static VMs, reducing storage latency, and optimizing initialization.

Recycling containers avoids initialization overheads and allows data caching, but sacrifices isolation and multi-tenancy. PyWren [42] and its descendants, Numpywren [73], IBM-PyWren [69], and Locus [66], use recycled containers, with long-lived AWS Lambda functions that dynamically load and execute Python functions. Crucial [12] takes a similar approach, running multiple functions in the same JVM. SAND [1] and Cloudburst [75] provide only process isolation between functions of the same application and place them in shared long-running containers, with at least one additional background storage process. Using containers for multiple functions and supplementary long-running services requires over-provisioned memory to ensure capacity both for concurrent executions and for peak usage. This is at odds with the idea of fine-grained scaling in serverless.

Adding static VMs to handle external storage improves performance but breaks the serverless paradigm. Cirrus [18] uses large VM instances to run a custom storage backend; Shredder [92] uses a single long-running VM for both storage and function execution; ExCamera [30] uses long-running VMs to coordinate a pool of functions. Either the user or provider must scale these VMs to match the elasticity and parallelism of functions, which adds complexity and cost.

Reducing the latency of auto-scaled storage can improve performance within the serverless paradigm. Pocket [43] provides ephemeral serverless storage; other cloud providers offer managed external state, such as AWS Step Functions [3], Azure Durable Functions [53], and IBM Composer [8]. Such approaches, however, do not address the data-shipping problem and its associated network and memory overheads.

Container initialization times have been reduced to mitigate the cold-start problem, which can contribute several seconds of latency with standard containers [36, 72, 83]. SOCK [61] improves the container boot process to achieve cold starts in the low hundreds of milliseconds; Catalyzer [25] and SEUSS [16] demonstrate snapshot and restore in VMs and unikernels to achieve millisecond serverless cold starts. Although such reductions are promising, the resource overhead and restrictions on sharing memory in the underlying mechanisms still remain.

2.2 Potential of Software-based Isolation

Software-based isolation offers memory safety with initialization times and memory overheads up to two orders of magnitude lower than containers and VMs. For this reason, it is an attractive starting point for serverless isolation. However, software-based isolation alone does not support resource isolation or efficient in-memory state sharing.

It has been used in several existing edge and serverless computing systems, but none address these shortcomings. Fastly’s Terrarium [28] and Cloudflare Workers [22] provide memory safety with WebAssembly [35] and V8 Isolates [34], respectively, but neither isolates CPU or network use, and both rely on data shipping for state access; Shredder [92] also uses V8 Isolates to run code on a storage server, but does not address resource isolation, and relies on co-locating state and functions on a single host. This makes it ill-suited to the level of scale required in serverless platforms; Boucher et al. [14] show microsecond initialization times for Rust microservices, but do not address isolation or state sharing; Krustlet [54] is a recent prototype using WebAssembly to replace Docker in Kubernetes, which could be integrated with Knative [33]. It focuses, however, on replicating container-based isolation, and so fails to meet our requirement for in-memory sharing.

Our final non-functional requirement is for multi-language support, which is not met by language-specific approaches to software-based isolation [11, 27]. Portable Native Client [23] provides multi-language software-based isolation by targeting a portable intermediate representation, LLVM IR, and hence meets this requirement. Portable Native Client has now been deprecated, with WebAssembly as its successor [35].

WebAssembly offers strong memory safety guarantees by constraining memory access to a single linear byte array, referenced with offsets from zero. This enables efficient bounds checking at both compile- and runtime, with runtime checks backed by traps. These traps (and others for referencing invalid functions) are implemented as part of WebAssembly runtimes [87]. The security guarantees of WebAssembly are well established in existing literature, which covers formal verification [84], taint tracking [31], and dynamic analysis [45]. WebAssembly offers mature support for languages with an LLVM front-end such as C, C++, C#, Go, and Rust [49], while toolchains exist for TypeScript [10] and Swift [77]. Java bytecode can also be converted [7], and further language support is possible by compiling language runtimes to WebAssembly, e.g. Python, JavaScript, and Ruby. Although WebAssembly is restricted to a 32-bit address space, 64-bit support is in development.
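
To make the bounds-checking guarantee concrete, the following sketch shows the check that conceptually guards every load and store into linear memory; production runtimes typically emit such checks at compile time or rely on guard pages, so this is an illustration of the semantics rather than any particular runtime’s implementation.

    // Conceptual sketch of WebAssembly's linear-memory safety check.
    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    struct LinearMemory {
        std::vector<uint8_t> bytes;  // the module's only addressable memory

        uint8_t load(uint32_t offset) const {
            // Any access beyond the current memory size traps.
            if (offset >= bytes.size()) {
                throw std::runtime_error("out-of-bounds access: trap");
            }
            return bytes[offset];
        }
    };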

The WebAssembly specification does not yet include mechanisms for sharing memory; therefore, it alone cannot meet our requirements. There is a proposal to add a form of synchronized shared memory to WebAssembly [85], but it is not well suited to sharing serverless state dynamically, because all shared regions must be known at compile time. It also lacks an associated programming model and provides only local memory synchronization.

The properties of software-based isolation make it a compelling alternative to containers, VMs, and unikernels, but none of these approaches meet all of our requirements. We therefore propose a new isolation approach to enable efficient serverless computing for big data.
