Wasmtime is a standalone runtime for WebAssembly, running code compiled to wasm outside the web browser

A Bytecode Alliance project with over 7.5k stars and a wide developer community


Guide | Contributing | Website | Chat

Installation

The Wasmtime CLI can be installed on Linux and macOS with a small install script:

$ curl https://wasmtime.dev/install.sh -sSf | bash

Windows users, or anyone who prefers a manual install, can download installers and binaries directly from the GitHub Releases page.

Example

If you've got the Rust compiler installed then you can take some Rust source code:

fn main() {
    println!("Hello, world!");
}

and compile/run it with:

$ rustup target add wasm32-wasi
$ rustc hello.rs --target wasm32-wasi
$ wasmtime hello.wasm
Hello, world!

Features

  • Lightweight. Wasmtime is a standalone runtime for WebAssembly that scales with your needs. It fits on tiny chips as well as makes use of huge servers. Wasmtime can be embedded into almost any application too.

  • Fast. Wasmtime is built on the optimizing Cranelift code generator to quickly generate high-quality machine code at runtime.

  • Configurable. Whether you need to precompile your wasm ahead of time, generate code blazingly fast with Lightbeam, or interpret it at runtime, Wasmtime has you covered for all your wasm-executing needs.

  • WASI. Wasmtime supports a rich set of APIs for interacting with the host environment through the WASI standard.

  • Standards Compliant. Wasmtime passes the official WebAssembly test suite, implements the official C API of wasm, and implements future proposals to WebAssembly as well. Wasmtime developers are intimately engaged with the WebAssembly standards process all along the way too.

Language Support

You can use Wasmtime from a variety of different languages through embeddings of the implementation:

  • Rust - the wasmtime crate
  • C - the wasm.h, wasi.h, and wasmtime.h headers
  • C++ - the wasmtime-cpp repository
  • Python - the wasmtime PyPI package
  • .NET - the Wasmtime NuGet package
  • Go - the wasmtime-go repository
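
As a quick illustration of the Rust embedding, here is a minimal sketch against the 0.36-era wasmtime crate (the exact API surface is version-sensitive, and the anyhow crate is assumed for error handling):

use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    // The default `wat` feature lets Module::new accept text-format wasm.
    let module = Module::new(
        &engine,
        r#"(module
              (func (export "add") (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add))"#,
    )?;
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
    println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
    Ok(())
}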

Documentation

📚 Read the Wasmtime guide here! 📚

The wasmtime guide is the best starting point to learn about what Wasmtime can do for you or to help answer your questions about Wasmtime. If you're curious about contributing to Wasmtime, it can also help you do that!


It's Wasmtime.

Issues

Collection of the latest Issues

jlkiri · 4 comments

If I understand correctly, the stdout method on WasiCtxBuilder can be used to set what the instance will treat as stdout. Whatever is passed there needs to implement the WasiFile trait, and I discovered that File from the cap-std crate implements it. It also allows creating cap-std files from ordinary std files. I guessed that I can then create an ordinary file, and then through some conversions turn it into a WasiCtxBuilder-usable stdout target. Since all these methods own the file, I obtain the raw fd, which I then use to open the file again and read its contents. However, it appears to be empty, although I do print text in the Rust program compiled to wasm. Note that the execution itself is successful - I just don't see any side effects. Inheriting stdio also works properly. Here's the code that I compile to wasm:

and here's what I use to execute it in wasmtime:

Could it be that raw_fd is no longer valid at that point?
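
For reference, the host-side setup being described looks roughly like this (a sketch with an invented path, not the code from the report; the cap-std and wasi-common APIs here are version-sensitive):

use wasmtime_wasi::sync::WasiCtxBuilder;
use wasmtime_wasi::WasiCtx;

fn build_wasi_ctx() -> std::io::Result<WasiCtx> {
    // Create an ordinary file, wrap it as a cap-std file, then as a
    // WasiFile the builder can use as the guest's stdout.
    let std_file = std::fs::File::create("guest-stdout.txt")?;
    let cap_file = cap_std::fs::File::from_std(std_file);
    let wasi_file = wasmtime_wasi::sync::file::File::from_cap_std(cap_file);
    Ok(WasiCtxBuilder::new()
        .stdout(Box::new(wasi_file))
        .build())
}

One way to sidestep the raw-fd question while debugging is to drop the store after execution and reopen the file by path to read it back.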

cfallin · cranelift · 1 comment

We currently have some runtests that skip some platforms (x64, aarch64, or s390x) due to unsupported lowerings. In general, we should support the same set of features everywhere. To the extent that we already have runtests for various corners of CLIF, we should look at each runtest and, where a platform is not included, work out why (missing lowerings or something else) and fix it.

(From discussion on #4139.)

cfallin · isle · 1 comment

While discussing the ISLE compiler and the backend-verification project with @avanhatt and others today, the point arose that we don't actually have functional tests for ISLE in isolation; rather, Cranelift itself is our "testsuite". We do test that the ISLE compiler accepts some "good" inputs, rejects some "bad" inputs, and produces buildable and linkable Rust code for other inputs, but we never run that code with test values.

We should add tests that check various basic aspects of functionality, like rule prioritization, invocation of chains of rules, correct pattern-matching and destructuring, and the like. This will be especially useful if/when we improve the ISLE compiler with more advanced optimizations, like inlining, trie reordering, or the like.
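
A hedged sketch of what one such test could look like, assuming a build step that compiles a small test .isle file to Rust (the file name, the TestContext type, and the classify term are all hypothetical; ISLE does generate one constructor_<term> function per constructor term):

// Hypothetical: test_rules.isle defines a `classify` term with a
// priority-1 rule matching 42 and a lower-priority fallback, compiled
// by build.rs into OUT_DIR/test_rules.rs.
mod generated {
    include!(concat!(env!("OUT_DIR"), "/test_rules.rs"));
}

#[test]
fn rule_priority_is_respected() {
    // TestContext implements the generated Context trait.
    let mut ctx = TestContext::default();
    // The higher-priority rule should match first for its input...
    assert_eq!(generated::constructor_classify(&mut ctx, 42), Some(1));
    // ...and the fallback should handle everything else.
    assert_eq!(generated::constructor_classify(&mut ctx, 7), Some(0));
}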

cfallin · enhancement · 0 comments

We should update our documentation in cranelift/docs at least in the following ways:

  • We should make a pass to ensure that outdated concepts related to the old backend framework, like encodings and recipes, are removed;
  • We should fill out documentation about the new backend framework's architecture, beyond the doc-comments in the source (but likely borrowing from them);
  • We should provide more "purpose-directed guides":
    • How to add a new backend (#4126);
    • How to add a new instruction to CLIF;
    • How to add a new instruction to a machine backend;
    • How to add a new lowering from CLIF to a machine backend;
    • Strategies to debug codegen issues (minimization, diffing output, using "optimization fuel" (#4134)).

cfallin · enhancement · 0 comments

We should rigorously test the compiler with a randomized testing strategy one might call "chaos mode". The general idea is to introduce perturbations (deterministically derived from some seed to allow bug reproduction) into various decision points within the compiler, where these perturbations are either "worst-case conditions" or just randomness.

The first step is to build the "control plane", or a bit of instrumentation threaded through the compiler to allow centralization of certain decisions:

  • Request fuel to perform some optimization or transform, skipping if no fuel left (allows bisecting for origin of bugs);
  • Request a "random choice" among some options based on the seed, or based on external user input (later).
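
A minimal sketch of what such a control plane might look like (all names are invented for illustration; a real one would be threaded through the compilation context):

/// Hypothetical control plane: seeded determinism plus optimization fuel.
pub struct ControlPlane {
    state: u64, // xorshift state, derived from the fuzz seed
    fuel: Option<u64>,
}

impl ControlPlane {
    pub fn new(seed: u64, fuel: Option<u64>) -> Self {
        Self { state: seed | 1, fuel } // | 1 keeps the xorshift state nonzero
    }

    /// Deterministic pseudo-random choice among `n > 0` options.
    pub fn choose(&mut self, n: u64) -> u64 {
        // xorshift64: cheap, deterministic, reproducible from the seed.
        self.state ^= self.state << 13;
        self.state ^= self.state >> 7;
        self.state ^= self.state << 17;
        self.state % n
    }

    /// Request fuel for one optimization; returns false if exhausted,
    /// letting the caller skip the transform (enables bisecting for bugs).
    pub fn take_fuel(&mut self) -> bool {
        match &mut self.fuel {
            None => true, // unlimited
            Some(0) => false,
            Some(f) => {
                *f -= 1;
                true
            }
        }
    }
}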

Then we thread this through the passes, lowering backend, ABI code, and possibly even regalloc2, and use it in interesting ways:

  • Clobber any state at the machine level that is not supposed to be defined:
    • Unused registers (as per regalloc2);
    • Volatile (caller-saved) registers in function prologues;
    • Put magic values in callee-saved registers around calls, and check them;
    • Mangle the upper bits of narrow types in wider registers (u8/16/32 in a 64-bit GPR, for example);
  • Insert randomness into the regalloc's input and its decisions:
    • Add "ghost clobbers" that randomly allocate a tmp and clobber it, or "ghost uses" that random use a value, forcing it into a register;
    • Split liveranges randomly, and add new (extraneous but legal) constraints randomly, causing e.g. spills or new moves;
    • Disable the redundant-move eliminator;
    • Randomize the resolution of parallel moves into sequentialized moves;
  • Randomize and constrain control flow:
    • Randomize basic-block order;
    • Randomize whether or not MachBackend simplifies/collapses branches;
    • Randomly insert branch islands even when not needed;
  • Randomize instruction selection:
    • Mark some ISLE rules as "redundant" (expressing better lowerings, but covered by other rules) and randomly skip them dynamically with a predicate;
    • Randomly "hide" operand subtrees by not returning the source instruction for a Value to the matcher;
    • Randomly apply higher or lower priority to some rules (we will need to work out how to make this dynamic -- perhaps by duplicating a rule at two priorities and selecting with a predicate).

Once we have these mutations and the control plane to drive them, we can run them in fuzzing by taking the seed from the fuzz input. Because the control plane is deterministic, bugs should reproduce as per normal. We should evaluate the effectiveness by looking at fuzz coverage and devise new perturbations as necessary.

This should produce more efficient fuzzing than the current end-to-end compilation testing because it controls degrees of freedom directly: it allows the fuzzer to mutate state internal to the compiler without finding the inverse function for the pipeline in front of that state and indirectly mutating it.

cfallin · enhancement · 2 comments

In some cases, it is possible to perform better optimizations and instruction selection if one knows that only part of a value is needed (demanded bits). At other times, it is possible to do better if one knows that the operation or instruction actually generating a value sets more bits than are necessary in the destination register (defined-bits).

We should develop these analyses and use them during optimization and during lowering. Some examples:

  • Bitmasks can be simplified or elided when demanded-bits allows: for example, and x, 1 with only the LSB demanded is just x.
  • An extend operator can be elided if the upper (extended-into) bits are not demanded.
  • An extend operator can be elided if the upper bits are already actually defined by the chosen producer instruction(s).

The demanded-bits analysis should be at the CLIF level as it is a machine-independent concept (what bits do the CLIF semantics of the use(s) actually depend on). The defined-bits analysis is fundamentally a lowered-MachInst-level concept, as it has to do (in our case, since no bits within the type are undefined) with bits above a value in a register (e.g., the upper 32 for an I32 in a 64-bit register).
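
As a rough sketch of the demanded-bits direction, here is a backward transfer function over a couple of illustrative operators (the Op shape is invented; a real analysis would walk CLIF instructions):

// Given the bits demanded of an instruction's result, compute the bits
// demanded of its operand (backward transfer; illustrative cases only).
enum Op {
    BandImm { mask: u64 }, // band x, k
    Uextend32To64,         // uextend.i64 of an i32
    Other,
}

fn demanded_operand_bits(op: &Op, result_demand: u64) -> u64 {
    match op {
        // band x, k: operand bits cleared by k can never matter. If the
        // mask covers all demanded bits (e.g. `and x, 1` with only the
        // LSB demanded), the mask itself can be elided.
        Op::BandImm { mask } => result_demand & mask,
        // uextend: the upper (extended-into) bits never come from the
        // source, so only the low 32 bits of the demand propagate.
        Op::Uextend32To64 => result_demand & 0xffff_ffff,
        // Conservative default: any demanded result bit may depend on
        // every operand bit.
        Op::Other => {
            if result_demand != 0 {
                u64::MAX
            } else {
                0
            }
        }
    }
}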

cfallin · enhancement · 0 comments

When using explicit bounds checks for "dynamic" heaps, Cranelift-generated code currently includes a comparison and conditional branch for every access (as well as the Spectre-mitigation conditional move, if enabled).

In some cases, we should be able to prove that subsequent accesses are "superseded" by the first: they occur at the same or earlier addresses. In such cases, we can reach the second access only if the first bounds check succeeded (or else we would have trapped), and heaps never shrink, so the second check is unnecessary.

We should develop such an analysis initially with a simple "same index SSA-value" test, and later enhance it with a value range analysis.
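
A minimal sketch of that first-cut test (the types are stand-ins, and dominance is taken as given):

type Value = u32; // stand-in for a CLIF SSA value id

struct HeapAccess {
    index: Value, // dynamic index operand
    offset: u32,  // static offset added to the index
    size: u32,    // access size in bytes
}

// A later check is redundant if a dominating access used the very same
// index SSA value and covered an equal or higher end address: we can
// only reach the later access if the earlier check passed (else we
// trapped), and heaps never shrink.
fn later_check_redundant(earlier: &HeapAccess, later: &HeapAccess, earlier_dominates: bool) -> bool {
    earlier_dominates
        && earlier.index == later.index
        && later.offset + later.size <= earlier.offset + earlier.size
}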

cfallin · enhancement · 0 comments

We should build a simple alias analysis over CLIF to determine when loads and stores are referring to overlapping (may-alias) or the same (must-alias) data, and allow optimizations as appropriate (redundant-load elimination, store-to-load forwarding, dead-store elimination, code motion in general in LICM or lowering).

It is likely we will want to build a bitset-style analysis, where we have a small, finite set of disjoint divisions of the heap, and every load or store is marked as potentially accessing one or more.

For example, one could imagine a set of four bits for a Wasm frontend: accesses to any heap, accesses to any table, accesses to any global, or everything else (and explicitly not one of the above).

Given these bits for every load/store, we then have a "vector color" of sorts, in the sense that the current instruction coloring scheme could be seen to have a color per disjoint segment of world-state. We don't actually necessarily want to compute these vectors of colors, but we want to be able to answer questions using them.
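
A minimal sketch of the four-bit example above (names illustrative):

// Each load/store gets tagged with the disjoint divisions of world-state
// it may touch; the four categories follow the Wasm-frontend example.
#[derive(Clone, Copy)]
struct AliasSet(u8);

impl AliasSet {
    const HEAP: AliasSet = AliasSet(0b0001);   // any heap access
    const TABLE: AliasSet = AliasSet(0b0010);  // any table access
    const GLOBAL: AliasSet = AliasSet(0b0100); // any global access
    const OTHER: AliasSet = AliasSet(0b1000);  // everything else

    // Two memory operations may alias only if their category sets
    // intersect; e.g. a heap load never aliases a table store, so the
    // load may move across the store.
    fn may_alias(self, other: AliasSet) -> bool {
        self.0 & other.0 != 0
    }
}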

cfallin · enhancement · 0 comments

As part of #4128, we will need a way to write ISLE rules that replace CLIF values with new CLIF values, insert new CLIF instructions, and delete old CLIF instructions. It might also make sense to allow replacement of single instructions with bounded (single-in, single-out) control-flow shapes, as in Hoopl.

This work will consist at least of:

  • Developing a driver framework, similar to but distinct from the CLIF-to-MachInst lowering driver, for both forward and backward transform passes to traverse the input;
  • Making the generated CLIF-instruction extractor prelude compatible with this framework as well;
  • Generating CLIF-instruction constructors;
  • Developing an idiomatic way of writing a toplevel constructor entry-point that replaces one input value/instruction with some output.

cfallin · enhancement · 2 comments

As part of #4128, we should develop a lazy analysis framework that permits us to easily compute properties of CLIF values, operators, or program points between operators (depending on the analysis). This framework should accept pluggable lattice types, lattice meet functions, and transfer functions, ideally permitting these to be written in ISLE.

We should look into the following ideas:

  • Perhaps it is possible to avoid an iterative dataflow-analysis fixpoint by not supporting cyclic analysis at all. Many analyses can't say something meaningful about cycles anyway, at least when the cycle exists due to a non-redundant blockparam (i.e., actually multiple reaching definitions). If we stop at cycles (and immediately shortcut to the lattice "bottom" value), we can lazily go up the DAG of operators, memoizing analysis results per Value as we come back down.

    • Note that supporting fixpoint analysis of forward-dataflow properties is difficult with CLIF in any case because we don't have use-lists per Value, so we can't propagate forward unless we build that information on the side.
  • Perhaps it is possible to co-mingle the analysis with mutations performed by a mid-end optimization framework by either reasoning about invalidations, or else constraining rewrites in a pass of the appropriate direction (e.g., an analysis that "looks upward" and stops at cycles is always safe to lazily complete while doing a downward pass, as long as one sees defs before uses).
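
A hedged sketch of the lazy, cycle-cutting memoization from the first bullet above (the lattice type, bottom value, operand lookup, and transfer function are the pluggable pieces; all names are illustrative):

use std::collections::HashMap;

type Value = u32; // stand-in for a CLIF Value id

enum Memo<L> {
    InProgress,
    Done(L),
}

/// Lazily compute an "upward-looking" analysis over the operator DAG,
/// memoizing per Value and shortcutting straight to `bottom` on cycles
/// (i.e., blockparams with multiple reaching definitions), rather than
/// iterating to a fixpoint.
fn analyze<L: Copy>(
    v: Value,
    memo: &mut HashMap<Value, Memo<L>>,
    bottom: L,
    operands: &dyn Fn(Value) -> Vec<Value>,
    transfer: &dyn Fn(Value, &[L]) -> L,
) -> L {
    match memo.get(&v) {
        Some(Memo::Done(l)) => return *l,
        Some(Memo::InProgress) => return bottom, // hit a cycle: give up immediately
        None => {}
    }
    memo.insert(v, Memo::InProgress);
    let args: Vec<L> = operands(v)
        .into_iter()
        .map(|a| analyze(a, memo, bottom, operands, transfer))
        .collect();
    let result = transfer(v, &args);
    memo.insert(v, Memo::Done(result));
    result
}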

cfallin · enhancement · 2 comments

As part of a general push to improve the quality of generated code, we want to extend the set of optimizations that we perform on the IR (CLIF) beyond the current set of GVN, LICM, DCE, etc passes, and develop some general analyses to support these and the CLIF-to-MachInst lowering pass. For example, we may want:

  • Alias analysis, and redundant load elimination, store-to-load forwarding, and dead store elimination using its results;
  • More general constant propagation and folding (compile-time evaluation) using the CLIF interpreter;
  • Bounds-check elimination, finding when one check dominates another and makes it unnecessary;
  • Integer range analysis, to help with bounds-check elimination;
  • Demanded-bits and defined-bits analyses, either over the CLIF or in concert with the lowering (since at least defined-bits may depend on the instructions chosen);
  • and others.

We should generally strive to write these analyses and transforms with the pass-specific bits in ISLE. This will bring both short- and long-term benefits:

  • Pattern-matching is a very good fit for the sorts of transfer/meet functions that many analyses require, allowing for concise and less error-prone expression of the ideas;
  • The optimizations that allow for efficient merging of many different matching rules during lowering would also bring benefits to any complex transfer function in an analysis;
  • Building passes out of ISLE rules that analyze CLIF and transform CLIF to CLIF lets us eventually fuse this mid-end with the CLIF-to-MachInst lowering pass;
  • Putting everything into the DSL in which we express our backend lowering rewrites allows us to reuse whatever formal-verification machinery we build around it.

The main steps of this work will be:

  • Develop a lazy analysis framework over CLIF, with transfer/meet functions in ISLE;
    • #4129
  • Develop a toplevel generic transform pass driver that edits CLIF in-place, using logic written in ISLE with the same CLIF extractors and constructors as for backends;
    • #4130
  • Co-develop, with the above, several initial passes (e.g. alias analysis and redundant-load elimination).
    • #4131

Possibly, if we can, it may be interesting to also:

  • Find ways to fuse analyses and passes in the mid-end, possibly enabling a lightweight combination of mid-end passes that stream over CLIF once and emit new CLIF, rather than editing in place;
  • Investigate whether we can fuse the above with the backend lowering.

These last two steps are intentionally vague and raise questions of pass direction and pass ordering, but I suspect that we may be able to get down to a handful of passes if we are careful. In any case, finding a way to make this work would be a bonus; the main benefits of the work in this issue overall are ease of development, better likelihood of correctness, and compatibility with verification efforts.

cfallin · enhancement · 3 comments

In the 2022 roadmap, we described the need to add inlining to Cranelift. This need comes mainly from anticipated future workloads in some Cranelift applications. For example, when used as a Wasm backend, multi-module use-cases will become more common as the component model becomes a reality. In such use-cases, no inlining would have been done by the Wasm toolchain; execution in Wasmtime/Cranelift is the first time that modules "meet" and can cross-inline calls, or at least interface-type adapter shim functions.

We will likely want to be a little careful about the inlining heuristic and associated costs, in order to preserve our general "fast compilation" focus. Perhaps we want to only inline explicitly-marked-as-inlinable functions. Or perhaps a bottom-up-over-callgraph-SCCs approach (like LLVM) is still the right one, but with a low cutoff threshold for function size to inline.
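
As a rough sketch of the decision point under the second heuristic (the cutoff value and hint flag are made up for illustration):

// Visiting the call graph bottom-up over SCCs means a callee's final,
// post-inlining size is already known when a caller considers it.
fn should_inline(callee_size_in_insts: usize, marked_inline_hint: bool) -> bool {
    // A deliberately low cutoff, preserving the fast-compilation focus.
    const SIZE_CUTOFF: usize = 25;
    marked_inline_hint || callee_size_in_insts <= SIZE_CUTOFF
}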

cfallin · enhancement · 0 comments

Currently, adding a new backend to Cranelift entails at least the following:

  • Defining an "assembler library" centered on a new MachInst type, with all the tedious and error-prone work around that (see #4125);
  • Defining the register set;
  • Defining glue constructors in ISLE to generate these instructions;
  • Defining lowering rules in ISLE;
  • Implementing top-level driver logic for the ISLE backend;
  • Implementing a bunch of miscellaneous traits and types, such as the LabelUse relocation framework;
  • Implementing an ABI binding;
  • Implementing unwind info and debuginfo specifics for this platform;
  • Implementing whatever is necessary in the Cranelift embedder (e.g. for Wasmtime, at least the fiber support, trap-handling details, and object-file details like relocations).

While some of this is unavoidable, we should strive as much as possible to factor out the commonalities, centralize things otherwise, and write documentation walking through the whole process.

In particular:

  • Solving #4125 would allow the backend author to follow a declarative approach to sketch the instruction format, then "chase the type errors" to fill out the emission details, without too much fear of mistakes in the glue;
  • Generating ISLE constructors for the machine instructions automatically would help a lot;
  • Providing "default implementations" for a lot of the MachBackend trait, and factoring out the rest into smaller traits, would eliminate a lot of the duplication that currently exists.

cfallin · enhancement · 1 comment

We currently define an enum to represent machine instructions for each backend. This is fairly high-level and generally nice, allowing us to use Rust types to model arguments and modes, etc.

However, there is a lot of "glue" in the layer of code that supports doing things with these enums that is very tedious and error-prone to write:

  • Each instruction enum has a get_operands that must list every Reg as an early/late def/use/mod or tmp (and missing one is a bug);
  • Each instruction enum has an emit that must use an AllocationConsumer to take regalloc results in the same order as get_operands provided vregs, and mismatching the two is a subtle and hard-to-catch bug;
  • Each instruction enum has a pretty-printing implementation, with similar AllocationConsumer usage constraints, and also with very repetitive code ("print the operand and these three regs");
  • There are a number of other kinds of metadata (is a move, is a safepoint) that are fairly mechanical matches over the enum;
  • Possibly others (but the above are the main error-prone bits).

Ideally, we could allow the backend author to define a new "assembler library" by:

  • Writing the enum of instruction formats, with decorations/annotations for "reg use/def" and other metadata;
  • Providing an implementation of a generated trait for emission with one method per instruction format (enum arm), with typesafe named arguments for registers (with generated code matching up allocations, avoiding any errors);
  • Either specifying a "default" for pretty-printing (opcode, Display trait for other bits, registers?) or else using a similar trait approach as for emission.

Then we could generate the rest: essentially the whole implementation of the MachInst trait, or at least most of it.

There has been some informal discussion about whether expanded information about instructions should be in toml/yaml-type files or something else; I believe it might actually be better to keep this in the ISLE, with some sort of annotation syntax, both to reduce the cognitive load of languages that must be learnt, and to keep as few sources-of-truth around as possible.
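
To make the idea concrete, here is a hypothetical sketch of such an annotated enum; the derive macro and the #[reg(...)] attributes are invented for illustration and do not exist today:

// The code (hypothetically) generated from these annotations would
// include get_operands, the emission glue that matches up allocations
// in order, and a default pretty-printer.
#[derive(MachInstGlue)] // hypothetical derive
enum Inst {
    AluRRR {
        op: AluOp,
        #[reg(def)]
        rd: WritableReg,
        #[reg(use)]
        rn: Reg,
        #[reg(use)]
        rm: Reg,
    },
    Load {
        #[reg(def)]
        rd: WritableReg,
        #[reg(use)]
        addr: Reg,
        offset: i32,
    },
}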

cfallin · enhancement · 0 comments

Currently, in both the x64 and aarch64 backends, we generate flags "locally" whenever needed. So, for example, a sequence in which the result of a compare on v0 is used twice would generate the compare for v0 twice.

This is generally a simplification over the old iflags approach, which we prefer for some of the reasons described in #3249. In particular, iflags-typed values are "weird" (cannot be stored or loaded, only one can be live at a time, cannot be directly observed) and these restrictions complicate other analyses/transforms. The tradeoff of effectively regenerating them on each use has been reasonable so far.

However, locally in cases where we do use the results of a compare more than once, we should be able to share a single compare operation. We might be able to reason about this by building a forward pass that tracks the last generated flags and using this information from within the backend's pattern-matching. For extra credit, we might be able to factor this information into the "unique use" framework, allowing a compare with multiple uses but only one codegen'd occurrence to merge load operations directly on x64.

cfallin · enhancement · 0 comments

The code generation for (i) branches on booleans, and (ii) branches on integer values that come from compares but are not directly observable (e.g. in a different basic block), is suboptimal. We often see:

  • Masking, like AND reg, 1, because we pessimistically assume that the upper bits of a boolean value are undefined;
  • cmp, setcc (x64) / cset (aarch64) into a register followed by a conditional branch sometime later;
  • A combination of the above two.

The root causes are:

  • We do not pattern-match far enough back, in some cases, to fuse the brz and icmp at the Cranelift level, and this is exacerbated by GVN and LICM that hoist icmps earlier in the function;
  • We do not have combination patterns that recognize when some producers of bools-as-integers will actually define the high bits.

Some combination of more aggressive pattern matching and demanded-bits analysis could improve the codegen in these cases.

pepyakin · 6 comments

wasmtime right now has the fuel mechanism. It allows precise control of how many instructions are executed before suspending execution, at least at basic-block granularity. The price is a rather significant performance hit.

I believe we can do better in the case where the user only wants to know whether execution exceeded some fuel limit. The cost to pay is that the condition is detected with a delay, i.e. with some slack. Therefore, this works like the fuel metering mechanism only if there are no side effects when the execution reaches its deadline. For instance, all changes performed are rolled back if a DB transaction exceeds its fuel allowance.

The idea is to take the existing fuel mechanism, remove the compare & conditional jumps, and check the out-of-fuel condition asynchronously.

The tricky part is the asynchronous fuel variable checking and suspending the execution.

We could leverage HotSpot-style safepoints to implement this. Such a safepoint could be implemented simply as an access to a page, and suspending the execution could be triggered by mprotect-ing that page. Similarly to epoch interruption, we could place safepoints at function entries and loop backedges. We would need to extract the fuel counter while handling the page-fault signal; we assume that the codegen thoughtfully left us a mapping of where the fuel counter is saved: either a register or an offset for a spilled variable. Having recovered the fuel counter, we'd check if it exceeded the limit and yield if so.
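
A minimal sketch of the page mechanics (assumes a Unix target and the libc crate; the signal handler and fuel-counter recovery are omitted):

use std::ptr;

/// Sketch of a HotSpot-style safepoint page. JIT'd code would load from
/// this page at function entries and loop backedges; arming it makes
/// the next safepoint load fault, and the SIGSEGV handler can then
/// inspect the saved fuel counter and decide whether to yield.
struct SafepointPage {
    addr: *mut libc::c_void,
    len: usize,
}

impl SafepointPage {
    fn new() -> Self {
        let len = 4096;
        let addr = unsafe {
            libc::mmap(
                ptr::null_mut(),
                len,
                libc::PROT_READ,
                libc::MAP_PRIVATE | libc::MAP_ANONYMOUS,
                -1,
                0,
            )
        };
        assert_ne!(addr, libc::MAP_FAILED);
        SafepointPage { addr, len }
    }

    /// Called (e.g. from a watchdog thread) when the fuel deadline may
    /// have passed: the next safepoint load will now trap.
    fn arm(&self) {
        unsafe { libc::mprotect(self.addr, self.len, libc::PROT_NONE) };
    }

    fn disarm(&self) {
        unsafe { libc::mprotect(self.addr, self.len, libc::PROT_READ) };
    }
}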

I realized that this is not trivial to benchmark. I tried removing fuel_check, but that would lead to DCE of the now unused fuel value. I would appreciate suggestions on how to benchmark that properly.

  • Does this sound sensible? Am I missing something obvious?
  • Is this something that could be considered to be accepted upstream?
YannikSc · bug · 4 comments

Note: I'm not sure if my implementation is correct, so please correct me if my implementation or expectations are wrong.

Test Case

Example project is available here

Steps to Reproduce

  • Create a wasmtime project with wasi support (wasmtime_wasi)
  • Build the WasiCtx with inherit_env
  • Run a function a couple times
  • Leak memory

Expected Results

It might be a user issue, as I'm not sure if my implementation is correct, but I would assume that it is.

In this case my expectation would be that the memory usage at the end is roughly the same as at the start.

Actual Results

The memory size at the end is (for 2000 function calls in my example) ~10 times bigger than at the beginning.

Versions and Environment

Wasmtime & Wasmtime_wasi version: 0.36.0

uname -srm: Linux 5.17.5-zen1-1-zen x86_64

edit: This also happens if the env method is used

avanhatt · enhancement · 0 comments

Feature:

I'm working on a PR to add a flag to optionally not expand internal constructors.

As discussed in an ISLE-verification-focused meeting earlier this week, we'd like to be able to add semantic annotations to terms that are defined with internal extractors. That is, if we have a term like:

We would like to be able to refer to isub in annotations. Currently, this is not possible, because the sema::Pattern representation already has the internal extractor expanded in place (like a macro).

Benefit

(a) This will enable easier downstream verification. (b) From our discussion, ISLE may want to use this option in the codegen phase in some circumstances in the future as well.

Implementation

I have a WIP PR that adds this flag to the TermEnv and uses it in translate_pattern. When the flag is set to false, we can treat the internal extractor like an external one instead of expanding it.

In addition to sema's compile, I'm adding an envs_for_analysis that sets this flag to false and returns the type and term environments. I added a unit test for this but need to improve it (this is the WIP part).

Alternatives

We could choose not to allow annotations on internal extractors; however, this introduces a lot of complexity around the underlying, large enums for CLIF and MInst opcodes.

For example, the actual CLIF isub expands like this, where the inst_data expression is harder to translate to something like SMT:

shamatar · enhancement · 18 comments

Feature

Implement an optimization pass that would eliminate the __multi3 function from the WASM binary during JIT by replacing it with ISA-specific (mainly x86_64 and arm64) sequences, and then inline such sequences into callsites, allowing further optimizations.

Benefit

A lot of code dealing with cryptography would benefit from faster full-width u64 multiplications, which is where __multi3 arises.

Implementation

If someone could give a few hints about where to start, I'd try to implement it myself.

Alternatives

Not that I'm aware of. Patching in a call to some native library function is a huge overhead for modern CPUs (versus about 4 cycles on x86_64 for e.g. mulx or mul), and while it would most likely still be faster, it's far from the optimal case on a hot path.

As an example, a simple multiply-add-carry function like a*b + c + carry -> (high, low) that accumulates into u128 without overflows compiles down to the listing below, and it can be a good test subject (transformed from wasm into wat; may not be the most readable).
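
As a point of reference, the source-level shape of such a multiply-add-carry function in Rust is roughly the following (a reconstruction for illustration, not the reporter's exact code):

/// a*b + c + carry accumulated in a u128, which cannot overflow since
/// (2^64-1)^2 + 2*(2^64-1) < 2^128. On wasm32 targets the u128 multiply
/// lowers to a __multi3 libcall.
fn mac(a: u64, b: u64, c: u64, carry: u64) -> (u64, u64) {
    let wide = (a as u128) * (b as u128) + (c as u128) + (carry as u128);
    ((wide >> 64) as u64, wide as u64) // (high, low)
}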

cfallin · enhancement · 6 comments

In the VCode container, we've stumbled into a fairly handy data-structure design idiom: a single flat pool Vec together with a list of ranges into it, which lets us aggregate allocation overhead into fewer, larger Vecs rather than a bunch of little ones when we have a conceptually-2D (or N-D) array. This pattern can be extended further, for example for outgoing block args, which are 3D (per block, per successor, we have a list): we do this with a rangelist that gives a range in another rangelist that gives ranges in a pooled sequence.

We manage this manually, and even combine the pooled Vec when we can for more goodness. It would be nice to make this less error-prone by wrapping it up in a typesafe wrapper:

  • A core struct that owns the pool (the operands above);
  • A means of composing types to add one or more range-lists on top of that pool;
  • A handle type that temporarily takes ownership, allows appending, and adds a rangelist entry when done, to enforce that sequences must be contiguous in the pooled sequence.

Doing this while allowing pool-sharing will take a little thought, but if we can find a way to do it, it could allow us to reduce allocations further.
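
A minimal sketch of the idiom and the append-handle idea (field and type names are illustrative, not the actual VCode layout):

struct Operand(u32); // stand-in for the real operand type

struct OperandPool {
    operands: Vec<Operand>,  // the single shared pool
    ranges: Vec<(u32, u32)>, // per-instruction (start, end) into `operands`
}

impl OperandPool {
    /// Append one instruction's operands contiguously; the typesafe
    /// handle described above would enforce this contiguity by
    /// temporarily taking ownership during the append.
    fn push_inst(&mut self, ops: impl IntoIterator<Item = Operand>) -> usize {
        let start = self.operands.len() as u32;
        self.operands.extend(ops);
        let end = self.operands.len() as u32;
        self.ranges.push((start, end));
        self.ranges.len() - 1
    }

    fn inst_operands(&self, inst: usize) -> &[Operand] {
        let (start, end) = self.ranges[inst];
        &self.operands[start as usize..end as usize]
    }
}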

cfallin · enhancement · 0 comments

The initial switchover to regalloc2 retains the mechanism by which calls signal clobbered physical registers, namely, via phantom defs (defs that are immediately dead and are constrained to those regs). regalloc2 has an explicit mechanism for clobbers which is a bit more precise and possibly ever-so-slightly faster (skips liveness analysis); we should use that instead. No functional correctness implications either way.

Versions


v0.36.0 - Apr 20, 2022

v0.35.3 - Apr 11, 2022

v0.35.2 - Mar 31, 2022

v0.34.2 - Mar 31, 2022

v0.35.1 - Mar 09, 2022

v0.35.0 - Mar 07, 2022

v0.34.1 - Feb 16, 2022

v0.33.1 - Feb 16, 2022

v0.34.0 - Feb 08, 2022

v0.33.0 - Jan 05, 2022

v0.32.1 - Jan 04, 2022

v0.32.0 - Dec 13, 2021

v0.31.0 - Oct 29, 2021

v0.30.0 - Sep 17, 2021

v0.29.0 - Aug 02, 2021

v0.28.0 - Jun 09, 2021

v0.27.0 - May 21, 2021

v0.26.1 - May 21, 2021

v0.26.0 - Apr 05, 2021

v0.25.0 - Mar 16, 2021

v0.24.0 - Mar 05, 2021

v0.23.0 - Feb 18, 2021

v0.22.1 - Jan 19, 2021

v0.22.0 - Jan 08, 2021

v0.21.0 - Nov 05, 2020

v0.20.0 - Sep 30, 2020

v0.19.0 - Jul 16, 2020

v0.18.0 - Jun 25, 2020

v0.16.0 - May 14, 2020

Information - Updated May 13, 2022

Stars: 7.5K
Forks: 682
Issues: 396
