Benchmarks

Here are some benchmarks against similar scripting languages. Cyber is fast and efficient with memory. Although it is currently a non-JIT interpreter, Cyber can still beat JIT runtimes such as Node in some benchmarks (note that the recorded startup time does not include JIT warmup). Java is also included in some benchmarks for fun. Scroll down to Performance to learn what makes Cyber's VM fast.

Each table below lists total time, startup time, and peak memory usage. Startup times under 2ms are shown as "-". Bench times were recorded with hyperfine on an average Linux x64 machine. Versions used in the benchmarks: Node 18.12.1, quickjs 2021-03-27 (src), Python 3.11.1 (src), LuaJIT 2.1.0-beta3 (src), Lua 5.4.4, Ruby 3.1.3 (src), PHP 8.2.0, Perl 5.34, Wren 0.4 (src), Oracle Java 19.0.1, wasm3 0.5.0 (src).
Bench Test: Fibers Start/Resume (source)
This tests spawning fibers and context switching. Cyber does well since its execution context requires very few save/restore ops. Some languages, such as Ruby and PHP, are left out since they allocate a large initial stack for each fiber.
Language      Total    Startup   Peak memory
cyber          25ms       -        21.0 MB
wren           59ms       -        34.1 MB
luajit(jit)    73ms       -        51.0 MB
node(jit)     119ms      54ms      75.3 MB
node          146ms      60ms      71.2 MB
quickjs       175ms       -        33.8 MB
lua           178ms       -       113.3 MB
python3       202ms      13ms      28.8 MB
Bench Test: Recursive Fibonacci (source)
This tests how fast function calls are with a growing call stack. Cyber has an efficient call stack and uses gradual typing to speed up the integer ops. Note that some JIT compilers can unroll the function calls and possibly eliminate an inner recursive call entirely.
Language      Total    Startup   Peak memory
cyber          39ms       -         2.9 MB
java(jit)      44ms      36ms      35.3 MB
wasm3          52ms       -         2.5 MB
node(jit)      73ms      54ms      41.8 MB
php(jit)       77ms      16ms      17.5 MB
lua            82ms       -         2.2 MB
java          101ms      34ms      32.8 MB
quickjs       136ms       -         3.0 MB
wren          149ms       -         2.5 MB
ruby          177ms      75ms      22.3 MB
node          183ms      60ms      41.0 MB
python3       189ms      13ms       8.6 MB
Bench Test: For Range/Iterator (source)
This tests basic iterations with counters and also iterable objects. Cyber has specialized bytecode to make for loops run faster.
Language      Total    Startup   Peak memory
cyber          41ms       -        15.3 MB
lua            72ms       -        18.4 MB
php(jit)       74ms      16ms      34.9 MB
java(jit)      89ms      36ms      66.8 MB
wren          103ms       -        10.2 MB
perl          134ms       -        37.8 MB
node(jit)     141ms      54ms     106.8 MB
ruby          188ms      75ms      30.3 MB
node          223ms      60ms     103.4 MB
quickjs       226ms       -        18.5 MB
python3       239ms      13ms      48.5 MB
Bench Test: Max-heap Insert/Pop (source)
The max-heap was implemented with nodes instead of an array in order to exercise object ops. Cyber uses inline caching and compile-time symbols to speed up object ops.
Language      Total    Startup   Peak memory
luajit(jit)    41ms       -         6.4 MB
cyber          58ms       -         4.9 MB
java(jit)      74ms      36ms      41.4 MB
java           93ms      34ms      33.4 MB
luajit         96ms       -         5.5 MB
node(jit)      98ms      54ms      49.8 MB
python3       161ms      13ms      12.4 MB
lua           185ms       -         5.7 MB
node          223ms      60ms      43.3 MB
wren          224ms       -         3.8 MB
quickjs       252ms       -         6.2 MB

Performance

Several things make Cyber fast, and some of them are highlighted below. In general, Cyber's speed comes from design decisions at the language level as well as in the VM implementation.

Crafty register VM.

Cyber runs on a register-based VM, so most bytecode instructions carry a destination address. This greatly reduces the number of CPU cycles and memory accesses compared to a stack VM. Unlike a physical register file, Cyber can use as many virtual registers as its operand encoding allows, so it never needs to perform costly register spills.

In fact, the registers live on the stack itself. With this design, function calls don't need to copy values back and forth, and fibers can be switched by simply swapping the current stack pointer. All user-defined variables and temporary locals are assigned a dedicated register at compile time, which reduces the number of instructions and keeps the bytecode compact.
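As a rough sketch (in C, with a hypothetical instruction layout rather than Cyber's actual encoding), a register instruction names its operands directly in the call frame, so a single dispatch does the whole operation:

    #include <stdint.h>

    /* Hypothetical 8-byte value; Cyber's real representation is NaN-tagged. */
    typedef uint64_t Value;

    /* Register-style instruction: operands index directly into the current
     * call frame ("registers" living on the stack), so no push/pop traffic. */
    typedef struct { uint8_t op, dst, lhs, rhs; } Inst;

    static inline void exec_add(Value *frame, Inst i) {
        frame[i.dst] = frame[i.lhs] + frame[i.rhs];
    }

    /* A stack VM would need roughly push a; push b; add; store -- several
     * dispatches and memory moves for the same work. */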

Efficient call convention.

Function arguments and return slots are assigned to virtual registers, which lets instructions operate on them directly without pushing to and popping from a stack. The compiler also arranges the return register to feed directly into a parent function call as an argument. This makes composing functions fast, which suits declarative programming styles.

In many dynamic languages, functions and fields are looked up in a hash map. In Cyber, they are indexed into an array by a symbol id, which is much faster. This is possible because Cyber distinguishes between function values and statically declared functions.
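A rough illustration of the difference, using hypothetical names rather than Cyber's internal API: statically declared functions resolve through a dense array indexed by symbol id, whereas a fully dynamic lookup would have to hash the name at runtime.

    #include <stdint.h>

    typedef struct FuncEntry { void *ptr; uint8_t numParams; } FuncEntry;

    /* Statically declared functions get a dense symbol id at compile time,
     * so a call resolves with a single array index... */
    static FuncEntry g_funcTable[256];

    static inline FuncEntry *resolve_static(uint16_t symId) {
        return &g_funcTable[symId];
    }

    /* ...whereas a hash-map lookup would be: hash("fib") -> probe bucket ->
     * compare keys -> entry, every time the call site runs. */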

Inline caching.

Cyber optimizes instructions by patching bytecode at runtime. Object operations tend to involve extra lookups and checks since values can have a dynamic type. By caching the result of the lookup in the bytecode, the VM resolves it faster the next time the instruction runs. In the rare case of a cache miss, the deoptimized path is still quite fast since it uses an MRU table for object types and symbols.
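A simplified sketch of such a cache on a field-access instruction (hypothetical layout; Cyber's actual bytecode and MRU-table fallback differ in detail):

    #include <stdint.h>

    typedef uint64_t Value;
    typedef struct { uint16_t typeId; Value fields[8]; } Object;

    /* The instruction carries a cached type id and slot offset that are
     * written back ("patched") after the first successful lookup. */
    typedef struct {
        uint8_t  op, dst, recv, fieldSym;
        uint16_t cachedTypeId;   /* patched at runtime */
        uint8_t  cachedOffset;   /* patched at runtime */
    } FieldInst;

    /* Stand-in for the slower lookup path (e.g. an MRU table). */
    uint8_t slow_lookup_offset(uint16_t typeId, uint8_t fieldSym);

    static inline Value get_field(FieldInst *i, Object *recv) {
        if (recv->typeId == i->cachedTypeId) {
            return recv->fields[i->cachedOffset];   /* cache hit: one indexed load */
        }
        uint8_t off = slow_lookup_offset(recv->typeId, i->fieldSym);
        i->cachedTypeId = recv->typeId;             /* patch the bytecode in place */
        i->cachedOffset = off;
        return recv->fields[off];
    }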

Compact values and heap objects.

In Cyber, all values are 8 bytes and use NaN tagging to represent either primitive types or pointers to heap objects. A compact value representation simplifies the data structures used in the VM, and makes it easier to align values in memory to take advantage of the CPU cache.
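A condensed C sketch of NaN tagging; the exact tag bits here are illustrative and not necessarily the ones Cyber uses. Real doubles are stored as-is, while pointers and small payloads are packed into the unused bits of a quiet NaN:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    typedef uint64_t Value;   /* every value is 8 bytes */

    #define QNAN      UINT64_C(0x7ffc000000000000)
    #define PTR_TAG   UINT64_C(0x8000000000000000)

    static inline Value from_double(double d) {
        Value v; memcpy(&v, &d, sizeof v); return v;
    }
    static inline bool  is_double(Value v) { return (v & QNAN) != QNAN; }
    static inline Value from_ptr(void *p) {
        return QNAN | PTR_TAG | (uint64_t)(uintptr_t)p;
    }
    static inline void *to_ptr(Value v) {
        return (void *)(uintptr_t)(v & ~(QNAN | PTR_TAG));
    }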

Heap objects are allocated from object pools and can represent common data types such as strings, maps, lists, fibers, closures, and small user-defined objects. Using these pools is faster since the VM can allocate and free objects with very little bookkeeping. Cyber uses mimalloc for the underlying heap memory, which has proven to be fast and reliable.
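A free-list object pool in miniature (a generic sketch, not Cyber's allocator; Cyber backs its heap with mimalloc). Because all slots are one fixed size, alloc and free reduce to a couple of pointer moves:

    #include <stddef.h>

    typedef union PoolSlot {
        union PoolSlot *nextFree;   /* valid while the slot is free */
        unsigned char   bytes[32];  /* payload: small object, list header, ... */
    } PoolSlot;

    typedef struct { PoolSlot *freeList; } Pool;

    static void *pool_alloc(Pool *p) {
        if (!p->freeList) return NULL;   /* a real pool would grow a new page here */
        PoolSlot *slot = p->freeList;
        p->freeList = slot->nextFree;
        return slot;
    }

    static void pool_free(Pool *p, void *obj) {
        PoolSlot *slot = obj;
        slot->nextFree = p->freeList;
        p->freeList = slot;
    }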

Compiled using Zig/LLVM.

Cyber itself is written in Zig, a systems programming language that makes writing performant software easier. Zig leverages modern compiler features from LLVM, which produces fast machine code for most targets. For example, Cyber relies on Zig's while-loop and switch-statement combination, which compiles down to computed gotos. This keeps the VM's hot loop fast, since each bytecode handler can branch directly to the code that executes the next instruction, which is much faster than an unoptimized switch statement.
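The same dispatch pattern written directly in C (using the GCC/Clang labels-as-values extension) looks roughly like this; each handler jumps straight to the next opcode's handler instead of bouncing back through a central switch. The two opcodes here are hypothetical:

    #include <stdint.h>

    typedef uint64_t Value;

    /* Assumes opcode 0 = ADD, 1 = RET for this sketch. */
    Value run(const uint8_t *pc, Value *frame) {
        static void *dispatch[] = { &&op_add, &&op_ret };
        #define NEXT() goto *dispatch[*pc]

        NEXT();

    op_add:  /* ADD dst, lhs, rhs */
        frame[pc[1]] = frame[pc[2]] + frame[pc[3]];
        pc += 4;
        NEXT();

    op_ret:  /* RET src */
        return frame[pc[1]];
        #undef NEXT
    }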

Looking forward.

There is still quite a bit of work left to make Cyber more stable as a language. Cyber plans to add gradual typing, which will further improve the performance of the VM. Knowing types at compile time helps avoid unnecessary retain/release ops, since Cyber uses ARC for memory management. Multithreading support is also something to look forward to.

If you like Cyber, please consider supporting the project via GitHub Sponsors or Patreon!