Benchmarks
Here are some benchmarks against similar scripting languages. Cyber is fast and memory efficient. Although it's currently a non-JIT interpreter, Cyber can still beat JIT runtimes like Node.js in some benchmarks (note that the reported startup time does not include JIT warmup). Java is also included in some benchmarks for fun. Scroll down to Performance to learn what makes Cyber's VM fast.
Performance
There are various reasons Cyber is fast, and some of them are highlighted below. In general, Cyber is fast due to design decisions at the language level as well as in the VM implementation.
Crafty register VM.
Cyber runs on a register-based VM, so most bytecode instructions have a destination address. This greatly reduces the number of CPU cycles and memory accesses compared to a stack VM. Unlike physical registers, Cyber can have as many virtual registers as the operand encoding allows, so it never needs to perform register spills, which can be costly.
In fact, the registers live on the stack itself. With this design, function calls don't need to copy values back and forth, and fibers can be switched by just replacing the current stack pointer. All user-defined variables and temporary locals are assigned a dedicated register at compile time, which reduces the number of instructions and keeps the bytecode efficient.
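To illustrate the difference, here is a minimal sketch in C (not Cyber's actual Zig implementation) comparing a register-style instruction with a destination operand to the equivalent stack-VM operation. The instruction layout and names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical register-based instruction: operands name their slots directly. */
typedef struct {
    uint8_t op;   /* opcode, e.g. an ADD */
    uint8_t dst;  /* destination register (a slot in the stack frame) */
    uint8_t lhs;  /* left operand register */
    uint8_t rhs;  /* right operand register */
} RegInst;

/* One add executes as a single instruction that reads and writes slots directly... */
static void regAdd(int64_t *regs, RegInst inst) {
    regs[inst.dst] = regs[inst.lhs] + regs[inst.rhs];
}

/* ...while a stack VM needs a pop-pop-push sequence, touching memory more
   often and moving the stack pointer on every step. */
static void stackAdd(int64_t *stack, int *sp) {
    int64_t rhs = stack[--(*sp)];
    int64_t lhs = stack[--(*sp)];
    stack[(*sp)++] = lhs + rhs;
}

int main(void) {
    int64_t regs[8] = {0, 2, 3};          /* r1 = 2, r2 = 3 */
    regAdd(regs, (RegInst){0, 0, 1, 2});  /* r0 = r1 + r2 */
    printf("%lld\n", (long long)regs[0]); /* 5 */

    int64_t stack[8] = {2, 3};
    int sp = 2;
    stackAdd(stack, &sp);
    printf("%lld\n", (long long)stack[0]); /* 5 */
    return 0;
}
```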
Efficient call convention.
Function arguments and return slots are assigned to virtual registers. This allows instructions to operate on them directly without pushing and popping from the stack. The compiler also arranges the return register to feed directly into a parent function call as an argument. This makes composing functions fast, which suits declarative programming styles.
In many dynamic languages, functions and fields are looked up in a hash map. In Cyber, they are indexed into an array by a symbol id, which is much faster. This is possible because Cyber distinguishes between function values and statically declared functions.
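A rough sketch of the idea in C: the compiler assigns each statically declared function a small integer symbol id, so a call resolves with one array index instead of hashing the function's name. The table layout here is hypothetical, not Cyber's actual data structure.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical function symbol entry, one per statically declared function. */
typedef struct {
    void (*fn)(void);   /* entry point */
    uint8_t numParams;
} FuncSym;

static void hello(void) { printf("hello\n"); }

int main(void) {
    /* Dense table indexed by symbol id, built by the compiler. */
    FuncSym funcSyms[] = { {hello, 0} };

    uint32_t symId = 0;   /* resolved at compile time */
    funcSyms[symId].fn(); /* O(1) array index, no hashing or probing */
    return 0;
}
```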
Inline caching.
Cyber optimizes instructions by patching bytecode at runtime. Object operations tend to involve more lookups and checks since values can have a dynamic type. By caching the result of the lookup in the bytecode itself, the instruction gets its result faster the next time it runs. In the rare case of a cache miss, the deoptimized path is still quite fast since it uses an MRU table for object types and symbols.
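The sketch below shows the general shape of an inline cache for a field access in C, under assumed layouts: the instruction carries a cached type id and field offset, takes the fast path on a hit, and patches itself on a miss. The object layout, cache fields, and lookup are illustrative, not Cyber's actual bytecode.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical object layout: every heap object starts with a type id. */
typedef struct {
    uint32_t typeId;
    int64_t fields[4];
} Object;

/* Inline cache stored alongside the field-access instruction:
   the last seen type id and the field's offset for that type. */
typedef struct {
    uint32_t cachedTypeId;
    uint8_t cachedOffset;
} FieldIC;

/* Slow path stand-in: in a real VM this resolves the field by symbol. */
static uint8_t lookupFieldOffset(uint32_t typeId) {
    (void)typeId;
    return 1; /* pretend the symbol table says the field lives at slot 1 */
}

static int64_t getField(Object *obj, FieldIC *ic) {
    if (obj->typeId == ic->cachedTypeId) {
        /* Fast path: cache hit, no lookup at all. */
        return obj->fields[ic->cachedOffset];
    }
    /* Cache miss: resolve, then patch the instruction's cache in place. */
    uint8_t offset = lookupFieldOffset(obj->typeId);
    ic->cachedTypeId = obj->typeId;
    ic->cachedOffset = offset;
    return obj->fields[offset];
}

int main(void) {
    Object o = { .typeId = 7, .fields = {0, 42, 0, 0} };
    FieldIC ic = { .cachedTypeId = UINT32_MAX, .cachedOffset = 0 };
    printf("%lld\n", (long long)getField(&o, &ic)); /* miss, then patch */
    printf("%lld\n", (long long)getField(&o, &ic)); /* hit */
    return 0;
}
```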
Compact values and heap objects.
In Cyber, all values are 8 bytes and use NaN tagging to represent either a primitive type or a pointer to a heap object. A compact value representation simplifies the data structures used in the VM and makes values easier to align in memory to take advantage of the CPU cache.
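Here is a minimal NaN-tagging sketch in C. Real doubles are stored as-is, while other types are packed into the unused payload bits of a quiet NaN. The specific tag bits are illustrative and are not Cyber's actual encoding.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stdio.h>

typedef uint64_t Value; /* every value is 8 bytes */

#define QNAN      UINT64_C(0x7ffc000000000000)
#define TAG_TRUE  (QNAN | 1)  /* small primitives get their own tags */
#define TAG_FALSE (QNAN | 2)
#define PTR_MASK  UINT64_C(0x8000000000000000) /* marks heap pointers */

static Value fromNumber(double d) {
    Value v;
    memcpy(&v, &d, sizeof v); /* plain doubles need no tag */
    return v;
}

static bool isNumber(Value v) {
    /* Anything outside the quiet-NaN space is a real double. */
    return (v & QNAN) != QNAN;
}

static double toNumber(Value v) {
    double d;
    memcpy(&d, &v, sizeof d);
    return d;
}

static Value fromObject(void *ptr) {
    return PTR_MASK | QNAN | (uint64_t)(uintptr_t)ptr;
}

static void *toObject(Value v) {
    return (void *)(uintptr_t)(v & ~(PTR_MASK | QNAN));
}

int main(void) {
    Value n = fromNumber(1.5);
    int payload = 123;
    Value o = fromObject(&payload);
    printf("%d %g\n", isNumber(n), toNumber(n));          /* 1 1.5 */
    printf("%d %d\n", isNumber(o), *(int *)toObject(o));  /* 0 123 */
    return 0;
}
```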
Heap objects are allocated from object pools and can represent common data types such as strings, maps, lists, fibers, closures, and small user-defined objects. Using these pools is fast since the VM can allocate and free objects with very little bookkeeping. Cyber uses mimalloc to allocate heap memory, which has proven to be fast and reliable.
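To show why pool allocation is cheap, here is a generic fixed-size pool with an intrusive free list in C. Allocating or freeing a small object is just a pointer swap. The pool size, slot size, and names are illustrative assumptions, not Cyber's actual pool implementation.

```c
#include <stdio.h>

/* Fixed-size slots joined into a free list. No general-purpose allocator
   bookkeeping on the hot path. */
#define POOL_SIZE  256
#define SLOT_BYTES 40   /* big enough for the VM's small heap objects */

typedef union PoolSlot {
    union PoolSlot *nextFree;   /* valid only while the slot is free */
    unsigned char bytes[SLOT_BYTES];
} PoolSlot;

static PoolSlot pool[POOL_SIZE];
static PoolSlot *freeList;

static void poolInit(void) {
    for (int i = 0; i < POOL_SIZE - 1; i++) {
        pool[i].nextFree = &pool[i + 1];
    }
    pool[POOL_SIZE - 1].nextFree = NULL;
    freeList = &pool[0];
}

static void *poolAlloc(void) {
    if (!freeList) return NULL;   /* a real VM would grow or fall back */
    PoolSlot *slot = freeList;
    freeList = slot->nextFree;
    return slot;
}

static void poolFree(void *ptr) {
    PoolSlot *slot = ptr;
    slot->nextFree = freeList;
    freeList = slot;
}

int main(void) {
    poolInit();
    void *a = poolAlloc();
    void *b = poolAlloc();
    poolFree(a);
    void *c = poolAlloc();   /* reuses a's slot immediately */
    printf("%d\n", a == c);  /* 1 */
    (void)b;
    return 0;
}
```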
Compiled using Zig/LLVM.
Cyber itself is written in Zig, a systems programming language that makes it easier to write performant software. Zig leverages modern compiler features from LLVM, which produces fast machine code for most targets. For example, Cyber relies on Zig's while loop and switch statement combination, which translates into computed gotos. This makes the VM's hot loop fast since each bytecode instruction can branch directly to the code that executes the next instruction, which is much faster than an unoptimized switch statement.
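In C, the same dispatch technique can be written directly with the GCC/Clang labels-as-values extension; Zig's while-plus-switch pattern compiles down to something similar. The tiny instruction set below is hypothetical and only demonstrates how each handler jumps straight to the next one instead of looping back to a central switch.

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_LOAD_CONST, OP_ADD, OP_PRINT, OP_END };

int main(void) {
    /* Tiny register-style program: r0 = 2, r1 = 3, r2 = r0 + r1, print r2. */
    uint8_t code[] = {
        OP_LOAD_CONST, 0, 2,   /* dst, constant */
        OP_LOAD_CONST, 1, 3,
        OP_ADD, 2, 0, 1,       /* dst, lhs, rhs */
        OP_PRINT, 2,
        OP_END,
    };

    /* One label address per opcode (GCC/Clang extension). */
    static void *handlers[] = { &&op_load_const, &&op_add, &&op_print, &&op_end };
    int64_t regs[8];
    uint8_t *pc = code;

    /* Each handler ends by jumping directly to the next instruction's handler,
       which keeps the hot loop short and branch-predictor friendly. */
    #define DISPATCH() goto *handlers[*pc++]

    DISPATCH();

op_load_const:
    regs[pc[0]] = pc[1];
    pc += 2;
    DISPATCH();
op_add:
    regs[pc[0]] = regs[pc[1]] + regs[pc[2]];
    pc += 3;
    DISPATCH();
op_print:
    printf("%lld\n", (long long)regs[pc[0]]); /* prints 5 */
    pc += 1;
    DISPATCH();
op_end:
    return 0;
}
```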
Looking forward.
There's still quite a bit of work left to make Cyber more stable as a language. Cyber plans to have gradual typing, which will further improve the performance of the VM. Knowing types at compile time helps avoid unnecessary retain/release operations, since Cyber uses ARC for memory management. Multithreading support is also something to look forward to.
If you like Cyber, please consider supporting the project via GitHub Sponsors or Patreon!