I wrote a benchmark that computes the sum of the first 10,000 primes and compared Rust, JavaScript, Scala, and Java. JavaScript on NodeJS came out fastest. The programs intentionally use a functional style for the primality test, with the aim of showing off Rust's zero-cost abstractions, yet NodeJS beats them all.
How can NodeJS, a runtime for a dynamically typed language, be so fast?
Rust code
fn sum_primes(n: usize) -> u64 {
    let mut primes = Vec::new();
    let mut current: u64 = 2;
    let mut sum: u64 = 0;
    while primes.len() < n {
        if primes.iter().all(|p| current % p != 0) {
            sum += current;
            primes.push(current);
        }
        current += 1;
    }
    sum
}
JavaScript code
function sumPrimes(n) {
    let primes = [];
    let current = 2;
    let sum = 0;
    while (primes.length < n) {
        if (primes.every(p => current % p != 0)) {
            sum += current;
            primes.push(current);
        }
        ++current;
    }
    return sum;
}
The full benchmark can be found on GitHub.
2 Answers
The answer can't be simple because V8 does a lot of transformations, but here's a major point:
Node's optimizing compiler dynamically adapts the types it uses (especially for array elements). It's able to use one word integers when they fit (and deoptimizes the function when it receives a non fitting value).
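Here is a minimal sketch of that effect (mine, not part of the original answer): the same loop runs faster over an array that stays in the small-integer (Smi) representation than over one that holds doubles. The exact numbers depend on your machine and V8 version; nothing here is specific to the prime benchmark.
// Two identical loops so each one collects its own type feedback and can be
// optimized for a single element kind.
function sumSmis(arr) {
    let sum = 0;
    for (let i = 0; i < arr.length; i++) sum += arr[i];
    return sum;
}

function sumDoubles(arr) {
    let sum = 0;
    for (let i = 0; i < arr.length; i++) sum += arr[i];
    return sum;
}

const smis = Array.from({ length: 1000000 }, (_, i) => i % 1000); // stays in the small-integer (Smi) representation
const doubles = smis.map(x => x + 0.5);                           // fractional values force a double representation

// warm up so the optimizing compiler kicks in for both loops
for (let i = 0; i < 100; i++) { sumSmis(smis); sumDoubles(doubles); }

console.time("smi elements");
console.log(sumSmis(smis));
console.timeEnd("smi elements");

console.time("double elements");
console.log(sumDoubles(doubles));
console.timeEnd("double elements");
Running Node with the V8 flag --trace-deopt should also print a message when an optimized function is thrown away because it received a value shape it was not specialized for, which is the deoptimization mentioned above.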
If I take your functions as they are, the Rust one takes 1.28 ms to compute sum_primes(500), while Node takes only 1.04 ms (after some warming). If I change the u64 to u32 in the Rust code, it takes only 608 µs.
The JavaScript code I used:
function sum_primes(n) {
    var primes = [];
    var current = 2;
    var sum = 0;
    while (primes.length < n) {
        if (primes.every(function (p) { return current % p != 0; })) {
            sum += current;
            primes.push(current);
        }
        ++current;
    }
    return sum;
}

console.log(sum_primes(200));

// some warming
for (let i = 0; i < 100; i++) sum_primes(100);

console.time("primes");
console.log(sum_primes(500));
console.timeEnd("primes");
This JavaScript code is faster than your Rust code, but slower than this one:
use std::time::Instant;

fn sum_primes(n: usize) -> u32 {
    let mut primes = Vec::new();
    let mut current: u32 = 2;
    let mut sum: u32 = 0;
    while primes.len() < n {
        if primes.iter().all(|p| current % p != 0) {
            sum += current;
            primes.push(current);
        }
        current += 1;
    }
    sum
}

fn main() {
    println!("{}", sum_primes(200));
    let s = Instant::now();
    println!("{}", sum_primes(500));
    println!("duration: {:?}", s.elapsed());
}
I think your benchmark is somewhat flawed in that a sufficiently advanced compiler could optimise sum_primes(10000) directly into 496165411, even at compile time (e.g. Prepack, Closure). It is also possible to memoise the result after the first call at run time, and probably this is what V8 does (although I would expect HotSpot to do the same).
Use a value that is not known at compile time instead of 10000, for example a command line argument.
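As a rough sketch (mine, not from the answer), the Node side could look like this, with sumPrimes taken from the question and n read from process.argv; the same idea applies to the Rust version via std::env::args.
// Sketch: take the prime count from the command line so the result cannot be
// precomputed at compile time. Run as: node primes.js 10000  (file name assumed)
function sumPrimes(n) {
    const primes = [];
    let current = 2;
    let sum = 0;
    while (primes.length < n) {
        if (primes.every(p => current % p !== 0)) {
            sum += current;
            primes.push(current);
        }
        ++current;
    }
    return sum;
}

const n = parseInt(process.argv[2], 10) || 10000; // fall back to 10000 if no argument is given

console.time("primes");
console.log(sumPrimes(n));
console.timeEnd("primes");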
… div instruction, with the same input data. (Unlike pretty much all other ALU operations, div performance is data-dependent, but also slower best-case for 64-bit operand-size on Intel.) – Peter Cordes, Feb 22, 2019 at 19:59