
A small example comparing the performance of golang, php, Node.js, Python and Rust

After seeing the comparison posted by toozyxia (https://my.oschina.net/xiayongsheng/blog/4775399), I had the idea to add Node.js to the comparison.

golang:

package main

import (
    "fmt"
    "os"
    "reflect"
    "runtime/pprof"
    "runtime/trace"
    "strconv"
    "time"
    "unsafe"
)

type CalcCollection struct{}

func (c *CalcCollection) V0() {
    arr := map[string]int64{}
    for i := int64(0); i < 1000000; i++ {
        value := time.Now().Unix()
        key := strconv.FormatInt(i, 10) + "_" + strconv.FormatInt(value, 10)
        arr[key] = value
    }
}

func (c *CalcCollection) V1() {
    nums := int64(1000000)
    arr := make(map[string]int64, nums)
    // Declare key outside the loop so the buffer can be reused
    key := make([]byte, 0)
    for i := int64(0); i < nums; i++ {
        key = key[:0]
        value := time.Now().Unix()
        // Use AppendInt instead of FormatInt to avoid the []byte-to-string conversion inside strconv
        key = strconv.AppendInt(key, i, 10)
        key = append(key, '_')
        key = strconv.AppendInt(key, value, 10)
        keyStr := string(key)
        arr[keyStr] = value
    }
}

func (c *CalcCollection) V2() {
    nums := int64(1000000)
    arr := make(map[string]int64, nums)
    // Compute the key length and allocate one []byte holding all keys up front
    keyLen := int64(len(strconv.FormatInt(nums, 10)) + 1 + 10)
    totalLen := keyLen * nums
    key := make([]byte, totalLen)
    for i := int64(0); i < nums; i++ {
        value := time.Now().Unix()
        // Compute the position of the current loop's key
        pos := i * keyLen
        b := key[pos:pos]
        b = strconv.AppendInt(b, i, 10)
        b = append(b, '_')
        b = strconv.AppendInt(b, value, 10)
        // Convert []byte to string directly, without copying
        arr[*(*string)(unsafe.Pointer(&b))] = value
    }
}

// The code below can be ignored; it does the following:
// decide which CalcCollection.V{ver} to call based on the arguments,
// and decide whether to record a trace and profiles based on the arguments.
func main() {
    ver := os.Args[len(os.Args)-2]
    isRecord := os.Args[len(os.Args)-1] == "t"
    calcReflect := reflect.ValueOf(&CalcCollection{})
    methodName := "V" + ver
    m := calcReflect.MethodByName(methodName)
    if isRecord {
        traceFile, err := os.Create(methodName + "_trace.out")
        if err != nil {
            panic(err.Error())
        }
        err = trace.Start(traceFile)
        if err != nil {
            panic("start trace fail: " + err.Error())
        }
        defer trace.Stop()
        cpuFile, err := os.Create(methodName + "_cpu.out")
        if err != nil {
            panic(err.Error())
        }
        defer cpuFile.Close()
        err = pprof.StartCPUProfile(cpuFile)
        if err != nil {
            panic("StartCPUProfile fail: " + err.Error())
        }
        defer pprof.StopCPUProfile()
        memFile, err := os.Create(methodName + "_mem.out")
        if err != nil {
            panic(err.Error())
        }
        defer pprof.WriteHeapProfile(memFile)
    }
    t := time.Now()
    m.Call(make([]reflect.Value, 0))
    fmt.Println(methodName, time.Now().Sub(t))
}
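The same three versions can also be measured with Go's built-in benchmarking instead of timing by hand. Below is a minimal sketch, assuming the code above sits in package main of the same module; the calc_bench_test.go file name and benchmark names are mine, not part of the original program:

// calc_bench_test.go - a minimal benchmark sketch for the three versions above
package main

import "testing"

func BenchmarkV0(b *testing.B) {
    c := &CalcCollection{}
    for n := 0; n < b.N; n++ {
        c.V0()
    }
}

func BenchmarkV1(b *testing.B) {
    c := &CalcCollection{}
    for n := 0; n < b.N; n++ {
        c.V1()
    }
}

func BenchmarkV2(b *testing.B) {
    c := &CalcCollection{}
    for n := 0; n < b.N; n++ {
        c.V2()
    }
}

Running go test -bench=. -benchmem would report per-iteration time and allocation counts, which makes the allocation savings of V1 and V2 directly visible.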

php:

<?php
$startTime = microtime(true);
$arr = array();
for ($i = 0; $i < 1000000; $i++) {
    $currentTime = time();
    $key = $i . "_" . $currentTime;
    $arr[$key] = $currentTime;
}
$endTime = microtime(true);
echo ($endTime - $startTime) * 1000 . "ms\r\n";

Node.js:

let startTime = new Date().getTime();
let arr = {};
for (let i = 0; i < 1000000; i++) {
    let currentTime = new Date().getTime();
    let key = i + '_' + currentTime;
    arr[key] = currentTime;
}
let endTime = new Date().getTime();
console.log((endTime - startTime) + "ms");

Python:

#!/usr/bin/python
import time

def currentTime():
    return int(round(time.time() * 1000))

startTime = currentTime()
arr = {}
for i in range(0, 1000000):
    cTime = currentTime()
    key = str(i) + '_' + str(cTime)
    arr[key] = cTime
endTime = currentTime()
used = str(endTime - startTime)
print(used, "ms")

Rust:

extern crate chrono;

use chrono::prelude::*;
use std::collections::HashMap;

fn main() {
    let start_time = Local::now().timestamp_millis();
    let mut i = 0;
    let mut arr = HashMap::new();
    while i < 1000000 {
        let current_time = Local::now().timestamp_millis();
        let key = format!("{}_{}", i, current_time);
        arr.insert(key, current_time);
        i += 1;
    }
    let end_time = Local::now().timestamp_millis();
    println!("{}ms", end_time - start_time);
}

Test results in a virtual machine (CentOS 8.2):

[root@bogon tmp]# ./test -args 0 f
V0 614.601327ms
[root@bogon tmp]# ./test -args 1 f
V1 330.952331ms
[root@bogon tmp]# ./test -args 2 f
V2 256.896306ms
[root@bogon tmp]# php test.php
300.05478858948ms
[root@bogon tmp]# node --version
v14.15.0
[root@bogon tmp]# node test.js
1861ms
[root@bogon tmp]# python3 test.py
1563 ms
[root@bogon tmp]# cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
     Running `target/debug/tmp`
3351ms
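The Go runs above pass f as the last argument, so no trace or profile files are written. Passing t instead makes the program write V{ver}_trace.out, V{ver}_cpu.out and V{ver}_mem.out, which can then be inspected roughly as follows (assuming the binary is still named test, as above):

./test -args 1 t                  # rerun V1 with trace/profile recording enabled
go tool trace V1_trace.out        # open the execution trace viewer
go tool pprof ./test V1_cpu.out   # interactive CPU profile
go tool pprof ./test V1_mem.out   # heap profile written at exit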

This comparison cannot represent the overall quality of these languages; note, for example, that the Rust run above was a debug build ([unoptimized + debuginfo]), not a release build. If you have ideas, feel free to leave a comment below.

Following JetLua's suggestion, use Date.now() to read the timestamp directly, which saves constructing a Date object on each iteration.

let startTime = new Date().getTime();
let arr = {};
for (let i = 0; i < 1000000; i++) {
    let currentTime = Date.now(); // was: new Date().getTime();
    let key = i + '_' + currentTime;
    arr[key] = currentTime;
}
let endTime = new Date().getTime();
console.log((endTime - startTime) + "ms");

Run:

[root@bogon tmp]# node test.js
1605ms

