I know messages go into the call stack from the queue only when the call stack is empty. Wouldn't it be better, though, if the event loop could push messages from the queue directly onto the call stack without waiting? What reasons are behind this behavior? If the event loop pushed a message at an exact time, we could always rely on functions such as setTimeout.
setTimeout(() => console.log("I want to be logged after 10ms, but I never will be :("), 10);
// some blocking operations
for (let i = 0; i < 500000000; i++) {
  Math.random() * 2 + 2 - 3;
}
console.log("I'll be logged first lol");
It'll probably never be changed, for consistency reasons, but I'm still curious. Maybe I'm not seeing something, and there is a serious technical reason behind the concept of waiting for an empty stack. Do you have access to articles about architectural decisions in JS, or do you know of fundamental examples where this behavior is necessary? There are many articles about how JS works, but I couldn't find anything like "Why the event loop works exactly that way". Any help would be greatly appreciated.
asked Oct 20, 2018 at 14:59 by Bartłomiej Gładys
- Possible duplicate of setTimeout behaviour with blocking code – rckrd Commented Oct 20, 2018 at 15:10
- I know why it works that way. I want to know why the architecture was implemented in this manner though – Bartłomiej Gładys Commented Oct 20, 2018 at 15:14
- Wouldn't you need multiple threads to achieve what you are suggesting? – rckrd Commented Oct 20, 2018 at 15:28
- setTimeout guarantees its code isn't run until 10ms have passed, not that it runs exactly 10ms after it is read. You should consider your question from a different angle: what does a blocking operation mean? ;) In order to make ticks/EventLoop/timeouts/whatever work the way you're thinking, it would indeed require multiple threads, i.e. deferring the processing control to a higher authority (like the OS). So my guess is it was implemented this way to make the necessary event-driven programming possible without the heavy artillery of thread-handling logic and most of its pitfalls. And to build it in 10 days. – Stock Overflaw Commented Oct 20, 2018 at 15:40
1 Answer
V8 developer here. This question seems to be based on a misunderstanding of what "the call stack" is: it is not a data structure that anyone can just push things onto. Instead, it is a term for the current state of things when a bunch of functions have called each other. The only way to "push" another function onto the call stack is when the currently executing function calls it. If the event system inserted random calls at random places into your functions, that would lead to a pretty weird programming model.
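To make that point concrete, here is a small illustrative sketch (not from the original answer; the function names outer and inner are invented): the timer callback is merely queued, and it can only run once outer() and everything it called have returned and the stack is empty.

function inner() {
  // While outer() and inner() are on the stack, no timer callback can run,
  // no matter how long we stay in here.
  const start = Date.now();
  while (Date.now() - start < 50) { /* busy-wait for ~50ms */ }
}

function outer() {
  setTimeout(() => console.log("timer callback"), 10); // queued, not pushed
  inner();                                             // pushed by an explicit call
  console.log("outer done");                           // runs before the callback
}

outer();
// Logs "outer done" first, then "timer callback" once the stack has emptied.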
You could design a programming environment that's conceptually similar, but rather than pushing anything onto the call stack, what it would do is interrupt and suspend whatever is currently executing, execute a setTimeout-scheduled function (or event handler, etc.) instead, and resume the previous execution afterwards. One issue you'd have to solve is: what if this repeats, i.e. what if the scheduled function is interrupted by another scheduled function, which is interrupted by another scheduled function, and so on? What if a scheduled function takes forever to finish: when does the previously executing code get to make progress again?

Also, while this can be done in a single-threaded world, getting random interruptions is concurrency (which from a consistency point of view is equivalent to parallelism/multi-threading), so you'd need synchronization primitives like locks (essentially, a way for functions to say "don't interrupt this section" -- which in turn means you can't actually guarantee the accuracy of scheduling requests). Don't underestimate the complexity cost all this would impose on programmers: when writing code, they'd have to keep in mind that anything could get interrupted at any time, and on the flip side that any data one function might want to process might not be ready yet because another function that produced it hasn't finished running yet.
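To illustrate why such interruptions amount to concurrency, here is a hypothetical sketch: the interruption described in the comments cannot happen in real JavaScript, it is only what a preemptive model would allow. A plain read-modify-write on shared state would then become a race that needs a lock.

// Hypothetical: what could go wrong IF callbacks were allowed to interrupt
// running code (this is NOT how JavaScript actually behaves).
let balance = 100;

function withdraw(amount) {
  const current = balance;    // read
  // ...imagine a scheduled callback preempting execution right here
  // and also modifying `balance`...
  balance = current - amount; // write based on a stale read -> lost update
}

setTimeout(() => { balance -= 30; }, 10);
withdraw(50);
// In real JavaScript the callback can only run after withdraw() returns and
// the stack is empty, so balance always ends up as 20. With preemption you
// would need a lock around the read-modify-write to guarantee that.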
So in short, JavaScript's event loop system is what it is because the language avoids concurrency, and randomly interrupting functions to execute others is concurrency, even on a single-threaded system.