[–] [email protected] 42 points 5 months ago (12 children)

I never looked into this, so I have some questions.

Isn't the overhead of a new function call every time going to slow it down? I know LLVM has a special calling convention for Haskell functions to reduce that overhead, but it's still more expensive than a branch, right? And if you aren't using Haskell, the overhead is pretty extensive: pushing registers onto the stack, calling the function, setting up buffer-overflow protection, and eventually returning and popping everything again. Plus all the other stuff (somewhat language dependent).

I don't understand what the advantage is here, except for cases where recursion makes sense because the problem is more dynamic.

[–] [email protected] 49 points 5 months ago (1 children)

They aren't talking about using recursion instead of loops. They are talking about the map method on iterators. For each element the iterator yields, map applies a given function/closure and yields the result as a new iterator (which is often then collected into a list). This is a functional programming pattern that's common in many languages, including Python and Rust.
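A minimal sketch of what that looks like in Rust (the specific values here are just for illustration):

```rust
fn main() {
    let numbers = vec![1, 2, 3, 4];

    // `map` applies the closure to each element; `collect` gathers the results.
    let doubled: Vec<i32> = numbers.iter().map(|x| x * 2).collect();

    assert_eq!(doubled, vec![2, 4, 6, 8]);
}
```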

This pattern has no risk of stack overflow, since each invocation of the function completes before the next one starts. The construct expands to an ordinary loop during execution. The only possible overhead is a single function call inside the loop (where you could otherwise have written the loop body directly), and even that goes away if the compiler can inline the function.

The fact that this is functional programming also opens up additional avenues for optimization. For example, a chain of maps (or other iterator adaptors) can be fused into a single loop. In practice, this pattern is as fast as hand-written loops.
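A rough sketch of that fusion in Rust: the chained adaptors below are lazy, so the whole pipeline runs as a single pass over the data rather than three separate ones.

```rust
fn main() {
    let data = vec![1, 2, 3, 4, 5];

    // Lazy adaptors: nothing runs until `collect`, which drives one combined loop.
    let result: Vec<i32> = data
        .iter()
        .map(|x| x + 1)
        .filter(|x| x % 2 == 0)
        .map(|x| x * 10)
        .collect();

    assert_eq!(result, vec![20, 40, 60]);
}
```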

[–] [email protected] 14 points 5 months ago (1 children)

A great point in favour of map is that each iteration is independent, so it could in principle be executed in parallel. This heavily depends on the language implementation, though.
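For instance, in Rust the rayon crate makes this almost a drop-in change: swap `iter()` for `par_iter()` and the map runs across threads (a sketch that assumes rayon has been added as a dependency):

```rust
use rayon::prelude::*; // assumes rayon is listed in Cargo.toml

fn main() {
    let inputs: Vec<u64> = (0..1_000).collect();

    // Each element is processed independently, so the work can be split
    // across threads without changing the closure at all.
    let outputs: Vec<u64> = inputs.par_iter().map(|x| x * x).collect();

    assert_eq!(outputs[3], 9);
}
```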

[–] [email protected] 3 points 5 months ago (1 children)

Technically this is also possible with for loops, for example with OpenMP.

[–] [email protected] 3 points 5 months ago (1 children)

Imperative for loops come with no guarantee at all that their iterations can be executed in parallel.

You can do some analysis to find cases where they can (it's usually expensive, and never complete), but compilers work best when they need the least cleverness. A loop you can blindly parallelize will sometimes actually end up parallel in practice, whereas a loop where it takes a PhD to decide whether parallelization is safe will end up sequential in practice.
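To make that concrete, here's an illustrative Rust sketch contrasting a loop whose iterations are independent with one that has a loop-carried dependency:

```rust
fn main() {
    let data = vec![1, 2, 3, 4];

    // Independent iterations: each output depends only on its own input,
    // so this could in principle be parallelized blindly.
    let squares: Vec<i32> = data.iter().map(|x| x * x).collect();

    // Loop-carried dependency: each iteration reads the previous result,
    // so it can't just be split across threads as-is (it would need a
    // different algorithm, e.g. a parallel scan).
    let mut running_sum = 0;
    let mut prefix_sums = Vec::new();
    for x in &data {
        running_sum += x;
        prefix_sums.push(running_sum);
    }

    assert_eq!(squares, vec![1, 4, 9, 16]);
    assert_eq!(prefix_sums, vec![1, 3, 6, 10]);
}
```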

[–] [email protected] 2 points 5 months ago

While you do have a fair point, I was referring to the case where one is basically implementing a map operation as a for loop.
