(08-03-2017, 03:21 PM)
Somnid

Originally Posted by Spoiled Milk

I agree with forced handling, I'm just saying that it's really onerous when you do it through methods and closures. And you're wrong about it being considered "uglier": a huge number of people in Rust clamored for the ? early-return syntax, and the try! macro had been in the standard library since the beginning. I think the RFC that added ? was the most commented-on RFC in Rust history.

? itself is used in method chaining:

[the before/after code snippets from the original post did not survive]
The bolded is pretty rarely the case for me. I've experienced writing subsystems that each have their own set of possible errors, but which eventually bubble up to the orchestration of the main program and need to be logged or handled there. I either have to make huge unions that represent every error in the entire application or (more commonly) make separate, redundant definitions for all the subsets I plan on using and implement the From/Into conversions (which is why try! has always expanded into `return Err(error.into())` in the erroring case, btw). My point is that I'd much prefer starting with a large set and then writing subsets of cases as type synonyms.

I'm having trouble understanding this one. You only have to deal with one error type (the one from the function call), not other propagated exceptions from nested calls, which can be unpredictable and numerous and require things like `catch (Exception e) { ... }` to cover them all. You could also have aggregate error types if you need to preserve everything, or simply pass the logger down. Both are easy enough to reason about.
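An aggregate error type in the style being described might look like this; a minimal sketch with invented names (`AppError`, `DbError`, `NetError`), not anything from the original post. The From impls are what let `?` convert subsystem errors into the aggregate automatically:

```rust
// Two hypothetical subsystem errors (names invented for illustration).
#[derive(Debug, PartialEq)]
struct DbError(String);
#[derive(Debug, PartialEq)]
struct NetError(String);

// One application-wide union that the orchestration layer logs/handles.
#[derive(Debug, PartialEq)]
enum AppError {
    Db(DbError),
    Net(NetError),
}

// `expr?` expands roughly to `return Err(From::from(err))` on the error
// path, so these impls make the conversion automatic.
impl From<DbError> for AppError {
    fn from(e: DbError) -> Self { AppError::Db(e) }
}
impl From<NetError> for AppError {
    fn from(e: NetError) -> Self { AppError::Net(e) }
}

// A "subset of cases as a type synonym", per the quoted suggestion.
type DbResult<T> = Result<T, DbError>;

fn query() -> DbResult<u32> {
    Err(DbError("no connection".into()))
}

fn orchestrate() -> Result<u32, AppError> {
    let rows = query()?; // DbError converts to AppError via From
    Ok(rows)
}

fn main() {
    assert_eq!(orchestrate(), Err(AppError::Db(DbError("no connection".into()))));
    println!("ok");
}
```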

This is tautological. If speed doesn't matter, speed doesn't matter? I'm not saying everything has to be as fast as humanly possible, but my brain twitches when people say that the slowdown of Moore's Law plus the need for concurrency means that people should move to a paradigm of immutability. Speed is the sine qua non of parallelism. If it's not going to be fast, then why are you doing it? And don't underestimate the fantastic slowdown of functional data structures. My experience is they can be one to two orders of magnitude slower than the equivalent mutable, data-contiguous, ephemeral solution.

What I'm saying is that you need something that works and then make it fast; solve concurrency first. Sometimes you don't even need concurrency at all: a good single-threaded app is just fine if you're getting good performance already. A fast program that crashes periodically isn't worth much, and users eventually have to use it. I also think you're exaggerating the "slowness" of the underlying structures; it's probably more that you are very familiar with your domain and use some more specific implementations.

Also boo at "formal verification". I've never seen a functional program actually formally verified beyond what the type system already affords (which is not much except memory and exception safety). The safest programs in the world that are actually, really formally verified are being written in mutable C and Ada under DOD software engineering standards. These are your rockets, your airplanes, your infrastructure, you name it.

Memory is the biggest one because it's harder to test for. The rest can usually be handled with testing. And C was not chosen for its verifiability.

To circle back around: yes, you can do lots of things without functional paradigms, but they make code easier to reason about. Some things will look uglier when ported to languages that don't have first-class support for certain features. But I'd say null propagation, exception propagation, and concurrency are 3 of the biggest things that functional programming really helps with.
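The null-propagation point, in Rust terms, is that the "missing" case threads through a chain of combinators instead of being checked at every step; a small sketch with an invented function:

```rust
// Null propagation without null: if any step yields None, the rest of
// the chain is skipped and None falls out the end.
fn first_word_len(input: Option<&str>) -> Option<usize> {
    input
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .and_then(|s| s.split_whitespace().next())
        .map(str::len)
}

fn main() {
    assert_eq!(first_word_len(Some("  hello world ")), Some(5));
    assert_eq!(first_word_len(Some("   ")), None); // empty after trim
    assert_eq!(first_word_len(None), None);        // nothing to do
    println!("ok");
}
```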