Rust is a unique language in that it deallocates memory on the heap
without requiring the writer to call free,
while at the same time having no need for a garbage collector.
Someone correct me if I'm wrong, but surely this is not unique - don't Objective-C and Swift have automatic reference counting, which accomplishes much the same thing?
ARC and Lifetimes both:
* Not having to worry about trivial memory/resource allocations
ARC - Lifetimes:
* Can blow up hilariously with refcount cycles
* Incurs a runtime overhead
Lifetimes - ARC:
* For the subset of allocation patterns it handles, it does so with zero runtime overhead (i.e. invalid programs do not compile, invalid = leaks memory)
* Use extends beyond just memory allocation/deallocation (see the concurrency docs [1] for example) (note that Mozilla wrote Rust to help write Servo [2], their prototype highly-parallel browser engine, so this use case was a target from the get-go)
* Cannot encode all data structures a GC can (hence why reference counting is part of the stdlib [3])
Not quite true: invalid use of lifetimes typically means use-after-free rather than a memory leak (and if you include the standard library, you can leak memory even in safe Rust).
It's interesting to me how everyone reaches for "memory leaks" as their example of memory unsafety, and then when we're talking about Rust we have to clarify that Rust does allow leaks (it just makes them hard to do by accident). I guess "memory leak" is mnemonically easier to bring to mind than "dangling pointer" when talking about memory.
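To make that concrete, here's a minimal sketch (the `Node` type is made up for illustration) of the classic refcount-cycle leak from the bullet points above: two `Rc`s pointing at each other never get freed, yet the whole thing is safe code with no dangling pointers.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical type for illustration: a node that may point at another node.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Builds the cycle a -> b -> a in 100% safe code and returns both handles.
fn make_cycle() -> (Rc<Node>, Rc<Node>) {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    (a, b)
}

fn main() {
    let (a, b) = make_cycle();
    // Each node holds the other alive: both refcounts are 2, and when `a`
    // and `b` go out of scope they only drop to 1, so neither allocation
    // is ever freed. A leak in safe Rust, but never a dangling pointer.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);
}
```

This is exactly why `Rc` leaks are considered "safe": memory safety in Rust's sense means no use-after-free or data races, not no leaks.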
> Just because you CAN do something doesn't mean you SHOULD
This is hacker news, not "production code" news. Abusing things in unexpected ways is pretty much the definition of hacking.
Many people seem to be jumping on this article as if the author were suggesting a new programming pattern we should start using, rather than offering an interesting look at some of the lesser-known quirks of Ruby. I'm pretty sure even the author would agree that triple-nested method definitions are not something we should use in production code.
This is a fundamental part of the language, the idea that objects and classes are dynamic, not static.
What I would say is that although we should have an extremely good reason to employ these techniques in production, I believe that every professional Ruby programmer should be able to understand them and/or figure them out in a few minutes.
Agree with you 100%, and the ability to dynamically define methods is one of Ruby's great strengths.
However, metaprogramming is a power that should be used wisely. When implemented unnecessarily it reduces readability (and probably performance) for no real gain.
The Devise source contains some great examples of metaprogramming used properly:
Readability is in the eye of the beholder. For those unfamiliar with higher-order programming, maps are just a 'too clever' form of a for loop.
A clean and understandable solution is one matching the problem being solved in a precise way. In search for simplicity one can't forget that programming is a trade, and one should be expected to actually learn some shit.
To that point, Jef Raskin famously said that “intuitive == familiar,” and all too often that’s exactly what people mean when they talk about “intuitive” code and/or user interfaces.
Indeed. Looking over the article, the first example is sort of obvious if you've ever worked more than a few hours with a decent dynamically typed language, and the rest expose interesting functionality that could be papered over with a macro in order to build something useful. Like, you know, object-oriented programming can be built by hiding lexical closures under a macro or two, and was in fact built that way in the past.
In other words - just because you don't understand something doesn't mean it's "clever code".
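For anyone who hasn't run into the quirk being discussed, here's a minimal sketch (the `Counter` class is made up for illustration): a `def` nested inside a method body doesn't define anything until the outer method actually runs, at which point the inner method is added to the enclosing class at runtime.

```ruby
class Counter
  def start
    @count = 0
    # This nested def only executes when #start is called; at that point
    # #increment gets defined on Counter itself, for all instances.
    def increment
      @count += 1
    end
  end
end

c = Counter.new
c.respond_to?(:increment)  # => false: #increment doesn't exist yet
c.start
c.increment                # => 1
```

Which is exactly the "objects and classes are dynamic, not static" point: method definition is just code that runs, not a compile-time declaration.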
Important to note that this isn't STATIC typing (which detects errors at compile time); rather, it's more like a validation library that makes sure structures have certain properties at run time.
Not saying it isn't useful; in fact, I have a project in which this would be a very good fit, and I might even implement it there.
Indeed. Elm implements extensible record types that can be inferred and checked statically, similarly to object types in OCaml, which additionally supports structural subtyping of other sorts as well (e.g., polymorphic variants). It's a little strange to compare this library with Elm.
This looks like a contract library, which I assume already exists in Clojure. It'd be interesting to see what's unique about this implementation, if anything.
There are a tonne of average-to-poor looking girls on Tinder who have very good photos of themselves that paint them in a much more flattering light than they really deserve. Take a leaf out of their book.
Tinder is the most superficial distillation of dating out there, your entire judgement of the person is based on a few photos (and often just the first one).
You need 4 photos, show them in this order:
* A full-body shot of you with a clear image of your face
* One of you topless (if you have the body for it)
* One of you looking cool with your friends
* One of you doing something you like, preferably some sort of adventurous hobby like climbing or diving
PAY TO GET PROFESSIONAL PHOTOS DONE. I can't emphasize this enough: more than anything else, this will drastically improve your results.
The 'about me' field is almost irrelevant. Keep it to 3 sentences or less. When you get a match, push for a meetup at a bar (or your house if you feel bold) as soon as possible, preferably within 24 hours. Get their actual phone number within the first five messages. THE GIRL ALREADY LIKES YOU, there is no need to play silly games at this point.
I also use a Tinderbot to automatically like all the girls. I wrote my own version based off the work of somebody who reverse-engineered the API, my repo is here https://github.com/samphilipd/tinderzapper . When I feel like going on some dates, I just run that for a few days, unlike all the unattractive matches and push for meetups with the rest. It's a pure numbers game.
BELIEVE ME there are girls out there who will find you attractive and want to meet up with you, you simply have to sift through the junk to find them.
The above is pretty much exactly what I do. I'm only slightly above-average looking and well below average height (5ft4). I do work out, so I have a somewhat decent body. I've been on countless dates this year and got laid with 10-15 girls just through Tinder.
Anyone can get a date on Tinder with professionally taken photos. In fact, most of the girls on there already know the importance of photos and are very skilled at showing photos that present them in the best possible light.
If you are writing Ruby you already decided you didn't care about speed.
Don't get me wrong, I think Ruby is a great language and I use it every day to get paid, but it is not a speed queen. Ruby's strengths lie in flexibility, fast iteration, readable code and permitting a functional style.
In most practical web applications the big bottlenecks will be either view rendering or database. Choosing to put any focus at all on performance of something like parallel vs serial assignment (unless you are doing something truly pathological) is a complete waste of time.
It will make no noticeable difference to the end user and distracts from the far more important job of making your code modular, extensible and readable.
If you need your code to be fast and you are running Ruby, you already lost. Use Java or a compiled language instead.
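To put a rough number on why this micro-optimization is a waste of time, here's a throwaway benchmark sketch using the stdlib Benchmark module (the iteration count is arbitrary): whatever difference shows up, divided by a million iterations, is nanoseconds per assignment, versus milliseconds for a single database round-trip.

```ruby
require "benchmark"

N = 1_000_000  # arbitrary iteration count, just to make the timing visible

Benchmark.bm(20) do |bm|
  # Parallel assignment: both values assigned in one statement.
  bm.report("parallel assignment") { N.times { a, b = 1, 2 } }
  # Serial assignment: one variable at a time.
  bm.report("serial assignment")   { N.times { a = 1; b = 2 } }
end
```

Run it yourself if you're curious, but the point stands either way: any gap is invisible next to view rendering or a single SQL query.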
Actually it does make a practical difference but I would argue in favor of the electric razor.
Ever wonder why it itches in those first couple of weeks if you decide to grow your beard out?
Look at that picture.
Shaving with a straight-blade razor basically leaves a bunch of tiny little hypodermic needles behind. When those suckers start growing out, they curl back into your face and start jabbing you, resulting in that annoying itching sensation.
If you want to grow your beard out, take my advice and make your last shave with an electric razor, it's a lot more comfortable.
Just the user interface. It's a lot easier to fuck up during a rebase. MQ is a lot like git stash on steroids. Super easy to flit between multiple patches that I'm working on and put a change in the appropriate one. Wasn't uncommon for me to touch 3-4 patches in a random order over the course of a few minutes.
Let's say I'm working on patches 1, 2, and 3.
--
Situation: Currently on 3. I write some code and decide it should be in patch 1.
Git: Can't interactively rebase with uncommitted changes. Some sort of stash juggling.
MQ: Go down two patches. Commit the changes to the patch.
--
Situation: I'd like to introduce a new intermediate patch 1.5.
Git: Make a new commit on top of patch 1, then rebase the other patches on top of it. If there are potential merge conflicts, they need to be addressed immediately, or I'm stuck leaving an alternate history of commit 1 around until I'm ready to deal with them.
MQ: Go down two patches. Make a new patch. Deal with the merge errors whenever I go up/apply patches 2 and 3.
How is this different from the information stored in `git reflog`, which allows you to rollback to commits before their rebase? You can just as easily revert back to the old version, since it never goes anywhere.
Besides, the need to perfectly preserve history is totally overblown in most cases AFAICS, especially local history nobody else sees. I don't care if a person who submitted patches to me made 20 separate minor commits to fix trivial things during code review (e.g. "fix spelling", "fix 80-column violations", "rename this thing", "clean up code a bit and make it shorter re: code review"); those are superfluous, add no meaning to the actual work itself, and can be rebased/squashed away in almost all cases. If they submit 20 minor commits that are each independent of one another and isolated, that's another story.
The alternative seems to be 'have an ugly history littered with these commits' if "rewriting history" is as incredibly dangerous/terrible as is always implied (which it is not, because you can always recover from it with the reflog until you push). But I'd rather keep my project history clean and clear; a tidy history is just as important as tidy code IMO. FWIW, I think the OP's set of patches are clear and do not constitute an ugly history.
The actual way to 'stop rewriting history' is to disable --force pushes, which do unilaterally rewrite history for all downstream consumers. This is also true for Mercurial. Rebase does not do this, or anything close to it.
As someone who reads and writes a lot of patches, this is an exceedingly common workflow. How is Mercurial any better in this situation where I don't want all that useless information?
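To make the reflog point concrete, here's a throwaway demo (run in a temp directory, with placeholder identity settings): "destroy" a commit with a hard reset, then get it straight back because the reflog still remembers the old tip.

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m one
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m two
git reset -q --hard HEAD~1      # "destructively" drop commit two
git log --format=%s -1          # prints: one
git reset -q --hard 'HEAD@{1}'  # the reflog still points at the old tip
git log --format=%s -1          # prints: two - nothing was ever lost
```

The same recovery works after a botched rebase: the pre-rebase head sits in `git reflog` until it's garbage-collected, which by default takes weeks.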
> How is this different from the information stored in `git reflog`, which allows you to rollback to commits before their rebase? You can just as easily revert back to the old version, since it never goes anywhere.
Patch queues make the distinction between mutable WIP patches and finished commits explicit. Also, versioned patch queues make it safe to share WIP patches.