Fact? My team uses Atom, VS Code, IntelliJ, and vim-go for Go, and there has never been a problem with tabs vs spaces. On save, gofmt converts all indentation to tabs, and when I open Go files in vim-go it converts them to spaces according to my settings. Same for every other editor, whether you like tabs or 7-space indents.
Rust is not an alternative to Go. Go is significantly easier to learn because it's very simple. Crystal and Nim come close, but they lack the backing of a large tech company and their ecosystems are not as mature.
Google Compute Engine instances will always be more expensive than DO/Linode because GCP offers so much more. GCE instances will only be cheaper than DO droplets if you shut them down when you don't need them, since, unlike DO, you aren't charged for instances while they are off.
Is it really surprising that, in this person's experience, programmers who spend an additional 10 to 15 hours a week improving their skills outside whatever project or tech stack they are limited to at work turned out to be better?
It would be pretty incredible if spending a lot of free time on challenging personal projects made someone a worse developer or had no impact at all.
It's not the only way to be a good developer though, that's the idea I hate. It's about teaching yourself to be a good learner, which can be accomplished through a variety of hobbies that don't even need to be tech related. If programming at home is your jam, then more power to you. Saying that any developer who doesn't code on the side is, categorically, "sub-par" is ludicrous though.
I would imagine the best employees quit because they thought it was unfair to be paid the same, or almost the same, as people who know and contribute a lot less. It would have been better to just pay everyone above market rate by some percentage, but that would not make for a sexy headline.
The best employees wouldn't give a shit what someone else is getting paid as long as they are being fairly compensated. You can always ask for more money, if you're as good as you claim you are you will definitely get it. I'm going to go with my gut and say this is just crabby, bitter griping from a few ex-employees.
I imagine you're correct about why they quit. I don't see why that has anything to do with what I wrote, any more than I understand the dig at companies seeking publicity. (What could possibly be next? A pop star doing drugs?)
curl|bash is no less secure than any script you download and run as a regular user. When you add a third party apt repository and install a deb package do you always verify that the postinst script doesn't do anything malicious?
The only mistake I see in this case is doing this over plain HTTP. Let's Encrypt is free and there is no excuse for not enforcing HTTPS for this.
It's actually possible to detect the |bash part server side (for example, by timing how fast the client consumes the response, since a shell executing the stream as it arrives stalls the download) and send different content than if you were simply curling, wgetting, or viewing in a browser.
Not only does this mean that you could end up with a compromised system, but it also means that there's no artefact of what caused it left on disk.
I agree with your point that running third-party software is always a risk. The problem here is that you can think you've done your due diligence by reading the curl output first and then doing curl|bash, when in actuality the second download may not match what you read. That is what makes curl|bash such an insidious bad habit.
On Arch, people installing from the AUR are told to (and generally will) inspect the install script(s) before building. It's part of the workflow for all AUR helpers.
> When you add a third party apt repository and install a deb package do you always verify that the postinst script doesn't do anything malicious?
Package managers have decades of work put into them. Not just the installation and verification aspects, but all of the maintenance bureaucracy required so that there is accountability and verification. As much as developers might like to think that we know more about our users than anyone else, we don't. And there's essentially no verification, or reason to believe, that an upstream curl|sh will work on a given distribution. People who package software are usually part of the distro community they're packaging for, and are much better at knowing how software should be packaged for that community.
Your parent comment was referring to third-party repositories, and those are not subject to strict packaging policies and may contain anything. So the comparison is mostly correct: it's like downloading a random .deb file off the web (only worse, because a trusted malicious repo can override any system package with an "upgrade" - AFAIK there are no package-level signatures in dpkg).
However, I'm not sure about the implied conclusions (or my perception of them). I believe the correct answer is that adding untrusted repositories is also dangerous and should be done with caution.
And, yes, when I add external repos, I do a quick background check on who runs each one and how popular (= trusted by others) it is, and depending on my conclusions about its trustworthiness, I audit the package contents or do a test run in a VM. Others' mileage may vary.
Security issues aside, there's also the concern of what happens if the piped-into `bash` receives an incomplete script (usually because `curl` was interrupted, often because of poor network connectivity). `bash` will attempt to execute whatever it already got from `curl`, which can cause all sorts of problems.
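One common mitigation (assuming you control the script being served, and purely as a sketch) is to define everything inside a function and call it only on the very last line. If the download is cut off anywhere earlier, bash sees an unterminated function definition, reports a syntax error, and executes nothing:

```shell
#!/bin/sh
# Build a toy install script that uses the wrapper pattern. The file
# names and "install step" are made up for illustration.
cat > install.sh <<'EOF'
main() {
    echo "pretend install step 1"
    touch installed.marker
}
main "$@"
EOF

# Simulate an interrupted download: keep only the first 40 bytes,
# which ends mid-way through the function body.
head -c 40 install.sh > truncated.sh

# The truncated script is a syntax error; nothing inside it runs,
# so installed.marker is NOT created here.
bash truncated.sh 2>/dev/null || true

# The complete script runs normally and creates installed.marker.
bash install.sh
```

Without the wrapper, a truncated `rm -rf /some/long/path` could execute as a much shorter, much more destructive command.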
How come there are as many women (47) answering this survey as trans people? Is there anything in the Rust community that is hostile to women but not trans people?
In fact, trans people appear to be over-represented in this survey: 1.7% of respondents were trans, compared to an optimistic estimate of 0.3% of the US being transgender. [0]
Hostile may be jumping to conclusions. It's a small number, and it's a self-selected sample; a small change in any of the numbers will result in a big change in the ratio. Unless someone actually reports hostile behaviour, why expect it?
Either way, the trans population is very much over-represented in the survey. You took the one source from the link that claims 0.3%. The next source, which I would trust (the DSM-5 stats), claims 0.005-0.014% for natal males and 0.002-0.003% for natal females for gender dysphoria.
The reason I said hostile is that in most discussions on Hacker News and Reddit a lot of people argue that the only reason women are so under-represented in tech is that all the guys are very hostile to women. This survey goes against that. For some reason they refuse to believe that some things are more interesting to women than men and vice versa.
> For some reason they refuse to believe that some things are more interesting to women than men and vice versa.
It could be that this is just as wrong as "hostility is the only problem". Not everybody sees the same barriers. Not everyone faces the same challenges. We don't even know what is genuinely more interesting to whom, because we're bombarded with ideas of what should be interesting from a very young age. Software engineers I know successfully raise girl geeks, for example.
In different cultures (e.g. Russia), and different time periods (e.g. the 80's), the level of interest and participation from women is/was much, much higher. I would say that's a pretty objective reason for believing that it's cultural and not biological.
What I find hard to believe is that some people don't think that women get treated differently than men, or that that wouldn't matter when it comes to people choosing to participate in certain communities.
Can we avoid the overhead of starting and shutting down processes by running a single Python process and communicating with it using something like gRPC [0] (or even JSON-RPC for maximum simplicity)?
How do web frameworks like Flask handle multiple concurrent requests? Would performance increase if we started multiple instances of this Python web server on the same machine and load balanced them? The code would be much simpler if there was no need to handle process management.
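To make the question concrete, here is a minimal sketch (not from any particular framework) of that long-lived-process idea: one Python worker speaking line-delimited JSON-RPC over stdin/stdout. The "add" and "upper" methods are made-up examples.

```python
# Minimal long-lived worker: one JSON-RPC request per line on stdin,
# one response per line on stdout. Callers pay interpreter startup
# cost once, not once per call.
import json
import sys

# Hypothetical methods exposed by this worker.
HANDLERS = {
    "add": lambda p: p["a"] + p["b"],
    "upper": lambda p: p["text"].upper(),
}

def handle(line):
    """Turn one JSON-RPC request line into a response dict."""
    req = json.loads(line)
    try:
        result = HANDLERS[req["method"]](req.get("params", {}))
        return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}
    except Exception as exc:
        # Report the error instead of letting one bad request kill the worker.
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32603, "message": str(exc)}}

def serve():
    # Loop over stdin until the peer closes the pipe.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(line)), flush=True)
```

A caller would keep the worker's stdin/stdout pipes open for its whole lifetime; swapping the line framing for gRPC changes the transport but not the shape of the loop.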
> Can we avoid the overhead of starting and shutting down processes by running a single Python process and communicating using something like grpc
Actually, that's how ErlPort works: you start a Python process and then call functions inside it. How many calls you route through a single Python process is up to you; you can spawn a new worker for every call, or spawn just one and stick with it unless it's killed somehow.
You can also register callbacks on either side (Elixir or Python), and with a bit more effort you can make the Python process accept normal Elixir messages and answer in the same way.
> How do web frameworks like Flask handle multiple concurrent requests?
In my experience, they mostly rely on uWSGI and a pool of processes...
The whole point here is that we need parallel processing, and Python cannot provide it within a single process because of the GIL. Flask applications depend on the workload being I/O bound, so they can achieve concurrency where parallelism is not required. If you built a CPU-bound endpoint in pure Python under Flask, you'd find it could not achieve much concurrency at all.
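A small self-contained sketch of that distinction: the same pure-Python CPU-bound function dispatched to a thread pool (serialized by the GIL) versus a process pool (true parallelism via separate interpreters). The function and sizes are arbitrary.

```python
# Threads share one GIL; processes each get their own interpreter,
# so only the process pool parallelizes pure-Python CPU work.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def busy_sum(n):
    # Pure-Python loop: holds the GIL for its entire duration.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run(pool_cls, n=200_000, workers=4):
    """Run `workers` copies of busy_sum in the given pool; return results and wall time."""
    start = time.perf_counter()
    with pool_cls(max_workers=workers) as pool:
        results = list(pool.map(busy_sum, [n] * workers))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    thread_res, thread_time = run(ThreadPoolExecutor)
    proc_res, proc_time = run(ProcessPoolExecutor)
    assert thread_res == proc_res
    # On a multi-core machine the process pool typically finishes in a
    # fraction of the thread pool's wall time; with an I/O-bound
    # function instead, the gap largely disappears.
```

This is the same trade-off uWSGI's process pool is making for Flask, just one level up the stack.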