DEI is about what you "are", and reduces what you do/did/can/... to stuff that was decided when you were born. In a perfect "DEI" world, your status would be decided the moment you are conceived and would be utterly unchangeable. It's fundamentally a status game. It's about "blood", what blood, skin, hair, ... you're born with.
DEI has advanced in society to the point that, ironically, it's not tolerant of other viewpoints. And the ideology doesn't see this as a problem; in fact, like all zero-merit ideologies (meaning they don't value individuals at all, and certainly not on personal achievement), its adherents are actively proud of being bullies. Except this is a bullies' club where entry is based on your genes.
DEI as "let's make a meritocracy more meritocratic" is great. Fantastic even. And it certainly was a necessity. DEI appears to not be that, anymore.
And all this is setting aside that DEI is at least partially a cover-up for dumb government spending cuts. Or that it's failing: the advancement of DEI is certainly not making the poor better off.
Specifically: here's an article where Rachel Dolezal clearly states she would like to identify as black, to escape her parents' upbringing. She was accused of being a "race faker", which to me indicates, less-than-subtly, that changing your race is not accepted by DEI groups.
(Ironically the "reason" this is unfair strikes me as paticularly ironic. She claims to take on a black racial identity to escape the suffering her family inflicted on her. This suffering appears real, and there's certainly enough elements to indicate it's an ongoing issue in her life even now. According to the "African American community", this is unfair, because they suffered centuries ago, but not personally)
Isn't IQ both highly inherited and highly correlated with achievement? And if both those are true then isn't the idea of meritocracy ultimately mostly still down to luck of birth?
This is one of those things that we can't really talk about anymore. There are racist connotations, ironically to both sides of the argument.
To say it has a strong genetic component could be construed to mean certain races have IQ differences.
To say it does not have a genetic component can be construed as saying that current members of races are largely responsible for their own IQ, which denies the effects of unequal treatment of races.
As for science, there's a problem with heritability. Do you mean genes? Or do you mean "average" attitudes to the intelligence and education of children, which can correlate with race and location? Or do you mean environment in other ways (such as money and available resources, divorce status of parents, ... which again correlates with race)?
Genes alone (from studies of twins raised apart) explain about half the variability in IQ (which translates to about 4 IQ points). The environment you had as a child explains the rest.
DEI isn’t only focused on race. And even if it were, if poverty is statistically more prevalent among one race than another, that is a problem.
> Black and Hispanics make up just 14 percent of students admitted from outside the automatic threshold, even though they make up 60 percent of Texas high school graduates. Meanwhile, white and Asian students make up 73 percent of non-automatically admitted students, while they make up 39 percent of Texas high school graduates overall.
How do they legally ban DEI offices? Aren’t universities basically businesses in the US? (Not an American.) Even if not, aren’t they allowed to do as they see fit?
Really surprised the government has the ability to do this, but I’m not super well educated here.
IANAL, but US states tend to have broad powers while the federal government is specifically limited by the constitution on what they can do. The supreme court could later find that these states banning DEI is illegal per the constitution, but for now the states typically have the authority to do as they please as long as it's not something already found to be going against the constitutional clauses that grant citizens rights.
That reads a bit confusingly, but basically the constitution exists to give citizens rights and to specifically limit the federal government. States can do more than the federal government, but they can't trample on the rights of citizens.
Also, "public universities" (the ones this affects) in the US are state-funded. If parent was from the UK, believe there's some public/private terminology difference.
But wouldn’t the blockchain give a decentralised record of ownership that was protected from corruption? If a single entity owns this database, how does it fare against attacks, bad actors, or highly capitalist companies like EA going in and editing it in their own interest?
You need a truly good organisation to host that single DB and I’m not sure I trust any of them at the moment.
(Also, I have only a cursory knowledge of blockchain technology, so please correct me if I’m off.)
Let’s say such a blockchain did exist. You buy Battlefield or some other game, years pass and EA bans your account. You want to download the game again. How are you going to do that? Sure, the blockchain is public, but EA can just maintain their own list of people to ban.
Okay, so maybe we pass a law that says that if you have a record on that blockchain companies must let you access your digital goods. You take EA to court for banning you despite having purchased the game. What’s the advantage over a database run by the government? The court trusts the government database, since they are the government, and you and EA have no choice but to listen to the court — they’re the ultimate trusted third party.
The issue is that you need to talk to EA's servers to play the game, or to wherever the movie is hosted, etc., and at that point they can choose to accept whatever token you have. Or they can reject it. So unless the content is also decentralized you are not better off.
An easy example: NFT tokens are decentralized, but the servers hosting the actual pictures are not. So you own the token, but the picture itself can go missing.
After the Ethereum DAO exploit, the result was two blockchains: Ethereum Classic and Ethereum. Was that better protection against database manipulation than the existing legal system provides?
Yes, it’s much more democratic than the status quo in traditional online payment and digital asset systems. Users forked and had the choice to follow the DAO-revert chain. Another hard fork happened recently: PoS. There is still a PoW chain, for the small minority that wants to use it.
We have the equivalent of HOAs in Norway, I think, for large buildings. You usually buy, not rent, an apartment, and the building has a board, or group of residents that manages the property, carries out infrastructure work (sewer and water plumbing, facade repairs, common area maintenance).
Usually when you buy the apartment you’re made aware of the outstanding balance on the building’s maintenance fund and everyone pays an equal share of this each month.
Sometimes they get a little picky on things but never to the craziness that I see in the US. Mostly it’s handled with common sense and the lives (and finances) of the residents in mind.
Most of these situations in the US are handled with common sense to the benefit of the residents. It's only the crazy overstepping situations you hear about because "community continues to exist peacefully" doesn't make the news.
It’s essentially the same in the US for a large, multi-tenant building. Though in the US an apartment typically refers to a unit in a building with a single owner that rents the units and a condominium is a building where all the units are individually owned. In the latter case most states and/or municipalities require a condo association, much as you described, to jointly maintain the common elements of the building. Whereas in the case of an apartment building it’s much simpler as the owner of the building is responsible for all maintenance.
> Two, they look more like models than software engineers.

Complete bullshit.
Can I push back against this? It’s toxic as fuck. Being a woman in tech blows. You can’t be pretty or you’re considered incompetent and shallow. You can’t be ugly or you’re treated poorly for that.
It’s a lose-lose.
What does a software engineer look like to you, then?
Despite you getting some bad feedback on this comment, I think you're right and I bumped on that phrase too.
This reminds me of a billboard campaign to hire developers that ran a few years ago, and featured a few SEs, including one woman considered attractive. A lot of people said things along the lines of "this isn't what a real SE looks like", along with many... less nice things. Of course, she actually was a full stack engineer at that company. This prompted her to start a campaign (#ILookLikeAnEngineer). For more details read her blog: https://medium.com/the-coffeelicious/you-may-have-seen-my-fa...
Note to some commenters: I'm not saying that you can't tell a fake profile from a not fake profile. I'm saying that a sentence like "they look more like models than software engineers" is implicitly hinting that software engineers are not attractive females. That's the objection that I believe teux was raising, and it's a valid one.
You and GP both need to take a step back and consider the context, since both of you seem to conveniently forget one very important thing: why would a software developer, just another worker bee, get harassed on LinkedIn? Now multiply that by however likely you think it is that a given software developer is a gorgeous woman. Maybe now it makes sense why people are skeptical.
With your line of reasoning, people would be defending phishing emails because "some legitimate representatives have bad grammar skills". Think of how ludicrous that would be in a discussion of phishing.
I'm specifically saying that a phrase like "they don't look like software developers", used just because they're good-looking women, is in itself a bad (and false!) meme to circulate, regardless of context.
Yeah. That’s all I was trying to say. I’m not speaking about anything related to shitty recruiters, how hard it is to be a man online (??), or how much it sucks to be the target of sex-based scams.
Just. Saying it sucks to be a woman in tech and statements and sentiments like this are part of the reason why.
Still no one answered my question: “What does a software engineer look like then?”
I think it was a combination of me being not explicit enough and touching on an obviously sore subject for others. But I don’t have the energy to unravel it or debate.
The reason that I ask is that your account is pretty new, and this seems a little bit like stirring up discord on social media.
Regardless. There is a difference between a beautiful woman who can code (these are actually getting more and more prevalent) and the trashy-looking profile picture that all of us instantly know is some kind of spammer or scammer in our DMs or our inbox.
The fact that they're portrayed as women has more to do with the loneliness of many men in tech than anything else, and pretending that this type of behaviour is non-existent is childish when it's clear that places like LinkedIn and Twitter could do a lot more to detect and prevent it.
Well I'll take you at your word then. Be aware that new accounts or accounts without much comment history are treated with more suspicion on HN when they discuss hot button issues like gender or politics.
> It’s toxic as fuck. Being a woman in tech blows. You can’t be pretty or you’re considered incompetent and shallow. You can’t be ugly or you’re treated poorly for that.
One of the best programmers I've ever worked with was a woman. Former Google. I'd prefer not to discuss how attractive she is because it doesn't matter to me. I'm sorry you're having a rough time in Scandinavia and, from your comment history, formerly the USA. I've worked with non-aggressive teams from different places, but I've also observed women being mistreated in both Canada and the USA as well.
That said, I've seen ASD men mistreated as well. Unfortunately there are jerks out there and we can all try to do better.
Can I push back on that? As a guy, a girl texting you randomly on any platform is almost always a bot, scammer, <insert something other than what it appears to be>.
I completely agree with you: a software engineer looks like a person who writes software. But that doesn't change the online experience I've had as a man, which is, namely: if a girl texts you, she's not a girl, and she certainly isn't your friend.
The average woman is fairly well treated and not at all like the average LinkedIn stalker, let alone a model. You're trying to bridge two related but different contexts.
OK, but do you really need to look like a model on LinkedIn? You can be pretty, but is being ultra sexy really that important there?
He didn't say they were random girls like we meet every day, he said models. When men contact me on LinkedIn for insurance products and they look like Calvin Klein underwear models, I'm like "hmm" too.
There are some tutorials, but honestly the best thing is to just use them.
Write an image processing routine that does something like applying a Gaussian blur to a black-and-white image. The C++ code for this is everywhere. You have a fixed kernel (a 2D matrix) and you have to do repeated multiplication and addition on each pixel for each element in the kernel.
Write it in C++ or Rust. Then read the Arm SIMD manual, find the instructions that do the math you want, and switch it over to intrinsics. You are doing the same exact operations with the intrinsics as in the raw C++, just 8 or 16 of them at a time.
Run them side by side for parity and to check speed, tweak the simd, etc.
* Edit: you also have to do this on a supported architecture. Raspberry Pis have a NEON core, at least the 3s do. Not sure about the 4s, but I believe they do too!
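To make that concrete, here's a minimal sketch of the kind of conversion described above, using a simpler operation (a rounding average of two greyscale images) rather than a full blur kernel. It assumes an ARM target with NEON and <arm_neon.h>; the function names are made up for illustration:

    #include <arm_neon.h>
    #include <cstdint>
    #include <cstddef>

    // Scalar baseline: rounding average of two 8-bit greyscale images.
    void average_scalar(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n) {
        for (size_t i = 0; i < n; ++i)
            out[i] = (uint8_t)((a[i] + b[i] + 1) / 2);
    }

    // NEON version: the same operation, 16 pixels per iteration.
    void average_neon(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n) {
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            uint8x16_t va = vld1q_u8(a + i);        // load 16 bytes
            uint8x16_t vb = vld1q_u8(b + i);
            vst1q_u8(out + i, vrhaddq_u8(va, vb));  // rounding halving add
        }
        for (; i < n; ++i)                          // scalar tail for leftovers
            out[i] = (uint8_t)((a[i] + b[i] + 1) / 2);
    }

The scalar tail handles images whose size isn't a multiple of 16; running both versions over the same buffers is exactly the parity check described above.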
Intel's Intrinsics Guide is exactly what I used and it was before I learned about Compiler Explorer.
I already had a large and thorough suite of unit tests for inputs and expected outputs, including happy and sad paths. So it was pretty easy to poke around and learn what works, what doesn't.
It was definitely time intensive (took about three months for about 50 lines of code) but it also saved the company a few million dollars in hardware (DNA analysis software to compare a couple TiB of data requires a _lot_ of performance). I have since moved to a different company, partly because I never saw a bonus for saving all that money.
The intrinsics guide does a good job of showing what's available, but it does not do a good job of documenting how each instruction actually works: many intrinsics are missing pseudocode, and some pseudocode has ambiguous cases. When something went awry, I used GDB in assembly mode to compare that table against the register contents instruction by instruction to figure out where I'd misunderstood something.
Frustratingly, some operations are available in 64-bits but not bigger, some in 128-bits but not bigger, etc. So I wrote up a rough draft in LibreOffice Calc with 64, 128, and 256 columns to follow the bits around every intended operation. I then correlated against the intrinsics guide to determine what instructions are available to me in what bit sizes. For a given test run, each row in the spreadsheet was colored by what the original data contained, another row for what I needed the answer to be for that test case, then auto-color another row's cell green or red if the register after a candidate set of instructions did or didn't match the desired output. Any time I had to move columns around (the data was 4-bits wide), I'd color a set of 4 columns to follow where they go during swizzling.
Thanks. It would be simpler if I were working on only one platform that I knew supported a specific set of instructions. But, even though the code does involve some convolutions and things that are a good fit for SIMD, it needs to be cross-platform, so that the intrinsics should compile to SSE/AVX on Intel and NEON (?) on ARM where possible, but to something slow but workable on older chips. Delineating and illustrating using the most cross-platform intrinsics is what I'm looking for guidance on.
Bad news: for SIMD there are no cross-platform intrinsics. Intel intrinsics map directly to SSE/AVX instructions and ARM intrinsics map directly to NEON instructions.
I beg to differ :) std::experimental::simd has a very limited set of operations: mostly just math, very few shuffles/swizzles. Last I checked, it also only worked in a recent version of GCC.
We do indeed have cross-platform intrinsics here: github.com/google/highway. Disclosure: I am the main author.
Not really, unfortunately, and it’s a pre-existing framework for teaching a class, so simplicity of compilation is extra important. Also if I try to isolate the SIMD bits in C++ I’ll lose the opportunity to have them be inlined which will defeat the optimization purpose.
> Also if I try to isolate the SIMD bits in C++ I’ll lose the opportunity to have them be inlined which will defeat the optimization purpose.
Agreed. Usually the interface would be something like RunEntireAlgorithm(), not DotProduct().
> For those that are new to this, can you give an example of a kind of computation or algorithm which is well-served by your project but not possible with vector extensions
Sure. Vector extensions are OK-ish for simple math, but JPEG XL includes nontrivial cross-lane operations such as transpose and boundary handling for convolution.
__builtin_shufflevector requires a known vector length, and can be pessimized (fusing two into one general all-to-all permute which is more expensive than two simple shuffles).
Also, vqsort (https://github.com/google/highway/tree/master/hwy/contrib/so...) almost entirely consists of operations not supported by the extensions, and actually works out of the box on variable-length RISC-V and SVE, which compiler extensions cannot.
Just a heads up, as far as I know that’s more of a porting/learning tool than a production tool.
I remember us looking deeply into this and decided to hand write the SSE intrinsics. They usually map 1:1 but we had some unexpected differences in algorithm output between the x86 binary and the ARM binary when compiled with this.
But this was also back in 2019 or so, maybe it’s better now!
I often hand-write NEON (and other vector-architecture) intrinsics/assembly for my job, optimising image and signal processing routines. We have seen many, many three-digit-percentage speedups over bare C/C++ code.
I got into the nastiest discussion on reddit where people were swearing up and down it was impossible to beat the compiler, and handwritten assembly was useless/pretentious/dangerous. I was downvoted massively. Sigh.
Anyways, that was a year ago. Thanks for another point of validation for that. It clearly didn’t hurt my feelings. :)
I never come across people in the wild who actually do this either; it's such a niche area of expertise.
It also annoys me a bit, the things JIT people write in their GitHub READMEs about the incredible theoretical improvements that can happen at runtime, when it's never anywhere close to AOT compilation. Then you can add 2-3x on top of that for hand-written assembly.
I do wonder what's going on with projects like BOLT, though. I've seen it was merged into LLVM, and I have tried to use it, but the improvement was never more than 7%. I feel like it has a lot of potential because it does try to take runtime behaviour into account.
> BTW 7% is huge, odd that you would describe it as "only".
It depends on what you're doing and how optimized the baseline performance is. In my area (CRDTs) the baseline performance is terrible for a lot of these algorithms. Over about 18 months of work I've managed to improve on automerge's 2021 performance of ~5 minutes / 800MB of ram for one specific benchmark down to 4ms / 2MB. Thats 75000x faster. (Yjs in comparison takes ~1 second.)
Almost all of the performance improvement came from using more appropriate data structures and optimizing the fast-path. I couldn't find an off-the-shelf b-tree or skip list which did what I needed here. I ended up hand coding a b-tree for sequence data which run-length encodes items internally, and knows how to split and merge nodes when inserts happen. CRDTs also have a lot of fiddly computations when concurrent changes edit the same data, but users don't do that much in practice. Coding optimized fast paths for the 99% case got me another 10x performance improvement or so.
I'd take another 7% performance improvement on top of where I am, but this code is probably fast enough. I hear you that 7% is huge sometimes, and a smarter compiler is a better compiler. But 7% is a drop in the bucket for my work.
7% is in the ballpark of the speedup most programs get from changing the allocator not to give almost every allocation the same huge alignment, and around half the speedup most programs get from using explicit huge pages. These changes are both a lot easier, but e.g. Microsoft doesn't think it's worthwhile to let developers make the latter change at all, over 26 years after the feature shipped in consumer Intel CPUs.
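For reference, on Linux the explicit-huge-page version of this looks roughly like the sketch below. MAP_HUGETLB is a real Linux flag, but it only succeeds if huge pages have been reserved (e.g. via /proc/sys/vm/nr_hugepages), so a fallback is needed:

    #include <sys/mman.h>
    #include <cstddef>

    // Allocate a large buffer, preferring explicit huge pages (Linux-specific).
    void* alloc_buffer(size_t bytes) {
        void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            // Fall back to ordinary pages if no huge pages are reserved.
            p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        }
        return p == MAP_FAILED ? nullptr : p;
    }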
Statistics alone are worthless; in the end, all that counts is the arena of performance: what the code becomes and how it runs against the handcrafted version.
Godbolt doesn’t accurately show runtime speed of algorithms on input data, which is what you need when discussing simd performance. And often these are proprietary industry algorithms that are the core of a business’s model.
I’m all for transparency but I’m also not about to get fired for posting our kernel convolution routines, or least squares fit model.
> It should be part of these discussions to proof what you claim
Further - these aren’t subjective claims that need to be proven on a forum for legitimacy. It’s the literal state of vector based optimisations in the compiler world right now. It is a hard problem and for the time being humans are much better at it. This is quite a large area of academic research at the moment.
If someone is so uninformed of this domain that they don’t know this, the burden is on that person to learn what the industry is talking about. Not the people discussing the objective state of the industry.
Godbolt takes practice to read. Often, people who are incapable of understanding when you can beat a compiler can't be shown a Godbolt snippet in good faith either.
This is always deeply frustrating. You quickly get the sense that the person you're talking to hasn't experienced anything beyond simple float loops that are trivial for the compiler to autovectorize, or really bad examples of hand vectorization.
In the meantime, I constantly encounter algorithms that compilers fail to vectorize because even single vector instructions, such as saturating integer adds, are too complex for the compiler to match. The compiler fails to autovectorize and the difference in performance is >5x. Even for something as simple as summing unsigned bytes, all three major compilers generate vector code that's much slower than a simple loop leveraging absolute-difference instructions.
That's even before running into the more complex operations that would require the compiler to match half a dozen lines of code, like ARM's Signed Saturating Rounding Doubling Multiply Accumulate returning High Half:
https://developer.arm.com/architectures/instruction-sets/int...
Or cases where the compiler is not _allowed_ to apply vector optimizations by itself, because changes to data structures are required.
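To make the saturating-add example concrete, here's roughly the scalar pattern compilers struggle to match, next to the single NEON intrinsic that does it directly (a sketch; assumes <arm_neon.h>, function names made up):

    #include <arm_neon.h>
    #include <cstdint>
    #include <cstddef>

    // Scalar: add with clamping at 255. This compare/select pattern is
    // what compilers often fail to recognise as a saturating add.
    void sat_add_scalar(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            unsigned s = a[i] + b[i];
            out[i] = s > 255 ? 255 : (uint8_t)s;
        }
    }

    // NEON: vqaddq_u8 saturates 16 bytes in one instruction.
    void sat_add_neon(const uint8_t* a, const uint8_t* b, uint8_t* out, size_t n) {
        size_t i = 0;
        for (; i + 16 <= n; i += 16)
            vst1q_u8(out + i, vqaddq_u8(vld1q_u8(a + i), vld1q_u8(b + i)));
        for (; i < n; ++i) {                 // scalar tail
            unsigned s = a[i] + b[i];
            out[i] = s > 255 ? 255 : (uint8_t)s;
        }
    }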
It is that! As I stated in another comment, this niche ended up saving the company literally millions of dollars in hardware costs. To be fair, it really should only be done in highly performance-critical situations and only after an initial implementation is already written in pure C++ with thorough unit testing for _all_ input/output cases.
The company I wrote the hand-optimized code for does DNA analysis. I worked there mostly because I needed a job; DNA analysis certainly isn't within my passion domain.
Now I work for a company writing software to control drones. It doesn't pay as much as I want, but it's at least fun and there's a ton of unsolved automation problems at the company. And we're hiring like crazy: from 6 people to 300+ in the past two years, and there's no sign of slowing down. And I'm now a manager too... that's the really scary part lol
> I got into the nastiest discussion on reddit where people were swearing up and down it was impossible to beat the compiler, and handwritten assembly was useless/pretentious/dangerous.
It _should_ be useless (for some reasonable definition of "should") — it just isn't in practice. And I'm continually amazed at how often people confuse one for the other, across all contexts. E.g. I have family members who refuse to consider that our justice system might have deep flaws, because to their mind if it should be some other way then it already would be.
Isn't compiler optimization NP-complete? I don't think I'd put anything there in "should". Yeah, any single optimization (or permutation thereof) can be applied, but they're order-dependent, and the combinatorial explosion means you can't try to apply all of them.
Or any other highly optimised numerical codebase. From a quick glance at OpenBLAS, it looks like they have a lot of microarchitecture-specific assembly code, with dispatching code to pick out the appropriate implementations.
For debugging you can actually use gdb in assembly tui mode and step through the instructions! You can even get it hooked up in vs code and remote debug an embedded target using the full IDE. Full register view, watch registers for changes, breakpoints, step instruction to instruction.
Pipelining and optimisations can make the intrinsics a bit fucky though, have to make sure it’s -O0 and a proper debug compilation.
I have line by line debugged raw assembly many times. It’s just a pain to initially set up. Honestly not very different from c/c++ debugging once running.
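For anyone who hasn't tried it, a typical session looks something like this (all standard gdb commands; the binary and function names are made up):

    $ gdb ./kernel_test
    (gdb) layout asm           # TUI assembly view
    (gdb) layout regs          # live register view, updated as you step
    (gdb) break my_simd_func   # stop at the hand-written routine
    (gdb) run
    (gdb) stepi                # step a single instruction
    (gdb) info registers       # dump register contents (v0, v1, ... on AArch64)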
Sure, but gdb doesn't know what the function parameters are, or on some platforms where functions start and end, crashes don't have source lines, and ASan doesn't work. (though of course valgrind does)
If you are handwriting the function in assembly, you'll know what registers hold the function parameters, what types of values they are supposed to be, and with care, you can produce debug information and CFI directives to allow for stack unwinding, it's just annoying to do - but that's just the tradeoff you make for the performance improvement I suppose.
I don’t know if this is frowned upon or not among assembly programmers, but I often just use naked functions in C with asm bodies, which gdb will provide the args for, rather than linking against a separate assembly file.
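Something like this, for AArch64 with Clang (a sketch; GCC's support for the naked attribute varies by target, and a naked body may contain only asm):

    #include <cstdint>

    // Naked: no compiler-generated prologue/epilogue, the body is all ours.
    // Per the AAPCS64 calling convention the arguments arrive in x0 and x1,
    // and x0 holds the return value; the parameters exist only so gdb and
    // callers see a normal C signature.
    __attribute__((naked)) uint64_t add_u64(uint64_t a, uint64_t b) {
        __asm__ volatile(
            "add x0, x0, x1\n"
            "ret\n");
    }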
If you write your assembly to look like C code GDB is more than happy to provide you with much of that to the extent that it can. In particular, it will identify functions and source mappings from debug symbols.
ffmpeg might have amazingly efficient inner loops (i.e. low-level decoding/encoding), but the broader architecture (e.g. memory buffer implementations, etc) is quite inefficient. Like the low-level media code it's not that each component itself is inefficient, it's that the interfaces and control flow semantics between them obstruct both compiler and architectural optimizations.
When I wrote a transcoding multimedia server I ended up writing my own framework and simply pulling in the low-level decoders/encoders, most of which are maintained as separate libraries. I ended up being able to push at least an order of magnitude more streams through the server than if I had used ffmpeg (more specifically, libavcodec) itself, even though I still effectively ended up with an abstraction layer intermediating encoder and format types. And I never wrote a single line of assembly.
There's no secret sauce to optimization: it's not about using assembly, fancier data structures, etc; it's learning to identify impedance mismatches, and those exist up and down the stack. Sometimes a "dumber" data structure or algorithm can create opportunities (for the developer, for the compiler) for more harmonious data and code flow. And impedance mismatches sometimes exist beyond the code--e.g. mismatch between functionality and technical capabilities, where your best might be to redefine the problem, which can often be done without significantly changing how users experience the end product.
> most of which are maintained as separate libraries
This is so confusing I can’t tell if you’re actually talking about libavcodec. The whole point is to combine codecs to share common code, “most” decoders certainly aren’t available elsewhere.
If you just want to call libx264 directly go ahead and do that of course. libx264 uses assembly just as much or more than libavcodec though.
I have a lot of sympathy for wanting efficient code. But let's indeed have a look:
https://github.com/FFmpeg/FFmpeg/blob/7bbad32d5ab69cb52bc92a...
There are so many macros, %if directives, and so much clutter here that it's difficult (for me?) to keep the big picture in mind.
This reminds me of a retrospective of an OS/window manager written in assembly - they were great about avoiding tiny overheads, but expressed regret that the whole system ended up slow because it was hard to reason about bigger things such as how often to redraw everything, similar to what people are saying here.
To be clear: let's indeed optimize and vectorize, but better to build on intrinsics than go all the way down to assembly.
There are too many different assembly flavours: inline, MASM, NASM, FASM, YASM. They come with their own unique quirks, and they complicate the build.
Intrinsics are more portable. It's trivial to re-compile legacy SSE intrinsics into AVX1. You won't automatically get 32-byte vectors this way, but you will get VEX encoding, broadcasts for _mm_set1_something, and more.
Readability depends on the code style. When you write intrinsics using "assembly with types" style, actual assembly is indeed more readable. OTOH, with C++ it's possible to make intrinsics way better than assembly: arithmetic operators instead of vaddpd/vsubpd/vmulpd/vdivpd, strongly-typed classes wrapping low-level vectors for specific use cases, etc.
Update: most real-life functions contain scalar code (like loops) and auto-generated code (stack frame setup, backup/restore of non-volatile registers). When coding non-inline assembly, the developer needs to do all of that manually, which can be hard and may cause bugs like these: https://github.com/openssl/openssl/issues/12328 and https://news.ycombinator.com/item?id=33705209
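To illustrate the "intrinsics with operators" point above, a thin strongly-typed wrapper might look like this (a sketch using SSE intrinsics; the class and helper names are illustrative):

    #include <immintrin.h>

    // Strongly-typed wrapper over a 128-bit vector of 4 floats.
    struct Vec4f {
        __m128 v;
        explicit Vec4f(__m128 x) : v(x) {}
        explicit Vec4f(float x) : v(_mm_set1_ps(x)) {}  // broadcast
    };

    inline Vec4f operator+(Vec4f a, Vec4f b) { return Vec4f(_mm_add_ps(a.v, b.v)); }
    inline Vec4f operator*(Vec4f a, Vec4f b) { return Vec4f(_mm_mul_ps(a.v, b.v)); }

    // Usage: reads like scalar math, compiles to vector instructions.
    inline Vec4f mul_add(Vec4f a, Vec4f b, Vec4f c) { return a * b + c; }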
FFmpeg's code is god-awful. A lot of it dates from around 2002 and was written without regard to any sort of "sanity". People who write assembly routines these days give their code structure, and if they overrun buffers or whatever, they document the alignment assumptions they're making. FFmpeg will just start patching its own code at runtime because someone thought that was a good idea on Pentium processors.
I think when people measure speedups here they deduct the first 100%: e.g. if I used to process 10 items per second and can now process 20, I run at 200% of the original speed, but it's a 100% speedup.
To avoid the various ambiguities here, I learned to express speedups as a ratio of the optimized throughput, divided by the baseline throughput. Or equivalently: the baseline time divided by optimized time.
4400 MB/s vs 800 MB/s = 5.5x speedup or 5.5 times as fast.
Can I ask if you know how average costs in Scandinavia hold up to that? Working in Norway atm making ~90k€ equivalent after some time, but I took the paycut from the US because of quality of life/healthcare/state pension, etc. Still, it’s a very good salary here, I’m in the top few % of all Norwegians.
But it was a huge paycut. I’m definitely happier than I was in the US but I have been considering moving a bit further south. Bumping up to 120k and getting more sunshine would be very nice.
I think Sweden might be the worst in all of Europe, though. €90k is unheard of here for developers, unless you're managing a team or a middle manager or something. Well, OK, maybe it's a thing in Stockholm. Living in a slightly smaller city, no way, José.
I'm currently applying (don't know how that will turn out), but the manager even said that Swedish developers are cheaper for the company than people from Germany or Switzerland, and I think even Poland was paying bigger wages.
That’s how I justify it here at least. I mean, I make 500.000kr less than I did in the US, but 900k also puts me in the top 5% of all earners in Norway. So I have a very good quality of life.
The middle class is so compressed here that it’s a super cozy income, and I will never have to worry about anything financially or healthcare related for the rest of my life.
I just have moments of «I want more», which is when I get anxiety and look elsewhere. But honestly not sure I need it in any real way.
10-20 years of experience in software engineering means around €30k net in Sweden, even in Stockholm. Taxes and social security fees are the highest you will find, which means the cost to the employer might be around €70k-90k.
I’m not sure about Norway but Sweden/Stockholm is in the same class as Berlin, Amsterdam, etc. I feel like Norway would be higher but I’m not certain.
Premium areas are London and Paris. But even these salaries are closer to a middle rate USA city.
Super-premium (on par with more expensive USA salaries like SF or NYC) is Switzerland. Very expensive to hire there and easily the highest salaries in Europe in my experience.
This is just based on CoL ratings from a single source.
> Yet people blame Twitter when someone posts a tweet with hate speech. This is a dangerous direction for the world to be moving in.
Not to diminish the issue, but this is mostly an American problem as far as I can see. I realise these are also American companies, but claiming the entire world is in danger is a bit disingenuous, IMO.
MANY European countries (speaking from a Scandinavian perspective) don’t suffer from this, and while there’s a danger of American policies trickling down, that has been severely diminished in the past decade as there’s a movement of all of us (that I’ve seen) sort of re-evaluating our admiration of the US that was built in the 90’s - 00’s.
From the outside this isn’t a direction the world seems to be moving in. Just more crazy US spiralling.
In Germany you can get your house searched over a tweet. Not for terrorism or any egregious crime: one viral example was because a politician had been called a penis. I believe the UK has similar issues. This is far worse than the situation in the US.
Not the smartest choice to make yourself identifiable, but such legislative blunders still need to be corrected.
I don't believe a house search is some trivial policing. I think the state failed again to protect reasonable rights. And yes, the hate speech legislation of Germany should be adapted to the 21st century. This won't happen politically, because society currently loves pointing fingers at small missteps. A wrong joke and you get a shit storm.
It's the usual suspects: "you hate women" or "you're a racist" are the most common accusations. People are really starting to forget what these labels actually mean. And I believe they get far too much political support, and that this isn't a healthy development.
Eh I mean, there's been a rise of extremism all across Europe. It's not quite accurate to say it's an exclusively American thing. Hate speech is rising and there's a case to be made that it's because extremists find each other on the internet (Doctorow calls it a "jihadi recruitment tool" in the article).
...what if the rise in so-called "extremism" is a direct result of government policy leading to poorer outcomes for the average person? If you study history, people tend to become tribal not long after life gets hard. If I were a government think tank, I'd surely blame the internet long before I blamed my rent-seeking laws that disproportionately affect the average worker.
Unemployment is artificial! That's the mistake everyone makes here. Unemployment in the US over the last decade has been a function of ZIRP and nearly limitless Fed money. It's not real. This is the same econometric snake oil they sell the country when they talk about the inflation rate; no one mentions it's a rate. For example, if we go through the next year with 8-10% real inflation, they'll still report an inflation rate somewhere between 0 and 2% afterwards. Wow, stunning! Despite evaporating wealth they've somehow made everything look great. This doesn't even consider that the unemployment number is doctored: it only includes people actively looking for work. If you got tired of looking and took 3 months off (as is custom among developers), you are no longer "unemployed" according to the Fed's definition.
So why doesn't that matter? Because the "employed" are not gainfully employed. They work longer hours for lower wages. Longer hours because the competition pool is larger, and lower wages for the same reason, PLUS inflation.
The average joe may not understand this. But he certainly understands that he can't take vacations, he can't get sick, his electric, gas, and food bills have all doubled or even tripled (in some regions). Despite working, objectively, harder than ever he seems to only get further behind. This makes Joe angry. When Joe can no longer blame the government either due to perceived incompetence or manipulation by think tanks, he is soon to blame his neighbor.
This is the secret of the majority of "extremism" that makes the news. They will certainly sell it as racism, or sexism, or fascism though.
Having a job that doesn't pay enough to afford to pay rent & utilities, buy food, pay off college loans. I could go on. Just saying unemployment is low doesn't capture the nature of the working poor. Barbara Ehrenreich's book Nickel and Dimed covers this well. People who work in software and make six figures tend not to be squeezed to afford the basics, except perhaps if they have to live in one of the highest COL cities in the US, but there are millions in the developed world that hold 2-3 jobs and still barely afford living.
Fuel prices are at an all-time high, and fuel companies are posting record profits. Rent is at an all-time high. Mortgages are getting insane because we have double-digit inflation and insanely high interest rates. Savings are worthless because of double-digit inflation, and those high interest rates aren't actually being passed on except where it benefits the banks. Food prices are at an all-time high, and farms are going to the wall because they're being paid scrap prices for everything they sell.
Unemployment is at an all-time low? That's great, but people are working their backsides off and cannot afford to eat or heat their homes.
In the UK we have had twelve years of the right-wing extremist Conservative government and its "economic austerity" to "right the ship". What this has meant in practical terms is that wages have not risen in twelve years, taxes have gone up and up and up, and public spending has gone down and down and down. We had the woefully inept Kwasi Kwarteng, who blew £60 billion off the UK's economy by raising taxes on the poorest and cutting them for the richest, collapsing most people's private pension pots. We went from roughly 40,000 people in the UK using food banks in 2010, when the Tories took power, to 2.5 million people using food banks in 2022. And these are not just the "poor people" the tabloid trash papers sneer at ("well, somehow they can afford mobile phones and TVs, why can't they afford food"); there are people on £30-£40k per year who simply cannot afford to feed their families, because they have to choose whether to keep the lights on, buy food, or pay their mortgage.
The political right have over the past 20 years done incalculable damage to the world.
I can't speak intelligently about the UK centre-right, but here in Aus our Liberal party (as in classical liberal, centre-right) is almost indistinguishable from our Labor party (centre-left). Big spending, big government, and so we are seeing many of the same problems you mentioned.
I wonder why only the "populists" speak out about these problems, and then why most populists movements are led by extremists. It's not like fascists or communists ever had or have real economical solutions for healthcare or housing or environment, yet they constantly gain votes claiming exactly that (without detailing, of course). Why are all the big traditional parties only paying lip service to the real problems of the little man, even while pretending they represent them? And no I'm not sarcastic, this is a real question which bothers me extremely.
Parties don't really represent interest of their voters. They fish for votes to obtain political power, but they represent the interests of the people who run them (and their friends and patrons) - who are, generally speaking, not their voters.
Thus, traditional parties represent the interests of some subset of the established elites, which means the current socioeconomic arrangement is broadly to their advantage. They know that those problems are real, but they can only be solved by giving up some part of the pie. And organizations are much more selfish than individuals, so they never give anything up unless they believe the alternative is losing even more (hence why a messy revolution somewhere else can often do wonders).
Definitely. Might have been unclear but I was trying to speak specifically about the “blame the platform not the people” mentality I was responding to.
Fully in agreement extremism is on the rise and the internet aids this.
A counterpoint: if these platforms are run by American companies, then their efforts to combat hate speech will be disproportionately focused on English-language and American hate speech. This can allow it to flourish elsewhere.