I imagined that the space stations have the equipment to take all the measurements and record the evidence on-site, so you aren't having to return samples to Earth except as a neat souvenir.
Climate-obsessed people are probably in it for the right reasons, but from any third-party perspective they are hysterically annoying. Keep telling people that cow burps are what's driving bad weather, and you can keep being baffled that "deniers" hate whatever solutions you come up with thereafter. People hate pollution across the board; what people hate are theoretical models of climate effects, based on research divorced from reality, that predict the weather 30 years out will be a doomsday scenario because of something so obviously mundane, while ignoring the outsourcing of manufacturing to China and India and all the bus, boat and air freight that comes with it. At this rate we may, one day in our lifetime, have all the pigs and cattle raised in China and flown business class to Europe in order to fight the weather via carbon credits.
I've been experimenting with Tide, sqlx and askama, and after getting comfortable it's even more ergonomic for me than golang and its template/SQL libraries. Having compile-time checks on SQL and templates is, in and of itself, a reason to migrate. I think people have a lot of issues with lifetime scoping, but for most applications it simply isn't something you are explicitly dealing with every day in the way Rust is often portrayed/feared (and once you fully wrap your head around what it's doing, it's as simple as most other language features).
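For anyone curious what those compile-time checks look like, here's a rough sketch (the users table, its columns and the users.html template are made up for illustration): sqlx's query_as! macro validates the SQL against your schema at build time, and askama's derive parses the template at build time, so a typo in either is a compile error rather than a runtime 500.

    use askama::Template;
    use sqlx::PgPool;

    struct User {
        id: i64,
        name: String,
    }

    // askama parses templates/users.html at build time; a misspelled field
    // or malformed tag fails the build instead of the request.
    #[derive(Template)]
    #[template(path = "users.html")]
    struct UsersPage {
        users: Vec<User>,
    }

    async fn users_page(pool: &PgPool) -> Result<String, Box<dyn std::error::Error>> {
        // sqlx checks this query against the schema (via DATABASE_URL or
        // offline metadata) at compile time; a misspelled column is a build error.
        let users = sqlx::query_as!(User, "SELECT id, name FROM users ORDER BY name")
            .fetch_all(pool)
            .await?;
        Ok(UsersPage { users }.render()?)
    }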
I've used Chromecast to power all my "dumb" TVs for years, and being able to cast from my laptop or any phone that's on the wifi has been amazing for avoiding a clunky Roku or Fire TV interface. Sad to see one of the most personally useful pieces of Google tech ending.
Many moons ago I wanted to do something similar for AI datasets and models over IPFS. I don't know what the future holds for IPFS, but I do hope the essence of it, a p2p data-sharing infrastructure, becomes more accessible, so that individuals with less hardware on hand can tackle some of the issues that come with large datasets.
The point is that the article states "averaged across 4 diverse customer tasks, fine-tunes based on our new model are slightly stronger than GPT-4, as measured by GPT-4 itself" and then backs it up with nothing tangible, just the 4 selected metrics where it performs best. Obviously a fine-tuned 7B LLM could perform, let's say, text summarization well. The question is what happens when that text contains code, or domain-specific knowledge where some facts are less relevant than others, etc., and that isn't going to be answered by any metric alone. Fundamentally, with enough diverse metrics, each based on a different dataset, the network will score really well on whichever one overlaps most with its fine-tuning data, and on the rest, well, not so well.
Basically, the statistic means that there's a set of data for which that particular (fine-tuned) network performs slightly better than GPT-4, and everywhere else it performs pretty badly. It just isn't generalizable to everything the way GPT-4 is. It's about as good as saying "calculators outperform GPT-4 at counting". Yes, they probably do, but I would like to see whether it is applicable and practical, or whether you just trained an LLM to write out all the names in Polish alphabetically really well. And that's why a qualitative approach to evaluating LLMs is just better.
> Also I'm curious what would be the legal impacts on it (since USA and EU refers to a model's FLOPs/number of parameters... How do you compute it with sparsity? Do you average?).
How/when did these types of regulations come about? This feels like an insane thing to have to keep in mind while developing.
The EU messed up with the GDPR: they should have implemented it at least a decade earlier, and ignored the lobbying that led to the cookie banner in favor of an outright ban on tracking for all but a tiny number of purposes. Such a ban would have had a negligible financial impact on the tech industry but would have had huge privacy rewards.
They're trying to get in early on AI so as not to make the same mistake again. Which might result in them making the opposite mistake.
Making the world a worse place? If you look carefully you’ll realize most of the harms and negative effects of technology are due to it being primarily funded by advertising and trying to maximize ad revenue.
I see this non-argument on HN again and again. Yes, getting robbed but not killed is a better outcome than getting killed, but that doesn't make robbing good by any measure.
But what if you make the punishment for robbery harsher than for murder? Maybe people start killing you after robbing you to get the lesser sentence. It happens in some parts of the world: if someone accidentally hits you with their car, they'll run over you again to finish the job, because if you survive and sue or go after them it'll be real bad for them.
Point is we have to be careful about how we regulate things or we can shift things in an even worse direction.
Mobile games are only harmful to a relatively tiny group of addicted gamers, while internet ads have very serious consequences for society as a whole.
I don't think mobile gaming companies have the potential to destroy the free press, or to negatively affect the mental health of a wide population of teenagers, or to invade the privacy of billions of people. They simply don't have the scale for any of that.
Ads are harmful, no doubt, but I do not think they are more harmful than the normalization of gambling in our society.
'I watched an ad, and then [my entire life was destroyed]' is quite hard to imagine, unless it's an ad for an MLM, crypto, entrepreneurship scam, or gambling.
On the other hand, I absolutely know people who started out in soft gambling who then proceeded to throw their life (and sometimes families) away trying to catch the next high with higher and higher stakes gambling until they lost everything, and then some.
We also don't really know the impact gambling is going to have in the near future. Loot boxes, online gambling, internet celebrity gambling, etc. really only became popular around ~2010 or later, and the kids who have been growing up with low-risk gambling as a daily accessible thing on their iPads have not come into adulthood yet.
> Mobile games are only harmful to a relatively tiny group of addicted gamers, while internet ads have very serious consequences acting on society as a whole
It is still unethical to even play "free"-to-play games. You are entertained at the expense of a small group of addicts who are often spending more money than they can afford, and, at least in many games, just being logged in helps create a livelier environment that lures those people in. If you are not there to be a whale, you are there to be a lure for them. Playing might not be harmful to you, but you are being harmful to the addicts.
All those mobile games typically require advertising in the first place to reach their customers/victims. We should definitely ban a lot of the dark patterns, which would coincidentally improve AAA games that use similar patterns (e.g. padding gameplay duration with grinding mechanics).
And the largest benefit of modern technology comes from the fact that so much of it is "free" (ad-supported). Without ads, there would simply be no such benefit at all.
Wikipedia, Stack Overflow, forums like Reddit, chat, and the like are the biggest benefits of the internet, and they are very cheap to run; you could fund them with donations. Reddit is more expensive than it has to be because it is trying to pivot toward more ads and media, but a text forum is very cheap.
The biggest benefits from ad-supported tech are search and video; the rest would be better off without ads. Reddit would be a better place if it didn't try to get ad revenue, etc.; in those cases chasing revenue makes the user experience worse instead of better.
I don't have the study at hand, but this has been shown to be false: the impact was negligible (percentage points), because the fundamentals are extremely good for the big platforms. Take FB and Google: they already have extremely strong (and legitimate) profiles of users without following you around the web.
> How/when did these types of regulations come about?
I can't say much about the US. As I see it, the EU pretty much copied the US on that part. There was nothing related to computation in the drafts of the EU's AI Act until a few months ago; it was purely a matter of "what kind of data processing are you allowed to do?"
"Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:
(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations[...]"
Probably I did this wrong, but I'm getting an approximation that 300K H100s complete that in about a month. At least they chose something fairly large, it seems. Not sure how LoRA or other incremental training is handled.
Depends on which spec you used, since the law doesn't specify the floating-point width. If you used FP8 ops on the H100 SXM, then a single GPU would hit the limit in about 25,265,285,498 seconds; 300,000 GPUs would pass 10^26 FP8 ops in about 23 hours.
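For anyone who wants to check the arithmetic, here's the back-of-the-envelope as code. The 3958 TFLOPS figure is an assumption (the H100 SXM FP8 Tensor Core peak with sparsity, which is roughly what the seconds figure above implies); lower-throughput specs stretch the time proportionally.

    fn main() {
        let threshold_ops: f64 = 1e26;          // reporting threshold from the order
        let fp8_ops_per_sec: f64 = 3958e12;     // assumed: H100 SXM FP8 peak w/ sparsity
        let gpus: f64 = 300_000.0;

        let single_gpu_secs = threshold_ops / fp8_ops_per_sec;
        let cluster_secs = single_gpu_secs / gpus;

        println!("one GPU:   {:.0} s (~{:.0} years)", single_gpu_secs, single_gpu_secs / 31_557_600.0);
        println!("300k GPUs: {:.0} s (~{:.1} hours)", cluster_secs, cluster_secs / 3_600.0);
    }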
Are they trying to bring back SIMD-within-a-register? Though that only gives you ~one order of magnitude doing packed 4-bit stuff with 64-bit GPRs. And perhaps fixed-point, sign-exponent and posits are unregulated.
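For anyone who hasn't seen the trick, here's a toy sketch of SWAR on 4-bit lanes (not tied to any real kernel): pack sixteen nibbles into a u64, mask off each lane's top bit so carries can't spill into the neighbouring lane, add, then patch the top bits back in with XOR. Sixteen lanes per 64-bit register is roughly the order of magnitude mentioned above.

    /// Lane-wise add of sixteen packed 4-bit values in one u64, wrapping per lane.
    fn add_nibbles(a: u64, b: u64) -> u64 {
        const HI: u64 = 0x8888_8888_8888_8888; // top bit of every nibble
        // Masking the top bits keeps each lane's partial sum <= 14, so the carry
        // out of bits 0..2 lands in bit 3 and never crosses into the next lane.
        let low = (a & !HI) + (b & !HI);
        // Top bit of each lane = a's top bit ^ b's top bit ^ carry from the low bits.
        low ^ ((a ^ b) & HI)
    }

    fn main() {
        // Lanes from the top: 8+1=9, 7+2=9, 1+8=9, 0xF+1 wraps to 0.
        assert_eq!(add_nibbles(0x871F, 0x1281), 0x9990);
        println!("ok");
    }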