This is exactly what the previous poster was talking about: these definitions are circular and hand-wavy.
AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.
AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Intelligence is the gathering and application of knowledge and skills.
Computers have been gathering and applying information since their inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".
> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.
Why not? Conceptually there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we can simulate enough neurons to make a simulation of a whole brain. We either don't have that total computational power yet, or the organization/structure to implement it. But brains aren't magic that's incapable of being reproduced.
So it's a coinflip whether it's giving you correct information or something completely made up? And now you're not digging through the obscure manuals to actually verify? Seems objectively harmful.
We're working on Interval to make doing exactly this easier. We focus on a code-first, well-typed approach and abstract away the UIs for these tools, so you can write the business logic using your own existing functions and tools and then get back to working on your actual app.
This is an extremely bad-faith reduction. It is a series of programming puzzles, not invent-the-universe problems or paraphrase-the-problem-to-an-AI problems.
Yeah, but I am fairly sure it is an AI-generated joke, and I choose to only laugh at human-created humour. I use an AI to check for me whether the joke might be AI-funny and not real-funny. Then, only if it is real-funny, might I humour it with a chuckle. Future comedians will thank me for my contribution to protecting genuine laughs worldwide.
I do something similar, though I use the "defaults" as the starting point and add overrides in local files which aren't checked in. Something like this:
# .dotfiles/.bashrc
# shared baseline config...
if [ -f ~/.bashrc.local ]; then
  . ~/.bashrc.local
fi
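The layering in that snippet can be demoed end-to-end. Here's a self-contained sketch using throwaway temp files (the file names and the GREETING variable are made up for the demo): the shared file sets a default, then sources the optional local file last, so local values win.

```shell
# Build a fake "shared" rc and a fake "local" override in a temp dir.
dir=$(mktemp -d)
cat > "$dir/bashrc" <<'EOF'
GREETING=default
# Same pattern as above: source local overrides only if they exist.
if [ -f "$dir/bashrc.local" ]; then
  . "$dir/bashrc.local"
fi
EOF
echo 'GREETING=local' > "$dir/bashrc.local"

# Source the shared file; the local file is sourced last, so it wins.
. "$dir/bashrc"
echo "$GREETING"   # prints "local"
```

Because the local file is sourced at the end of the shared one, anything it sets (aliases, exports, PATH entries) shadows the checked-in defaults without ever producing a dirty git status.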
HN is the last bastion of free speech and intelligent conversation. Sorry to hear that you dislike people who want to defend that right adamantly, but I and most other HNers who embrace the free-speech, conservative model would politely, but firmly, show you the door.
> myself and most other HNers who embrace the free speech, conservative model would politely, but firmly, show you the door.
No one is entitled to 100% free speech protection on a moderated platform they join by choice. If you're concerned about saying what you want without the risk of being banned, consider hosting your own forum where you have all the control. Only then will you be free.
This honestly doesn't seem like a big deal to me. Maybe I'm naive, but email addresses aren't really private these days, it was limited to only fellow members of your workspace, and they were up front about the seemingly honest mistake. I've personally made worse mistakes.
In 2022, if you enter your email into any service, you should basically expect it to be public information at some point. It's just a matter of time until something happens and it gets leaked.
Very neat! Unfortunately can't get the `curl` example to work no matter what I do (on Arch Linux).
$ pledge.com -p 'stdio rpath wpath cpath dpath flock tty recvfd fattr inet unix dns proc thread id exec' curl http://justine.lol/hello.txt
curl: (6) getaddrinfo() thread failed to start
I tried following the Troubleshooting section and looking through strace output, but unfortunately I'm not sure what I'm looking for. I see a few EPERMs for calls whose purpose I don't know: rseq, set_robust_list, and sysinfo, to name a few.
It works fine for me if Curl is built with Musl Libc. I can see you're using a very cutting-edge glibc. I tried reproducing this on Debian 10. The only calls that got EPERM'd were set_robust_list() and prlimit64(). I recompiled pledge.com adding those, and Curl is still failing for reasons that aren't obvious. I've encountered issues like this in the past. Personally, I've always solved them by keeping a healthy distance from Glibc, using things like Musl and Cosmo. However, I want to see Glibc users benefiting from our work too! So I'd welcome contributions from anyone who can help us do that.
That could mean that the clone3 system call fails with EPERM instead of ENOSYS. Suppressing system call implementations with ENOSYS is generally safer because it just looks like an older kernel, while EPERM is a regular error code for some system calls.
Put differently, ENOSYS tells userspace that the system call isn't implemented and it needs to use some fallback code. EPERM means that the operation was denied by policy. But in that case, it might not be a good idea to approximate the operation with different system calls, because that might circumvent the intended security policy.