Hacker News new | past | comments | ask | show | jobs | submit | jacobmischka's comments login

This is exactly what the previous poster was talking about: these definitions are so circular and hand-wavy.

AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean; people just hear them thrown around so much that they forget what the actual definitions are.

AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.


Intelligence is the gathering and application of knowledge and skills.

Computers have been gathering and applying information since their inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words, then I hold to my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent".

> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.

Why not? Conceptually, there's no physical reason this isn't possible. Computers can simulate neurons. With enough computers, we could simulate enough neurons to model a whole brain. We either lack the total computational power or the organization/structure to implement that today. But brains aren't magic; there's nothing about them that can't, in principle, be reproduced.


So it's a coin flip whether it's giving you correct information or something completely made up? And now you're not digging through the obscure manuals to actually verify it? Seems objectively harmful.


Actually, that is extremely beneficial.

For example, I now use GPT-3 for every single code debugging problem that I have.

I input the bug/problematic code into the service, ask it for an answer, and then I take the code and see if it fixes the problem I was having.

It doesn't work half the time, but that's fine. Then I just figure out the bug the normal way.

But the other half of the time, it immediately solves my problem, and then I run my code and it works/does what I want.


I sincerely hope you work on some random SaaS and not any software that actually matters, because this is how you get subtle dangerous bugs.


We're working on Interval to make doing exactly this easier. We focus on code first and well-typedness, and we abstract away the UIs for these tools so you can write the business logic using your own existing functions and then get back to working on your actual app.

[1]: https://interval.com


This is an extremely bad-faith reduction. It is a series of programming puzzles, not invent-the-universe problems or paraphrase-problems-to-AI problems.


It's a joke


Yeah, but I am fairly sure it is an AI-generated joke, and I choose to only laugh at human-created humour. I use an AI to check for me whether the joke might be AI-funny and not real-funny. Then only if it is real-funny might I humour it with a chuckle. Future comedians will thank me for my contribution to protecting genuine laughs worldwide.


Programming is telling the computer to do something; using AI to generate code is just the next step up from a compiler.


It's also been a goal of CS for some time, with some established terminology, e.g. https://en.wikipedia.org/wiki/Program_synthesis and https://en.wikipedia.org/wiki/Natural-language_programming


I do something similar, though I use the "defaults" as the starting point and add overrides in local files which aren't checked in. Something like this:

    # .dotfiles/.bashrc
    # shared baseline config...
    if [ -f ~/.bashrc.local ]; then
        . ~/.bashrc.local
    fi
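To make the override behavior concrete, here's a self-contained sketch of the same layered pattern. A temp file stands in for the hypothetical `~/.bashrc.local` so it can run anywhere; the baseline sets a default, and the local file, if present, wins:

```shell
# Demo of the layered-config idea: a checked-in baseline default,
# overridden by an optional local file that isn't checked in.
# A temp file stands in for ~/.bashrc.local here.
local_rc=$(mktemp)
echo 'EDITOR=emacs' > "$local_rc"

EDITOR=vim                   # shared baseline (.dotfiles/.bashrc)
if [ -f "$local_rc" ]; then
    . "$local_rc"            # local override wins
fi

echo "$EDITOR"               # prints "emacs"
rm -f "$local_rc"
```

Because the local file is sourced last, any variable or alias it sets shadows the shared baseline without ever touching the checked-in file.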


This is a bizarre and blatant attempt to yell over detractors with ancient PR. Why are people upvoting this?


Oh, I guess because of the recent comma.ai story. Seems a bit less blatant now that I realize that.


That exists!


Which password manager, out of curiosity? I assume the issue with the browser plugins is tricking the password manager into auto-submitting passwords.


Seems like they're not the most immature one in this situation.


What do you expect from a user who claims to be "fighting for my life in the comments almost daily" on HN? el-oh-el.


HN is the last bastion of free speech and intelligent conversation. Sorry to hear if you dislike people who want to defend that right adamantly, but myself and most other HNers who embrace the free speech, conservative model would politely, but firmly, show you the door.


> myself and most other HNers who embrace the free speech, conservative model would politely, but firmly, show you the door.

No one is entitled to 100% free speech protection on a moderated platform one joins by choice. If you're concerned about saying what you want without the risk of being banned, consider hosting your own forum where you have all the control. Only then will you be free.


This honestly doesn't seem like a big deal to me. Maybe I'm naive, but email addresses aren't really private these days, it was limited to only fellow members of your workspace, and they were up front about the seemingly honest mistake. I've personally made worse mistakes.


If it was a workspace for your company, then no, it's not a big deal, but there are plenty of Slack communities around topics where some people are anonymous.


In 2022 if you enter your email into any service you should basically expect it to be public information at some point. It’s just a matter of time until something happens and it gets leaked.


The big issue with a breach like this isn't the email itself, but the association of the email with the service.

For example, the Ashley Madison breach.


Very neat! Unfortunately, I can't get the `curl` example to work no matter what I do (on Arch Linux).

    $ pledge.com -p 'stdio rpath wpath cpath dpath flock tty recvfd fattr inet unix dns proc thread id exec' curl http://justine.lol/hello.txt
    curl: (6) getaddrinfo() thread failed to start
I tried following the Troubleshooting section and looking through the strace output, but unfortunately I'm not sure what I'm looking for. I see a few EPERMs for calls I don't recognize: rseq, set_robust_list, and sysinfo, to name a few.
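For reference, this is roughly how I'm filtering the trace for denied calls. The log lines below are a made-up excerpt standing in for the real output of `strace -f -o trace.log` run against the pledge.com command:

```shell
# Hypothetical strace excerpt; the real log comes from something like:
#   strace -f -o /tmp/trace.log pledge.com -p '...' curl http://justine.lol/hello.txt
cat > /tmp/trace.log <<'EOF'
rseq(0x7f7b2e0, 0x20, 0, 0x53053053) = -1 EPERM (Operation not permitted)
set_robust_list(0x7f7b2e8, 24) = -1 EPERM (Operation not permitted)
openat(AT_FDCWD, "/etc/hosts", O_RDONLY|O_CLOEXEC) = 3
EOF
grep -c 'EPERM' /tmp/trace.log   # prints 2
```

Counting and listing the EPERM'd calls this way at least narrows down which syscalls the sandbox is blocking, even if it doesn't explain why curl needs them.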


It works fine for me if Curl is built with Musl Libc. I can see you're using a very cutting-edge glibc. I tried reproducing this on Debian 10; the only calls that got EPERM'd were set_robust_list() and prlimit64(). I recompiled pledge.com after adding those, and Curl still fails for reasons that aren't obvious. I've encountered issues like this in the past. Personally, I've always solved them by keeping a healthy distance from Glibc, using things like Musl and Cosmo. However, I want to see Glibc users benefit from our work too! So I'd welcome contributions from anyone who can help us do that.


That could mean the clone3 system call fails with EPERM instead of ENOSYS. Suppressing system call implementations with ENOSYS is generally safer because it just looks like an older kernel, while EPERM is a regular error code for some system calls.

Put differently, ENOSYS tells userspace that the system call isn't implemented and it needs to use some fallback code. EPERM means the operation was denied by policy. In that case, it might not be a good idea to approximate the call with different system calls, because doing so might circumvent the intended security policy.

