bretpiatt's comments | Hacker News



appreciate it, also wow his min/km is very good


Art can't wait


running 1hr/day will do that to you


Another use for barbed wire: we ran an X.25 network across it on a very large ranch, with nodes 1 mile apart.

With network monitoring we could then detect breaks in the fence down to a 1-mile segment and let the ranch hands know where to go and when it broke.
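
Conceptually the localization is simple; here's a rough sketch in Python (reachable() stands in for whatever poll the real X.25 monitor used, so treat the names as placeholders):

    # Nodes are listed in order along the fence, one per mile.
    # reachable() is a placeholder health check, e.g. an X.25 call
    # request that gets accepted by the node.
    def locate_break(nodes, reachable):
        last_ok = None
        for i, node in enumerate(nodes):
            if reachable(node):
                last_ok = i          # farthest node that still answers
            else:
                return (last_ok, i)  # break is in the mile between these two
        return None                  # every node answered; fence is intact

    # Example: ten hypothetical nodes named by mile marker.
    # locate_break([f"node-{m}" for m in range(10)], reachable=poll_node)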


Did you try to use GPT 3.5? In our testing it isn't great. Using GPT 4, or some of the specially trained versions of GPT 4 (there's one with good reviews called Lisp Engineer), our experience has been different.

It is not replacing engineers, and it isn't a tool where you give it a broad set of requirements and it just goes and builds. It is helping increase productivity and getting folks through areas where they need to bounce ideas off of someone.

We're coding mostly in Python, C++, and .NET Core where I do expect it'll have a much deeper set of training data than it will for Lisp (and even for those languages we're getting marginally better performance from specialized engines than we are from general GPT 4).

The other, non-OpenAI coding AIs have so far all performed worse for us than GPT 4. We've tested against LeetCode challenges and a bunch of other things.


If only these LLMs were decent at C#. Unfortunately, they lean heavily on very old data, calling obsolete APIs in a style that generally runs against what is considered idiomatic and terse.

For example, I once asked Claude 3 to port some simple XML parsing code from Go (because 10s to ask is faster than 60s to type by hand haha) and it produced this https://pastebin.com/3823LBiA while the correct answer is just this https://gist.github.com/neon-sunset/6ba67f23e58afdb80f6be868...

Functionally identical, but such cruft accumulates with every single piece of functionality you ask it to implement. And this example is one where the output was at least coherent and did its job; many others are worse.


> Did you try to use GPT 3.5?

Yes.

And succeeded! :)

> We're coding mostly in Python, C++, and .NET Core where I do expect it'll have a much deeper set of training data than it will for Lisp

I can't imagine that the malformed bracketing is due to an insufficient training set.

Nor, it seems, can ChatGPT: "I've encountered numerous examples of Lisp code during my training, covering different applications and techniques within the language. Whether it's simple examples or more complex implementations, I've seen quite a bit of Lisp code."


You tried the very first iteration of an LLM-based chat assistant, were unsatisfied with it because it couldn't match Lisp parentheses, and went on to form an opinion about the value of these tools and implicitly the intelligence of the people who use them. That speaks more to your preconceptions than it does to the state of better tools like Copilot or GPT-4.

You didn't label it (which, btw, is a faux pas), but it's obvious from your replies that this wasn't an Ask HN, it was a Tell HN. You have absolutely no interest in what the rest of us have to say.

Nevertheless, I'll try once more for luck: Basing your opinions about LLMs on your experience with GPT-3.5 is a mistake. If you don't want to use LLMs at all because you have preconceptions, that's fine, but don't pretend that you've sampled LLMs and found them lacking for professional coding when you haven't tried the professional tools.


> You tried the very first iteration of an LLM-based chat assistant

Er, V3.5 is "the very first iteration"?

> don't pretend that you've sampled LLMs and found them lacking for professional coding when you haven't tried the professional tools.

I think you misread my post. I didn't mention professional.

And my post wasn't about a "sample of LLMs". It was about this one in particular.


> Er, V3.5 is "the very first iteration"?

Yes. ChatGPT-3.5 was the very first LLM-based chat assistant; it was announced on Nov 30, 2022 [0]. It hasn't gotten better since then, just more censored and faster.

It followed GPT-1 (which was only interesting to people who were already in the know), GPT-2 (which was neat but widely recognized as pretty useless and, again, not something normal people noticed) and GPT-3 (which was cool but didn't provide a chat interface; it could only complete text, so it made a decent base for the early versions of Copilot).

[0] "ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022." https://openai.com/index/chatgpt/


damn, I wish I could give more than one upvote


At 260 million passengers per year, that's $6 per ticket on a 20-year payback. By my understanding this is affordable, as JFK is around $15 per passenger in fees.
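
Working backward from those numbers, the implied project cost to recover is:

    $6/passenger x 260M passengers/year x 20 years ≈ $31B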


That also assumes that people will want to travel the same amounts in the future or that other hubs won’t also be expanding and competing. In my opinion the main problem with Dubai is there isn’t a reason to actually visit it or spend time there for most people. It’s hot, in a dusty desert, and the main attractions are luxuries like shopping. But maybe that’s why this proposal seems more based on business (shopping and logistics).


Yes, Dubai is a satire, not a city. Most passengers don't stop there; they transit through. The reason they choose such a route is that Dubai is a hub offering thousands of routes.

My alternative is 1 extra leg to avoid Dubai or Qatar. Occasionally that works for me, since I do have extra days and funds to check out cities. Most people don't.


Yes, when they run out of oil money at some point, I doubt there will be much airport traffic or money to keep an airport of this size running.


Dubai never had much oil


Rough math: $1,000 for the raw stone, $5,000 of travel and time to source... an $80k sale price with 250 hours of cutting work is $300/hour for a master gemcutter, which seems very fair.
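
Spelled out:

    ($80,000 - $1,000 - $5,000) / 250 hours ≈ $296/hour, call it $300/hour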


Where did you get 250 hours from? I don't think it took him anywhere near that long based on the times mentioned in the video.

If it really did, people would use CNC for this.


Yeah from the video I believe it was a few days of work.


You think this guy acquired a 740 carat raw stone for $1000?


Yes, totally, for someone sourcing directly. Here's an online site where you can see for yourself:

https://www.gemrockauctions.com/search?query=morganite&weigh...

1505 ct for $3999 asking https://www.gemrockauctions.com/products/1505cts-morganite-c...


Here are human drivers navigating in a similar way at a large intersection in Addis Ababa (Ethiopia); I envision white light basically enabling this: https://youtu.be/VPbUpdmAfck


Why would you want to "enable" this?

What is actually being enabled, and is it more efficient than a regular green-yellow-red intersection? If you watch on 1x, this seems like a horribly slow/inefficient intersection.


the alternative is 5km-long traffic jams in all 4 directions, with only 2-3 cars getting through each green phase.


I'd love to see this in real-time rather than this sped up version. I suspect it's a little more reasonable to navigate when going slow enough.


Even at 0.25x playback speed it's still obviously way too fast in some cases.

And yes, it looks kinda normal when the people walking are walking, not running.


TLDR: This is a real thing, turbines have built-in protection, and they're innovating on the whole system to really do this. It's a potential game changer vs. trying to build seawalls or levees.

Read more: "Taming hurricanes with arrays of offshore wind turbines", https://www.nature.com/articles/nclimate2120

...and..., "Wind Turbines in Extreme Weather: Solutions for Hurricane Resiliency", https://www.energy.gov/eere/articles/wind-turbines-extreme-w...


I'm aware of the current built-in protections we deploy for land installations.

Those prototypes for hurricane-prone sea installations do look promising though, especially the downwind blades.

Looking forward to seeing successful test data for these blades; it's certainly not going to be easy to withstand these forces.



Disabling cookies will cause _more_ of the "cookie prompts" to appear, not fewer. Some pages these days will even prevent you from visiting them unless they can set a cookie...

Also, cookies are not the only method of tracking that is supposed to be disabled when you hit Deny.


It's probably not this far along; DALL-E / GPT-4 has some issues...

https://twitter.com/bpiatt/status/1740915306359329233


The claim that this looks like modern Mickey Mouse is wild to me.


Doing a side-by-side: MM doesn't have visible front teeth, his ears are smaller and fully black, and the silhouette is different. But the shape of the face and features, including where the shape of the black fur descends (at the widow's peak?), the eyebrows, and the eyes with white, all look quite similar. To me it's much closer to a modern Mickey than the original flat black and white.


This tweet seems like a fairly lazy combination of the recent ChatGPT copyright discussion with the Disney one.



Perspective as CEO of a backup and disaster recovery company...

A lot of folks now have ransomware-protected backups for critical data, so they aren't paying for decryption keys.

This has escalated to hack and release: the attackers are now exfiltrating data and threatening to make it public, in addition to encrypting it on the host system.

