
This is such a needed addition! Huge duckdb fan, congrats team!

I'm a bit out of the loop here, but what's the use case for DuckDB?

DuckDB is mind-blowingly awesome. It is like SQLite: a lightweight, embeddable, serverless, in-memory database, but it's optimized to be columnar (analytics optimized). It can work with files in the filesystem, S3, etc. without copying (it just looks at the necessary regions in the file) by just doing `select * from 's3://....something.parquet'`. It supports automatic compression and automatic indexing. It can read JSON lines, Parquet, CSV, its own db format, SQLite dbs, Excel, Google Sheets... It has a very convenient SQL dialect with many QoL improvements and extensions (and is PostgreSQL compatible). Best of all: it's incredibly fast. Sometimes it's so fast that I find myself puzzled "how can it possibly analyze 10M rows in 0.1 seconds?" and I find it difficult to replicate the performance in pure Rust. It is an extremely useful tool. In the last year, it has become one of my use-everyday tools because the scope of problems you can just throw DuckDB at is gigantic. If you have a whole bunch of structured data that you want to learn something about, chances are DuckDB is the ideal tool.
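
To give a flavor of it, here's a minimal sketch (the bucket, file names, and columns below are made up; a private bucket would also need credentials configured):

    -- query a remote Parquet file in place; httpfs handles s3:// and https://
    INSTALL httpfs; LOAD httpfs;
    SELECT station, avg(temp_c) AS avg_temp
    FROM 's3://my-bucket/weather.parquet'
    GROUP BY station;

    -- a local CSV works the same way, with no import step
    SELECT count(*) FROM 'events.csv';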

PS: Not associated with the DuckDB team at all, I just love DuckDB so much that I shill for them when I see them on HN.


I'm sorry, I must be exceptionally stupid (or I haven't seriously worked in this particular problem domain and thus lack awareness), but I still can't figure out the use cases from this feature list.

What sort of thing should I be working on, to think "oh, maybe I want this DuckDB thing here to do this for me?"

I guess I don't really get the "that you want to learn something about" bit.


If you’re using SQLite already, then it’s the same use case but better at analytics

If you’re using excel power query and XLOOKUPs, then it’s similar but dramatically faster and without the excel autocorrection nonsense

If you’re doing data processing that fits on your local machine eg 50MB, 10GB, 50GB CSVs kind of thing, then it should be your default.

If you’re using pandas/numpy, this is probably better/faster/easier

Basically if you’re doing one-time data mangling tasks with quick python scripts or excel or similar, you should probably be looking at SQLite/duckdb.

For bigger/repeatable jobs, just consider it a competitor to doing things with multiple CSV/JSON files.


I’m not the person you asked, but here are some random, assorted examples of “structured data you want to learn something about”:

- data you’ve pulled from an API, such as stock history or weather data,

- banking records you want to analyze for patterns, trends, unauthorized transactions, etc

- your personal fitness data, such as workouts, distance, pace, etc

- your personal sleep patterns (data retrieved from a sleep tracking device),

- data you’ve pulled from an enterprise database at work — could be financial data, transactions, inventory, transit times, or anything else stored there that you might need to pull and analyze.

Here’s a personal example: I recently downloaded a publicly available dataset that came in the form of a 30 MB csv file. But instead of using commas to separate fields, it used the pipe character (‘|’). I used DuckDB to quickly read the data from the file. I could have actually queried the file directly using DuckDB SQL, but in my case I saved it to a local DuckDB database and queried it from there.
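
In DuckDB SQL that's roughly the following (a sketch; the file, table, and database names are made up):

    -- read_csv lets you override the delimiter
    CREATE TABLE dataset AS
    SELECT * FROM read_csv('dataset.csv', delim='|', header=true);

    -- or persist it into a local database file instead
    ATTACH 'local.duckdb' AS db;
    CREATE TABLE db.dataset AS
    SELECT * FROM read_csv('dataset.csv', delim='|', header=true);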

Hope that helps.


My dumb guy heuristic for DuckDB vs SQLite is something like:

  - Am I doing data analysis?
  - Is it read-heavy, write-light, using complex queries over large datasets?
  - Is the dataset large (several GB to terabytes or more)?
  - Do I want to use parquet/csv/json data without transformation steps?
  - Do I need to distribute the workload across multiple cores?
If any of those are a yes, I might want DuckDB

  - Do I need to write data frequently?
  - Are ACID transactions important?
  - Do I need concurrent writers?
  - Are my data sets tiny?
  - Are my queries super simple?
If most of the first questions are no and some of these are yes, SQLite is the right call

Wow... sounds pretty good... you should be doing PR for them... I might give it a try, sounds like I should.

One way to think about it is as SQLite for columnar / analytical data.

It works great against local files, but my favorite DuckDB feature is that it can run queries against remote Parquet files, fetching just the ranges of bytes it needs to answer the query using HTTP range requests.

This means you can run eg a count(*) against a 15GB parquet file from your laptop and only fetch a few hundred KBs of data (if that).
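
With the httpfs extension loaded, that's literally a one-liner (hypothetical URL):

    -- count(*) can be answered from the Parquet footer/row-group metadata,
    -- so DuckDB only fetches small byte ranges of the file
    SELECT count(*) FROM 'https://example.com/big.parquet';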


Small intro: it's primarily a relational database for analytical data. It's an "in-process" database, meaning you can import certain files at runtime and query them. That's primarily how it differs from regular relational systems.

For the average developer, I think the killer feature is that it lets you query whatever data you want (CSV, JSON, Parquet, even gsheets) as equals, directly in file form - it can even join across them.
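
For example, something like this works (a sketch; the files and columns are hypothetical):

    -- join a CSV against a JSON file directly, in file form
    SELECT u.name, sum(o.total) AS spend
    FROM 'users.csv' u
    JOIN 'orders.json' o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY spend DESC;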

It has great CSV and JSON processing, so I find it's almost better thought of as an Excel-like tool than a database. Great for running quick analysis and exploratory work. Example: I need to do some reporting mock-ups on Jira data; DuckDB sucks it all in (or queries exports in place), makes it easy to clean, filter, pivot, etc., then export to CSV.

If you're developing in the data space you should consider your "small data" scenarios (ex: the vast majority of our clients have < 1GB of analytical data; Snowflake, etc. is overkill). Building a DW that exists entirely in a local browser session is possible now; that's a big deal.


Is it just me or is having the AI help you self-censor (as shown in the demo live stream: https://www.youtube.com/watch?v=cfRYp0nItZ8)... pretty dystopian?

As an American living in Europe, I find it fun to see the filter bubble Europeans live in with regard to how the US is framed in the media here and even in common conversation. I was talking to my French buddy about it and we were comparing our filter bubbles. Please realize that there is a filter bubble that all of us live in.


I consume a lot of American social media as a European; what about his comment was wrong?

Fox News, Twitter, Facebook etc have really done a number on the US (and the rest of the world to some extent). The lack of regulation of these companies has brought us to the Post-Truth world we're living in now.


Filter bubble or not; it's a disgrace. A fallen nation. A violent dictator. Lies, racism, sexism, violence. People vote for this!


I'm not a Trump fan, but the immigration situation was getting a bit out of hand. It was a bit: do you want the US to be the US you knew, or have the whole of South and Central America move in?


And Harris or the Democrats couldn't fix this? The US had to elect an actual criminal to fix the issue?


Let's vote for a dictator! That must solve it! Last time he was sooo successful.


As if Biden/Harris have set some high standards


Sure, The Dark Lord is evil, but did you notice how the Mayor of Michel Delving failed to do anything about that rabbit plague that decimated the Longbottom tobacco harvest two years ago?


Read my reasoning: https://news.ycombinator.com/item?id=42061182

I am not American but my country has suffered due to Biden policies


The Mayor of Michel Delving flat out refused to help the tobacco growers of Longbottom, letting the rabbits have their way. The Dark Lord will surely treat them better! Or at least kind of ignore them. Or, well, he does seem supportive of the rabid rabbit leader, but just as often as not he's just incoherent on the topic of the plight of the Longbottom tobacco growers, and I'll take that as a good sign!


Explain. Did he send his private army to overturn an election? Did he grab them by the pussy? I am glad I have some standards.


At least the Biden Administration is blamed for toppling our Pakistani govt.


You know, it’s interesting that you don’t seem to care what American Pakistanis think. Have the rest of us just been written off? We'll suffer the consequences too.


> Have the rest of us just been written off? We'll suffer the consequences too.

If the Pakistani diaspora sympathizes with a genocide and LGBT enabler, then no one is more shameless and vile than them; forget being written off, they deserve the stick.


Tell that to Adeel Mangi


Do not know who that guy is; just googled and found out he was Biden's buddy, hence irrelevant, and it does not change my opinion that they supported genocide enablers.


I refused to vote for Hillary in 2016 because of the drones, so please don’t lecture me about blowing people to shreds


Then why are you going around carrying water for Kamala and Biden now, huh??


When Trump kicks every Pakistani out of Coney Island and ships them back, then you'll howl


Yaar, stop the whining and crying, and have a little shame about supporting Biden


Kid, Muslims have already voted for Trump, especially Arabs, due to the genocide support by the Biden Administration.

The "written off" part would only apply if your kids had brought LGBT home. Be thankful you were spared.


Leaving policies aside, on purely moral standards, it seems hard to "both sides" this.

The people who attempt this are arguing in incredibly bad faith, e.g. equating Hillary's "[Trump] knows he’s an illegitimate president" (which called out shady voter-suppression tactics and the fact that Trump did not win the popular vote) with Trump's staunch denial of the 2020 election result to this day and the organisation of a (failed) plot to remain in power.


These situations aren't really comparable. Clinton questioned procedural issues and the popular vote outcome, while Trump's actions following the 2020 election (including his continued denial of results and attempts to overturn them) represent an unprecedented challenge to American democratic institutions. It's deeply concerning how many people continue to accept claims that have been thoroughly debunked by election officials, courts, and independent observers. This erosion of trust in democratic processes and willingness to embrace demonstrably false narratives suggests a troubling shift in American political culture. It's a fallen nation.


Dear Salty Dems, downvoting me won't help hide Harris' incompetence at getting votes


Can you give a brief comparison of the different views? I’m European and watch/read a lot of US news sources from both sides and still think “European” about Trump.


You’re missing the political viewpoint of 45% of the country if you’re not main-lining right-wing radio and Fox News. No, stop—don’t google what they just said, imagine you just believe it and see where that takes you.

[edit] if you want root-cause for how we got here, look into media ownership laws in the US, and into the Citizens United court case (check out the 5-4 podcast for a fun/horrifying take). Single entities can own unlimited reach of media, which didn’t use to be the case as recently as the very early 2000s IIRC, and Citizens opened up unlimited corporate spending in elections, with exactly the implications you’d expect for e.g. foreign spending on US elections. Er, I mean, the actual root cause is kinda the system of elections the slave states pushed into the constitution, if you wanna go way back, but the proximate cause of the current political landscape is that.


Note: Running out the door, here is a brief summary that hasn't been deeply considered.

======

Example: About a week ago I casually commented at the gym that I thought Trump would win, and it turned into a very long conversation/debate with several people in the gym.

I asked each of them to show me their filter bubbles... once I had explained the concept to them. 2 Spaniards, 2 Andorrans, 1 French guy. The general consensus seemed to be that Trump was a major step back socially and would pollute the environment, not care about climate change, etc.

Their information came from traditional news, Instagram/TikTok, and in one case Reddit/HN/X. The only one who was even open to hearing my views during the conversation was the guy who read Reddit/HN/X. Everyone else had their minds made up.

Today at the gym, I had the 2nd part of this conversation with 3 of the guys who were there this morning. The general takeaway, as the guy who reads Reddit/HN/X summed it up, was: "Europe has yet to reach 'peak tolerance', whereas it appears the US is already there." They came to this realization via the issue of immigrants in both Spain and France who don't assimilate, which was a hot-button issue for them.

I kinda agree with this sentiment. I think that Europe (at least the part I'm in) doesn't seem to have a great immune system for people who abuse the system, and it generally punishes tall poppies both socially and economically. The US is completely different: tall poppies are celebrated, and if you fail or get sick you have no safety net.

For me the difference in world views is best summed up by what people are focusing on. Climate, equality, and tolerance are the key issues I see pushed heavily in Europe... I see this as stemming from the "tall poppy" syndrome that is prevalent in both Spain/France. In the US, people care about other things and are generally focused on things that directly impact them. That is what this election was about. Less focus on perceived injustices or injustices of the past and more focus on making the future better with something different.

Is Europe a great place? For sure... but it has wildly different problems and world views than the US does. I think it is hard to appreciate that until you've been immersed in both cultures long enough.

In general, I'd just say there is more nuance to everything than our brains can handle. My little mission has been to try and bring back nuance into the conversation. Black and white thinking is lazy. Nuance exists, find it and challenge your filter bubble.

I'm excited to see how this casual gym conversation continues.


I'm Italian, and the view I hear around me (left/center-left/center friends and relatives) is that he's a crazy nut job who's gonna hurt Europe as well as limit abortion rights and similar things. I'd say there's not much talk about the economy because we don't understand shit about our own economy either, let alone a foreign one... except that we think the rich are gonna get much richer, he's gonna cut social spending (i.e., less healthcare and similar things), and he's gonna crack down on immigration. I'm not expressing my own views on it; that's just what I think gets said about Trump here

Oh, he's also a misogynistic, convicted felon who spews lies. This is partially the view in my bubble; if I had to say it entirely I would get flagged lol


I wonder how it compares to the filter bubble with regards to North Korea. What if it really is a communist utopia?


I just bought the EU version of this in 55 inch and candidly wish I'd gotten the 65. With the 55 I have to run it scaled, and my Mac crashes daily when connected to it.


We were an early adopter and sponsor of Tamagui. Impressive tech but massively unstable, even on minor version bumps. We even had a core team member working on the project, and there was breakage with every version. Lost more than 6 months of time relying on it. Ultimately we killed the project because of the decision to go with Tamagui. Just an FYI for those considering this. Abstractions on unstable abstractions is a good way to hate your project.


I feel sorry for your problems, but to be honest I'm reading the discussion here to understand what others think and who exactly buys into that sort of "too good to be true" ideal solution.

Because to me it just looks like another shiny thing that tries to be everything to everyone and ends up doing nothing well. They seem to be good salesmen, but less competent devs, at least when it comes to creating real innovation rather than repackaging old code/ideas.

I'm saying that because both of their websites are so bad that they create problems in Chrome, something that is rather rare nowadays with an adblocker; the Tamagui one repeatedly eats memory and crashes its tab, and the One site seems to have some rendering issues.

I did not investigate because I don't really care, but I'll say it doesn't inspire confidence at all, and in my opinion it is more than enough evidence to dismiss the whole thing as a toy project of semi-competent devs.

In another comment, I saw you drop the hypocrisy and get a bit mad about the support you gave for disappointing results. I think the one you are maddest at is yourself, because you feel you should have known, and to be honest, you probably should have. But we all learn every day...


That's a mean thing to say, but it shows you lack the ability to judge skill. I've been doing this for a long time at the highest level, and I love doing it, so I take care.

Our websites both get very high performance scores, and if you profile them, neither of them does anything at idle. The Tamagui site does a ton of fun animations to show off how well it works; it's possible you're on Linux or Windows with certain drivers causing issues. Mind telling me any of that info?

I'm an old school web developer, I've spent a lot of time on performance. I'm proud of the sites, they weren't easy to pull off. I checked in Chrome again just now and things feel very smooth, and I don't see memory runaway. Do you have any other extensions?


No problem I was mean first. And you are right, I don't have enough competency to judge skill in this particular field.

However, I was just relating what was happening on my computer, my experience. Out of the 30 or so tabs that were open at that moment, your site was the only one to cause problems, so I don't think it has anything to do with the particularities of my install (I do have many extensions; they just don't cause trouble on any of the many sites I browse).

Right now, I can't reproduce the issue so I can't say more nor investigate a bit with the dev tools (not that I care enough to, sorry).

I will close by saying that I do not need to be a 3-star chef to judge the quality of an expensive multi-course meal. You may want to argue that it is not the same thing or attack me personally but realistically it is much better to realize that your standards may be different and that for some people, your stuff isn't up to snuff.

I don't think your stuff is exceptionally bad, but it's a relatively simple page that loads a gazillion JS/CSS files and uses more memory than a YouTube page with buffered/playing video and fully loaded comments and sidebar. You may very well have some specific benchmark that tells you it's fast, but if it feels slow in use, that's clearly a problem. Maybe you are not focusing on the right thing.

I don't think your approach is very valuable for useful products but I guess that's just my opinion, good luck for the future regardless!


I loaded tamagui.dev, onestack.dev and a random YouTube video (not playing): in order, 107MB, 48MB, and 219MB of memory. Lighthouse scores are good. Profiling shows near 0 main-thread work while scrolling or doing common actions, and no memory leaks I can see after moving around the pages. Loading many JS files is not bad so long as it doesn't waterfall (actually good practice to an extent), and we have no waterfalls and prefetch everything on hover.

I don't doubt you saw something, I'll keep an eye out for issues. I checked in Safari and it looks similarly fine there.


This comment single-handedly made me do an enthusiasm 180. I'd find it helpful if the developers responded to it.

Just to give a bit more meat to my comment: these experiences are really helpful when you're in the position to adopt a potential time-saving (stack-cutting/unifying) technology. I find I get a huge benefit from word-of-mouth experiences from others who have tried such frameworks. Specifically compared to takes on the _idea_ of the technology.


I responded in a sibling comment.


I’m sorry to hear that. Tamagui is one of the biggest projects you’ll find in terms of scope - it’s not one library, it’s over 200, and it’s targeting 4 platforms, 4 JS engines, and has a featureset that is unmatched.

So I don’t think it’s surprising if you have regressions when upgrading 200+ packages for any project, and I also don’t think you should upgrade it honestly more than once every six months.

Second point is that we also moved (and had to move) much faster than a typical project. Because we had so many packages and were solving very hard problems, we had a lot of ground to cover. We never broke any API surfaces in a minor or patch, but we weren’t afraid to improve our underlying code, because if we didn’t, on a project of that scale, we’d have quickly succumbed to tech debt. A good chunk of our UI kit was under the “beta” tag until recently.

That said I’ll take responsibility for it. I wonder when it was that you adopted it; I think we’ve gotten a lot better over time as things have stabilized. We stopped adding features over a year ago now, and have entirely focused on stability, performance and documentation since.

A final note is that I have never been full-time on Tamagui, it’s always been a side project of mine while I had a full time job. We have a large test suite, but again, the surface area of the project is simply massive.

Now that I am full time on One and Tamagui, I am looking forward to proving that we know what we’re doing. We’ve hired great developers and have greatly expanded testing even just in the last few months.

One is a lot simpler than Tamagui. Like… not 10x simpler, closer to 1000x simpler. I keep telling people: a universal framework is surprisingly simple compared to a universal style library + UI kit.

Editing to add one more point of context. One of the best and pickiest developers I know - Fernando Rojo - has been using Tamagui since the beginning. He even wrote his own style library before Tamagui. He recently started on a new side project that he wants to turn into a real venture down the road, and he is using Tamagui for it. It’s actually one of my proudest accomplishments. It’s an extremely strong signal imo for someone who is known for NIH and for being very picky to choose such a big dependency like Tamagui again.


Nate - We discussed the issues extensively on Discord and you even did a video call with me and my team. The entire core theming structure changed multiple times, and we worked with Eshan (whom you represented as being a great developer and contributor to Tamagui) to solve the issues… just to have them break again on a minor version for you to release Takeout. Having donated over $1,000 to the project, spent a ton of time in the Discord, and messaged with you more than a handful of times, I feel like this comment isn’t genuine but is to save face on HN.


The theme API has not changed, we did refactor the logic behind it, and yes we've had regressions. It's a complex system. But we've always followed on with a fix, and we've added many, many tests in that area.

Listen, I really am sorry that you guys had trouble, and I really appreciate the support you gave. Did you come to me when you had these regressions? If there's one thing it's that I am responsive to issues, especially with sponsors.

Ehsan is a decent developer, and I'm not sure what you mean by "to release takeout". But I don't appreciate the end of your comment, I'm being pretty frank here and admitting we had regressions.

It feels a bit more like you want to offload a failure entirely to me. I don't think it's a smart strategy to be updating hundreds of dependencies many times within a few month period. Tamagui has generally been stable and we have tons of success stories, and I get thanks literally daily from people.

Again, I don't deny that we have had regressions and instability, especially around January when we had a bug slip through that we missed for a week, and that was hard to fix due to a change that was merged after it. Those were two of the worst weeks of my life - I spent an entire period of 4 days over a weekend, with 3 hours of sleep on average each night, fixing that issue.

But I think the real and true story is that Tamagui has been generally stable for quite a long time, and if you compare it to the 50+ libraries you'd need to glue together to make up a similar surface area it likely would be more stable.


> Tamagui is one of the biggest projects you’ll find in terms of scope - it’s not one library, it’s over 200, and it’s targeting 4 platforms, 4 JS engines, and has a featureset that is unmatched.

Tamagui is impressive, but TBH it's sounding like this is a bug, not a feature.

> So I don’t think it’s surprising if you have regressions when upgrading 200+ packages for any project, and I also don’t think you should upgrade it honestly more than once every six months.

In my experience it gets exponentially harder to upgrade packages the longer you let them get out of date.


For the first part - what's the bug? It's certainly ambitious, and perhaps too much so, but I think we've managed to pull it off. To take more responsibility here - I think we should have waited longer to cut "1.0." One of the reasons we did was because we were already more feature-complete and generally stable than many competitors, but that's not a good enough reason.

On the second part - I think that's true if you're upgrading across many minors, or a major. But we've not released a major in nearly 2 years. For the most part you can upgrade pretty big gaps in Tamagui. And again, we're incredibly responsive to people who give us a reproduction of any issue.


I am also an early adopter of Tamagui and can say that the documentation has gotten much better than it was before, but it still needs lots of improvements. I myself struggle with the many packages we have installed with Tamagui, and at this point I don't know which ones to keep and which can be removed.


We are working on documentation more actively than before. Please do reach out to me with any questions or feedback. Now that I'm full-time for the first time this is something I can actually focus on.


Thanks Nate! One immediate piece of feedback I would like to give is to improve the configuration/setup documentation, like config/v1/v2/v3, and packages like @tamagui/animations-react-native/babel-plugin/config/font-inter/react-native-media-driver etc. There is very little about these packages in the documentation, or maybe it's just not easily accessible.


Thanks - noted to the team. We're moving a lot faster now that I'm full-time and I finally have two really great devs alongside. Look forward to improving this and having people say "actually it's really good now" in a few months time.


Kinda hijacking the thread but... My hypothesis is that we will look back and see that Streetview imagery is a goldmine for AI and will be a path to being able to answer HARD questions about the real world.

The insane thing is there are only like 7 companies that actually have meaningful datasets.

I spent 1.5 years studying the geospatial space and went so far as buying a Mosaic51 and scanning the entire country of Andorra as a test before looking at buying the camera manufacturer.

Ultimately I walked away from buying the company after issues with the family office I was working with... but long story short I believe streetview imagery will be a gold mine in the future.

If anyone is working in the space, feel free to ping me; happy to chat and even make intros to the space. If you are training an AI, ping me as well. Happy to open my images up to the right person to make something "country scale" (160k images... every 3 meters, with RTK-labeled GNSS data).


Anyone who uses a bike to get around will know the routine of streetviewing their route ahead of time to see how dangerous it is, if the bike lanes are, in fact, real, etc.

I also have a real estate project and want to work on AI analysis of local streetview to learn about the neighbourhood.

I've wanted to build something to automate this with AI but haven't had time.

I would love to chat!


I have always wondered if/when Google will start using their streetview data to improve mapping. They could (in theory) generate directions like "turn left after the green building" and find speed limits, road surface, width, and potentially even bus routes and stuff like that. They don't seem to, though. The routes they generate are always incredibly naive when it comes to actual road type, like "let's go up this single track road with a 20% gradient to save 2 minutes".

Curious whether you think this is more than just improving mapping/routing related stuff, though.


They do use streetview data to improve mapping, particularly for things like speed limits (like you mentioned) and other signage (street names, identifying which intersections have stop signs, etc.)


> "let's go up this single track road with a 20% gradient to save 2 minutes"

Not sure this is a good example: elevation data should be good enough to avoid this kind of road. See e.g. https://sonny.4lima.de for Europe.

For your general thought, see here: https://blog.google/products/maps/google-maps-101-how-we-map...

> It all starts with imagery

> Street View and satellite imagery have long been an important part of how we’re able to identify where places are in the world—it shows us where roadways, buildings, addresses and businesses are located in a region, in addition to other important details—such as the town’s speed limits or business names.

So I guess what you're proposing is already done for several years but in more subtle ways.

Remember that Google Maps doesn't have the manpower of OSM. Hence the need for automation.


> Not sure this is a good example : elevation data should be good enough to avoid this kind of roads.

I'm not sure elevation data is enough, nor is speed limit. In many places, like the Cotswolds, the main route includes a 20% gradient. The difference is the road will be wide with overtaking lanes etc. and often a reduced speed limit. Then you have places like Devon where there are national speed limit (the highest) roads with such poor visibility they are best avoided unless you have no other choice. I find it's hard to know what the quality or "character" of a road will be until you actually get there, but Google seems to treat them all equally.


I did a project that attempts to generate these types of instructions: https://map2seq.schumann.pub/nllni/demo/


> generate directions like "turn left after the green building"

I don't know about the rest of the world, but in the Los Angeles metro area, Google Maps already gives directions like this. "Turn left after the Carl's Jr.", "Turn right after the Starbucks". I notice it's usually done in areas where street signs are hard to see, but there is a clear landmark for the driver e.g. the golden arches of a McDonald's.


To be clear, streetview was originally built to improve mapping. You can thank it for the wealth of street names and address labels.


Indeed. Google is a crawler, not only of the digital world.

"Crawling the physical web".

http://graphics.stanford.edu/projects/cityblock/


The team behind Panoramax is already applying AI analysis. See e.g. https://forum.openstreetmap.fr/t/detection-des-stationnement...

Most Panoramax discussions are in French, but you'll find links to English code.


Definitely look into Cursor if you like Claude Dev. It isn't apples to apples, but their chat mode is very similar.


This one? https://www.cursor.com/

Looks like it's not open source


Yeah, not open source.


I've been using AI models to do frontend design and candidly this doesn't hold a candle to just working directly with the base models.

With the base models you can ask: "Hey, make me 5 designs that fit xyz requirements" and it will go to work. If you already know frontend design, you can take that up a level and pass it an emmet abbreviation of the rough code you want (e.g. something like div.card>img+div.p-4>h3+p) and say to only use tailwind colors of sky, zinc, and blue.

I am decidedly not your audience, but if anyone on HN is thinking about how to speed up frontend dev and they are a frontend dev, I'd suggest just messing with the base models more.

Once you get the hang of it, it can be like working with a designer whose feelings don't get hurt... which I love.

"Meh, I don't like those designs, generate me 5 more and feel free to increase the model's temperature up quite a bit to see if you can come up with something cool."


I'm not really sure what you mean by "base models?"


Claude, GPT, Llama3.1 directly. The models are really capable if you prompt correctly.


They're likely referring to using the large language models (GPT, Claude, etc.) through their primary interfaces, because you get more control by interacting with the LLM directly.


Thank you for this explanation. A few things just clicked for me.


Using this pattern has been a gift. It is really freeing to write templates without a framework and without worrying about SSR.

Years back I dove completely in on Svelte and even wrote my own SSG, Elder.js, that helped kick off the Islands/partial-hydration craze. It was fun, but it burnt me out on open source.

Today, I only pull in a template language and deal with SSR/hydration for pages that really need it.

I want JS projects that don't break 6 months after the last build. The way you get that is by not adding more dependencies and not chasing the latest craze.


> I want JS projects that don't break 6 months after the last build

I was somewhat recently trying to get a 3 year old JS project working. I felt like I was alone in an Egyptian pyramid wondering if I had missed the funeral.


> I want JS projects that don't break 6 months after the last build. The way you get that is by not adding more dependencies and not chasing the latest craze.

I've settled on ``document.createElement()``: it's a bit rough visually, but that's pretty much the only downside (I've found the time gained with templates to be negligible compared to the overall development time).

A big bonus of dealing with genuine objects is that there's room to extend them to e.g. store additional state/data/handlers/etc., which might otherwise be cumbersome to set via attributes & co.


An even bigger and underappreciated bonus of dealing with objects is that you're working at the correct abstraction layer. Using templates that glue strings together is doing the Wrong Thing, and this is how you get XSS (or, outside of HTML, the injection class of vulnerabilities specific to what you're writing, e.g. SQL injection when gluing strings into queries).


By being imperative, you describe how to get to the state you desire. By being declarative, you split the concerns of “how it should be” and “how to get there”, which doesn’t come without downsides (abstractions are leaky) but does have its benefits: the latter logic you’d maintain in one place, and the former is usually easier to reason about.


I'm not talking about imperative vs. declarative, but about working at the level of language you're dealing with. Using string concatenation for building HTML or SQL is like doing arithmetic like "123" + "456" = "123456" -- you're using the wrong addition operator.


Describing the state by design implies not working at the same level as getting to that state, though—I’m not sure if there is some subtlety I’m missing—and to me there seem to be obvious benefits (and downsides, as mentioned above) to not explicitly manipulating DOM where I want to just describe how a page should be structured.


Perhaps this can help make things concrete: here's how you could create the HTML by manipulating the DOM (the code indentation mimics the HTML indentation):

   // p: the outer wrapper
   let p = document.createElement("span");

    // b: the button that opens the modal
    let b = document.createElement("button");
    b.innerText = btnText;
    if (btnClass) b.classList.add(btnClass);

    // m: the modal itself, hidden by default
    let m = document.createElement("span");
    m.style.display = "none";
    m.classList.add(modalClass);

     // q: the modal's content container
     let q = document.createElement("span");
     q.classList.add(modalContentClass);

      // c: the little close cross
      let c = document.createElement("button");
      c.innerText = "×";
      c.classList.add(modalBtnCloseClass);

      // r is an "external" HTMLElement hosted by the modal
      r.classList.add(modalContentClass);

     q.append(c, r);

    m.appendChild(q);

   p.append(b, m);
It's not that different from writing the HTML (with or without a template). But IMO the great thing here is that, at the end of that chunk of code, you've got pointers to every node of interest. You can then register handlers and target arbitrary nodes in those handlers, without having to navigate through the HTML (e.g. by setting ``id`` or other attributes to help identify them, or by relying on a static HTML structure).

And as the parent points, this should prevent some amount of unexpected HTML injection.

There's still a separation between the view and the page structure.


I know how to manipulate the DOM. You are not invalidating my point. This way involves a lot more mental overhead any time you need to define a structure for a page, compared to a declarative way of writing HTML as a string and delegating DOM manipulation to separate imperative logic (or indeed the browser's native parser).


> This way involves a lot more mental overhead any time you need to define a structure for a page,

How so?

The "imperative logic" of the previous example is parametrized by a few node pointers. Assuming I understand your approach correctly, your separate imperative logic would still need to be parametrized by some node pointers as well.

Except that, because you wouldn't have direct access to them, you'd need to use class names or IDs to retrieve them. Hence you would need an extra layer to join the two parts that you've separated, which in my opinion is more mental overhead: the management of those attributes comes at a cost (e.g. naming conflicts, more class names to remember, confusion between class names).

Perhaps a more concrete example (bits of code) of your approach might help make your point clearer.


Why do you assume there is a requirement to access nodes directly, and why would that be a typical requirement?

(There are ways to achieve that more declaratively than by building the entire DOM imperatively, JSX and React’s refs is one approach, but I wonder why you assume direct access is always needed in the first place.)


> Why do you assume there is a requirement to access nodes directly, and why would that be a typical requirement?

I haven't made a case for it to be a requirement, but rather for direct access to be 1) trivial to achieve 2) more convenient than all of the (Vanilla JS) alternatives that I know of (attributes-based solutions).

> JSX and React’s refs

From a quick glance, I'm not sure this brings anything more than keeping a handful of node pointers around. But it should, at least in some ways, be better than attribute-based solutions.


Well, I have outlined why declarativity is good, so as far as I’m concerned the choice to go imperative and abandon the benefits of declarativity in order to make access to nodes trivial should probably be accompanied by understanding of why that access would be needed in the first place.


To perform any action on the DOM, you need access to the nodes: registering handlers, manually triggering events, setting/getting values, rebuilding them, etc.

I'd encourage you to toy with a concrete example, in case you (or I) are missing a subtle point; the previous piece of code is the "view" for a modal "component" which hosts an external HTMLElement (r). See how you'd implement it with both approaches.

With both, the "view" and the logic are still separated. The view is merely encoded differently.


> To perform any action on the DOM, you need access to the nodes

Not really. With a declarative approach you can just give HTML to the browser (or some templater) and get your page completely laid out without having any access to any nodes.

In fact, a declarative approach also allows you to connect handlers with ease (onclick and similar attributes are very much a thing).

Using onclick handlers doesn’t scale well if we are talking about Web apps with complex GUIs, but then the naive imperative approach of createElement scales potentially even worse. The approaches that do scale tend to favour a declarative approach with isolated imperative logic (such as in event handlers).


> In fact, declarative approach does also allow to connect handlers with ease (onclick and similar attributes are very much a thing).

Yup, but it's more limited: consider the modal: you can't — without relying on attributes, or worse, the page's structure — hide the modal when clicking on the little cross, in an onclick attribute.

> The approaches that do scale tend to favour declarative approach with isolated imperative logic

I think it's key regardless of the approach: you want to break down your pages in pieces (e.g. "components"), have them eventually communicate with each others, while keeping those communications as local as possible.

Very much in the same way we design programs over small units (procedure, function, object, etc.).


Modals are generally involved in a full-blown Web app. For an average post or article, you don’t need node handles, pure declarative HTML is enough. At the same time, implementing a full-blown app imperatively directly with createElement would result in an unmaintainable mess. In other words, there seem to be very few use cases where such approach is justified.


> For an average post or article, you don’t need node handles

It goes without saying...

> At the same time, implementing a full-blown app imperatively directly with createElement would result in an unmaintainable mess

I disagree, and I've built non-trivial UIs with createElement; a key is to have the discipline to keep the "components" well-isolated.


For those looking for something more robust... this is where I'd look: https://github.com/kitajs/html.

Also HonoJS (hono.dev) has a built-in html helper which is quite good and which I've used extensively.


kitajs is already a fork of https://github.com/nicojs/typed-html . What makes this one more robust or stable, then?

