
I think what you mean is that if a Square that is also a Rectangle can't be made to be non-square, then inheritance works. Which, fair enough, but I think there are still other good reasons that inheritance is a bad approach. Interfaces (and traits) are still way better.

IMO managing revisions makes more sense than managing patches. Yes yes, I know about commutative patches and all that, and no I don't need that. I do reorder patches sometimes, and when they yield the same tree that's interesting, but not that interesting.

You need both. Even with Git (a snapshot-based system), some commands take a hash as a revision and some take a hash as a change (e.g. checkout/rebase/reset/bisect vs cherry-pick/revert/amend). You'll hear both in common parlance too ("this commit doesn't pass tests"/"this commit is deployed"/"rebase onto this commit" vs "I did the last commit"/"this commit is too big"/"that commit broke X").

You really have to think of commits as both, neither view is entirely sufficient.


I only think of commit hashes. Branches and tags and any refs are just names for commit hashes.

We were never talking about branches and tags. I think possibly you don't know what "revision" means?

A commit hash identifies both a patch and a revision, that's my point.


> A commit hash identifies both a patch and a revision, that's my point.

Strictly speaking, no. A commit hash identifies a tree. There is no patch stored, nor is a patch bound into the commit hash in any way. The patch is indirectly implied as the differences between the new commit's tree hash and the preceding commit's tree hash.
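A toy sketch of this point (a hypothetical model for illustration, not Git's actual object format): a commit records a content-addressed snapshot plus a parent pointer, and the "patch" only exists as the computed difference between two snapshots.

```python
import hashlib
import json

def tree_hash(tree):
    """Content-address a snapshot: hash the full path->contents mapping."""
    return hashlib.sha1(json.dumps(tree, sort_keys=True).encode()).hexdigest()

def make_commit(tree, parent):
    # A commit stores a tree hash and a parent hash -- no patch anywhere.
    return {"tree": tree_hash(tree), "parent": parent}

def implied_patch(old_tree, new_tree):
    """The 'patch' is derived on demand by diffing two snapshots."""
    paths = set(old_tree) | set(new_tree)
    return {p: (old_tree.get(p), new_tree.get(p))
            for p in paths if old_tree.get(p) != new_tree.get(p)}

v1 = {"a.txt": "hello"}
v2 = {"a.txt": "hello", "b.txt": "world"}
c1 = make_commit(v1, None)
c2 = make_commit(v2, c1["tree"])
print(implied_patch(v1, v2))  # {'b.txt': (None, 'world')}
```

The commit object itself never changes shape whether you later read it "as a revision" (the whole tree) or "as a change" (the diff against its parent).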


Civil asset forfeiture should not be considered constitutional, and some day a test case will make it to the SCOTUS. As for this case though... the pardon does not make Ulbricht innocent! On the contrary, accepting the pardon implies guilt. So the pardon need not and might not extend to forfeitures. Though it's also possible that the presidential pardon could extend to the forfeitures, but I suspect that's a constitutional grey area.

Cases have made it to the Supreme Court --- recently! --- and it held up just fine. This is another message board fixation. I'm sure it's abused all over the place. It wasn't in this case.

If the cases start with: "US v $200,000", that probably needs to go away.

I doubt I would be able to get away with "bb88 v $2B". It should so belong to me.



What part of this makes you think CAF is on shaky constitutional ground? This is a CAF case with reach-y fact patterns for the government and they won it handily. We didn't even get close to the question of whether CAF is itself constitutional; the court simply presumed it.

This was about the timing of a hearing about forfeiture, not about whether forfeiture is OK. I've not read this case yet, but now that I'm aware of it I'm keen to. I'll comment again later.

Civil asset forfeiture connected to an actual crime should be. You should not own the guns you used to murder someone else, e.g.

Otherwise it's "your $100,000 in cash looks guilty to me."


Right, if you've been found guilty, then your assets can be forfeited.

  All persons born or naturalized in
  the United States, and subject to
  the jurisdiction thereof, are
  citizens of the United States and
  of the State wherein they reside.
The SCOTUS long ago held that "and subject to the jurisdiction thereof" does mean something. For example it means that children of diplomats and tourists are not U.S. citizens because those diplomats and tourists are not "subject to the jurisdiction thereof". There were debates in the Congress at the time about this.

Whether that applies to other cases is yet to be seen. But let's not pretend like it's obviously clear that the 14A means birthright citizenship no matter the circumstances, or that there aren't controversies here for the courts to settle.


For the downvoters, I was describing the setup, not what must be decided. This might help: https://www.youtube.com/watch?v=3EvBv53XBNg

That was very interesting! Thanks for sharing!

That's an argument Trump is betting on. I find it very disappointing that his opponents shout that it's clearly unconstitutional without addressing the point.

A reasonable way to approach it would be to try to understand what the authors meant back then. Maybe they meant something completely different, but why not address the obvious point the other side is making?

I also think granting citizenship to children of people who are in a country illegally is silly and I find it very disappointing that Trump's opponents are so ideologically driven and frankly blind to popular sentiment that they can't even admit it. They make it sound like an attack on human rights or some racist policy while the whole thing is ridiculous. It's rewarding people for breaking the law - the worst kind of policy you can come up with. There is a reason very few countries have this kind of rule.


Eh, only one hard thing then, because as hard as Unicode is, timezones are way harder.

And the answer is to use `gmtime()`, which AIX doesn't have and which Windows calls something else, but, whatever, if you need to support AIX you can use an open source library.

AIX has gmtime [0], too. Since at least 7.1.

[0] https://www.ibm.com/docs/en/aix/7.1?topic=c-ctime-localtime-...


Adding or subtracting "months" is inherently difficult because months don't have set lengths, varying from 28 through 31 days. Thus adding one month to May 31 is weird: should that be June 30 or July 1 or some other date?

Try not to have to do this sort of thing. You might have to though, and then you'll have to figure out what adding months means for your app.
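One common convention, sketched here (and only one of several defensible choices), is to clamp the day of month to the target month's length:

```python
import calendar
import datetime

def add_months(d, months):
    """Add months to a date, clamping the day to the target month's last day."""
    # Zero-based month arithmetic makes the year carry easy to compute.
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

print(add_months(datetime.date(2024, 5, 31), 1))  # 2024-06-30 under this convention
```

Note that this convention is lossy: May 31 + 1 month and May 30 + 1 month both land on June 30, so "add a month, subtract a month" does not round-trip.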


Welcome to Business Logic. This is where I'd really like pushback to result in things that aren't edge cases.

However you also run into day to day business issues like:

* What if it's now a Holiday and things are closed?

* What if it's some commonly busy time like winter break? (Not quite a single holiday)

* What if a disaster of some kind (even just a burst water pipe) halts operations in an unplanned way?

Usually flexibility needs to be built in. It can be fine to 'target' +3 months, but specify it as something like +3m(-0d:+2w) (so, add '3 months' ignoring the day of month, clamp dom to a valid value, allow 0 days before or 14 days after).
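A sketch of how such a target-plus-window rule might be evaluated (the `+3m(-0d:+2w)` notation above is the commenter's hypothetical, not any standard):

```python
import calendar
import datetime

def target_window(start, months, before_days, after_days):
    """Compute a target date plus an acceptable window around it.

    Adds `months` with the day-of-month clamped to a valid value, then
    allows `before_days` before and `after_days` after the target.
    """
    total = start.month - 1 + months
    year, month = start.year + total // 12, total % 12 + 1
    day = min(start.day, calendar.monthrange(year, month)[1])
    target = datetime.date(year, month, day)
    return (target - datetime.timedelta(days=before_days),
            target + datetime.timedelta(days=after_days))

# "+3m(-0d:+2w)": 3 months out, 0 days early, up to 14 days late.
lo, hi = target_window(datetime.date(2024, 11, 30), 3, 0, 14)
print(lo, hi)  # 2025-02-28 2025-03-14
```

Holiday and disaster handling would then become a search for an acceptable business day inside `[lo, hi]` rather than a hard failure.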


Do all edge cases need to be handled? Just be late when there's a holiday.

72 business hours sounds more like human time than computer time anyways.


Yes, basically, they do need to be handled, but you have to define that for your own case. It's a real pain, if you have to do month math.

You're missing out if you don't know about PostgREST.

The proprietary test suite for SQLite3 is much much larger still. The battle-testedness comes in great part from that.

Is that where the 10x more lines came from? Writing more "testable" code?

Oh no, SQLite3 is a lot more featureful than SQLite2. The proprietary test suite is what makes SQLite3 so solid.

This makes me wonder. Is anyone practicing TDD with genAI/LLMs? If the true value is in the tests, might as well write those and have the AI slop be the codebase itself. TDD is often criticized for being slow. I'd seriously like to compare one vs the other today. I've also heard people find it challenging to get it to write good tests.

I'd sort of invert that and say it's better to use LLMs to just generate tons more test cases for the SQL DBs. Theoretically we could use LLMs to create hundreds of thousands (unlimited, really) of test cases for any SQL system, where you could pretty much certify the entire SQL capability. Maybe such a standardized test suite already exists, but it was probably written by humans.

At that point, you'd get a ton more value from doing Property Testing (+ get up and running faster, with less costs).
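For context, a minimal hand-rolled property test against SQLite (no framework; Hypothesis or a similar library would generate and shrink cases far more systematically): instead of enumerating examples, you assert an invariant over randomly generated inputs. Here the property is that SQL `SUM` over inserted rows always matches the Python-side sum.

```python
import random
import sqlite3

def sum_property_holds(values):
    """Property: SQL SUM over the inserted rows equals Python's sum."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (n INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(v,) for v in values])
    (sql_sum,) = con.execute("SELECT COALESCE(SUM(n), 0) FROM t").fetchone()
    con.close()
    return sql_sum == sum(values)

random.seed(0)
for _ in range(100):
    case = [random.randint(-10**9, 10**9) for _ in range(random.randint(0, 50))]
    assert sum_property_holds(case), case
print("100 random cases passed")
```

The payoff is that one property covers an open-ended family of inputs, including the empty table, which an example-based suite can easily forget.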

If I had to have either the code or the tests generated by an LLM, I'd manually write the test cases with a well-thought-out API for whatever I'm testing, then have the LLM write tests that implement what I thought up, rather than the opposite, which sounds like a slow and painful death.


I hadn't heard of "Property Testing", if that's a term of art. I'll look into it. Anyway, yeah, in TDD it's hard to say which part deserves more human scrutiny: the tests or the implementations.

Are you sure that LLMs, because of their probabilistic nature, wouldn't be biased against certain edge cases? Sure, LLMs can be used to great effect to write many tests for normal usage patterns, which is valuable for sure. But I'd still prefer my edge cases handled by humans where possible.

I'm not sure if LLMs would do better or worse at edge cases, but I agree humans WOULD need to study the edge case tests, like you said. Very good point. Interestingly, though, LLMs might help identify more edge cases we humans didn't see.

TDD suffers from being inflexible when you don't fully understand the problem. Which, in software, is basically always.

Every time I've tried it for something, I make no progress at all compared to just banging out the shape that works and then writing tests to interrogate my own design.


Happy that it's not just me. I tried it a couple of times, and for small problems, I could make it work, albeit with refactorings both to the code and tests.

But for more complicated topics, I never fully grasped all the details before writing code, so my tests missed aspects and I had to refactor both code and tests.

I kinda like the idea more than the reality of TDD.


TDD is supposed to teach you that refactoring both the code and the tests is "normal": iow, get used to constant, smallish refactors, because that's what you should be doing.

Now, the issue with badly defined problems is not that it's just badly defined, it's also that we like to focus on the technical implementation specifics. To do TDD from scratch requires a mindset shift to think about actual user value (what are you trying to achieve), and then go for the minimum from that perspective. It's basically an inverse from common architecture approach, which is design data models first, and start implementing next. With TDD, you evolve your data models along with the code and architecture.

And it is freaking hard to stop yourself from thinking too far ahead and letting tests drive your architecture (code structure and APIs). Which is why I also frequently prototype without TDD, and then massage those prototypes into fully testable code that could have been produced with TDD.


I think in general people tend to overdo TDD if they do TDD, aiming for a 100% test coverage which just ends up doing what you and parent mentions, solidifies a design and makes it harder to change.

If instead every test is well intentioned and focuses on testing the public API of whatever you test, not making assumptions about the internal design, you can get well-tested code that is also easy to change (assuming the public interface is still OK).


It's extremely hard to really do TDD and get code that's hard to change. If you persevere with a design that's hard to change, every single change in your failing-test-fix-implementation TDD cycle will make you refactor all your tests, and you'll realise why the design is bad and reduce coupling instead.

What really happens is that people write code, write non-unit "unit" tests for 100% coverage, and then suffer because those non-unit tests are now dependent on more than just what you are trying to test, all of them have some duplication because of it, and any tiny change is now blocked by tests.


You can get 100% coverage by focusing on testing the public API too. These two things are completely orthogonal.

Dude, if you have the LLM write the tests, then you have no confidence it's testing what you think it is, making the tests worthless.

Dude, I was suggesting that you, not the LLM, write the tests in this scenario.

Re: copy-on-write (CoW) B-tree vs append-only log + non-CoW B-tree, why not both?

I.e., just write one file (or several) as a B-tree + a log, appending to the log, and once in a while merging log entries into the B-tree in a CoW manner. Essentially that's what ZFS does, except it's optional when it really shouldn't be. The whole point of the log is to amortize the cost of the copy-on-write B-tree updates, because CoW B-tree updates incur a great deal of write amplification due to having to write all new interior blocks for all leaf node writes. If you wait to accumulate a bunch of transactions, then when you finally merge them into the tree you will be able to share many new interior nodes across all those leaf nodes. So just make the log a first-class part of the database.

Also, the log can include small indices of log entries since the last B-tree merge, and then you can accumulate even more transactions in the log before having to merge into the B-tree, thus further amortizing all that write amplification. This approaches an LSM, but with a B-tree at the oldest layer.
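A toy sketch of the write path described above (a dict stands in for the CoW B-tree; the point is only the amortization structure, not an on-disk format):

```python
class LogStructuredStore:
    """Writes land in an append-only log; the 'B-tree' is rewritten
    copy-on-write style only when the log passes a threshold, so one
    expensive tree rewrite absorbs a whole batch of transactions."""

    def __init__(self, merge_threshold=4):
        self.tree = {}    # stands in for the CoW B-tree
        self.log = []     # append-only log of (key, value) entries
        self.merge_threshold = merge_threshold
        self.merges = 0   # count of tree rewrites, to show amortization

    def put(self, key, value):
        self.log.append((key, value))  # cheap sequential append
        if len(self.log) >= self.merge_threshold:
            self._merge()

    def get(self, key):
        # Newest log entry wins; fall back to the merged tree.
        for k, v in reversed(self.log):
            if k == key:
                return v
        return self.tree.get(key)

    def _merge(self):
        # One 'CoW' rewrite absorbs the whole accumulated batch.
        new_tree = dict(self.tree)
        new_tree.update(self.log)
        self.tree, self.log = new_tree, []
        self.merges += 1

s = LogStructuredStore()
for i in range(10):
    s.put(f"k{i}", i)
print(s.get("k9"), s.merges)  # 9 2
```

Ten writes cost only two tree rewrites here; reads stay correct throughout because lookups consult the log before the tree.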

