Piezoid's comments

I use the event loop to defer the (IO-heavy) post-processing of some requests without involving an off-process task runner like Celery. I use a custom implementation of this: https://django-simple-task.readthedocs.io/how-it-works.html It is way simpler when you don't need strong guarantees and checkpoint persistence.
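
For illustration, a minimal asyncio sketch of the idea (hypothetical names, not django-simple-task's actual API): schedule the post-processing coroutine on the running event loop and return the response without waiting.

    import asyncio

    _pending = set()  # keep references so tasks aren't garbage-collected early

    def defer(coro):
        """Fire-and-forget: run a coroutine on the current event loop."""
        task = asyncio.get_running_loop().create_task(coro)
        _pending.add(task)
        task.add_done_callback(_pending.discard)

    async def post_process(request_id):
        await asyncio.sleep(1)  # stand-in for IO-heavy work (HTTP calls, emails, ...)
        print(f"post-processed {request_id}")

    async def handle_request(request_id):
        defer(post_process(request_id))  # no Celery, no external broker
        return {"id": request_id}        # respond without waiting for the work

    async def main():
        print(await handle_request(42))
        await asyncio.gather(*_pending)  # a real server just keeps the loop running

    asyncio.run(main())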



Code reuse is achievable by (mis)using the preprocessor. It is possible to build a somewhat usable API, even for intrusive data structures (e.g. the Linux kernel and klib [1]).

I do agree that generics are required for modern programming, but for some, the complexity cost of modern languages (compared to C) and the importance of compatibility seem to outweigh the benefits.

[1]: http://attractivechaos.github.io/klib


I can think of many specialized applications where the versatility is superfluous, while the size of the model prohibits inference on the edge.

Do you know if there are methods available for shrinking a fine-tuned derivative of such big models?

Besides generating a specialized corpus using the big model and then training a smaller model on it, is there a more direct way to reduce the matrix dimensions while optimizing for a more specific inference problem? How far can we scale down before needing a different network topology?


You can quantize the model to 8-bit tensors instead of 16-bit bfloats or 32-bit floats. Nvidia has dedicated hardware in their latest series of GPUs so that they can do inference with 8-bit quantization quickly, and it cuts the model's memory footprint to roughly 1/2 to 1/4 of the original. There are other tricks that can be used, like sparse tensors, which have been applied to language models and can reduce the memory overhead by 10-100x.
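
As a rough illustration of the quantization route, a PyTorch dynamic-quantization sketch (the tiny model here is a stand-in; real int8 GPU deployments go through vendor toolchains instead):

    import torch
    import torch.nn as nn

    # Stand-in for a fine-tuned transformer block
    model = nn.Sequential(
        nn.Linear(768, 3072),
        nn.ReLU(),
        nn.Linear(3072, 768),
    )

    # Post-training dynamic quantization: weights stored as int8,
    # activations quantized on the fly at inference time.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 768)
    print(quantized(x).shape)  # same output shape, ~4x smaller linear weights vs fp32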

See also: "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression"


As far as I know, there are many ways to compress a model, such as quantization, pruning, and knowledge distillation.

By the way, while browsing the OpenBMB repo I found a package called BMCook, which implements several of these algorithms and also compares itself with other model compression packages. Hope this helps.


Jaimie Mantzel: a guy in Panama building atypical low-tech, off-grid solutions and small boats: https://www.youtube.com/user/JMEMantzel


This add-on uses SolveSpace for solving constraints.


There is a Python API for its constraint solver: https://pyslvs-ui.readthedocs.io/en/stable/python-solvespace... However, the solid geometry engine is tightly coupled with the application and not exposed through a public API.


> - good for modeling mechanical parts / lousy for modeling anything organic-looking

SDF modeling is great for organic shapes.

On the surface, it feels similar to OpenSCAD, since CSG operations are natural primitives (min/max/...). Fillets and chamfers are easier to produce than in OpenSCAD: http://mercury.sexy/hg_sdf/#snippet
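
To make the min/max point concrete, a toy NumPy sketch (function names and the smooth-min blend are generic illustrations, not taken from hg_sdf):

    import numpy as np

    def sphere(p, r):                      # signed distance to a sphere of radius r
        return np.linalg.norm(p, axis=-1) - r

    def box(p, b):                         # signed distance to an axis-aligned box
        q = np.abs(p) - b
        return np.linalg.norm(np.maximum(q, 0), axis=-1) + np.minimum(q.max(axis=-1), 0)

    def union(d1, d2):     return np.minimum(d1, d2)   # CSG booleans are just min/max
    def intersect(d1, d2): return np.maximum(d1, d2)
    def subtract(d1, d2):  return np.maximum(d1, -d2)

    def smooth_union(d1, d2, k):           # polynomial smooth-min: a filleted union
        h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
        return d2 * (1 - h) + d1 * h - k * h * (1 - h)

    p = np.array([[0.9, 0.0, 0.0]])        # sample point(s)
    d = smooth_union(sphere(p, 1.0),
                     box(p - np.array([1.5, 0.0, 0.0]), np.array([0.5, 0.5, 0.5])),
                     k=0.2)
    print(d)                               # negative: inside the blended solid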

Libfive is one implementation geared towards CAD work. One issue with SDFs for CAD is that it can be difficult to work on complex models. The representation is not minimal: two SDFs can represent the same volume, but act differently when you combine them with other bodies.

Libfive's "stdlib" is quite minimal. For anything fancy, you have to build your own "DOM" on top of it, in order to organize your parametric models. I have not enough experience for that, but I think that it should be possible to build a DSL that render to an SDF expression, while supporting introspection, constraint solving, AD for gradients, etc, with goals similar to CadQuery (I don't like the stack API either). This might also help with the normalization issue above.


YaCy is decentralized, but without the credit system. Some tokens, like QBUX, have tried to develop decentralized hosting infrastructure.

I have also been wondering how this would play out with some kind of decentralized index. Nodes could automatically cluster with other nodes whose users share the same interests, using some notion of distance between query distributions. The caching and crawling tasks could then be distributed among neighbors.
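
A toy sketch of the "distance between query distributions" part (purely illustrative, not an existing protocol): compare nodes' term-frequency histograms with Jensen-Shannon divergence and cluster the closest nodes.

    from collections import Counter
    import math

    def term_distribution(queries):
        counts = Counter(t for q in queries for t in q.lower().split())
        total = sum(counts.values())
        return {t: c / total for t, c in counts.items()}

    def js_divergence(p, q):
        terms = set(p) | set(q)
        m = {t: 0.5 * (p.get(t, 0) + q.get(t, 0)) for t in terms}
        def kl(a, b):
            return sum(a.get(t, 0) * math.log2(a.get(t, 0) / b[t])
                       for t in terms if a.get(t, 0) > 0)
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    node_a = term_distribution(["rust borrow checker", "rust lifetimes"])
    node_b = term_distribution(["rust async runtime", "tokio rust"])
    node_c = term_distribution(["sourdough starter", "bread hydration"])

    print(js_divergence(node_a, node_b))  # small: similar interests, cluster together
    print(js_divergence(node_a, node_c))  # large: unrelated interests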


YaCy is too slow for mainstream use. I believe the indices still need to be centralized; only index-building and crawling can be distributed.


A big part of the problem I see with decentralized search is that you basically need to traverse the index along orthogonal axes to assemble search results. First you need to search word-wise to get result candidates, then sort them rank-wise to get relevant results (this also hinges upon an agreed-upon ranking of domains). That's a damn hard nut to crack for a distributed system.
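
A toy single-node sketch of those two traversals (word-wise candidate lookup, then rank-wise sorting); sharding either axis across nodes is where it gets hard:

    inverted_index = {                  # term -> set of document ids
        "distributed": {1, 2, 3},
        "search": {2, 3, 4},
    }
    rank = {1: 0.2, 2: 0.9, 3: 0.5, 4: 0.7}   # agreed-upon ranking per document

    def query(terms):
        candidates = set.intersection(*(inverted_index.get(t, set()) for t in terms))
        return sorted(candidates, key=lambda d: rank[d], reverse=True)

    print(query(["distributed", "search"]))   # [2, 3]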

Crawling is also not as resource-consuming as you might think. Sure, you can distribute it, but there isn't a huge benefit to doing so.


In fact, only half of the hydrogen originates from the POWERPASTE; the rest comes from the added water.

Indeed, it seems it's a hybrid of your fellow's tech, using magnesium both as a substrate for H2 and as a reducing agent for water: Mg + 2H2O -> Mg(OH)2(aq) + H2(g). Taken together with the hydride's release of its stored hydrogen (MgH2 -> Mg + H2), the overall reaction is MgH2 + 2H2O -> Mg(OH)2 + 2H2: half of the H2 from the paste, half from the water.

Less explosive than Na, hopefully :)

