Hacker News
Toyorm – ORM for Go (github.com/bigpigeon)
68 points by bigpigeon on April 2, 2018 | 34 comments



I've come to the conclusion (maybe wrongly?) that Go doesn't want you to do ORM this way. For a while I thought Go didn't want you to do ORM at all (which would be problematic for me, because I think writing SQL is a huge waste of time). But now I think it just doesn't want you to use ORMs that look like Django or ActiveRecord.

Instead of defining and marking up structs and setting up hooks and configuring registries of objects and stuff, just give up on dynamic ORM and switch to codegen.

I wrote a codegen ORM in a weekend that turned out much more pleasant to use than anything like gorm, not least because everything was just plain-ol'-Golang-code. All I really needed to write was the minimal Go code to dump a schema from Postgres, and then a bunch of text/template templates for all the functions I wanted. I got associations in just a couple functions.

I would have published, but it looks like 100 people had this thought before I did. sqlboiler seems like the most mature (I didn't look that carefully). But you could seriously just write your own. If I had to do it again, I'd probably codegen something on top of Squirrel.
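To make the codegen approach concrete, here is a minimal sketch of the kind of thing the commenter describes: a hypothetical Table type standing in for schema introspection, and one text/template that emits a plain Go lookup function per table. None of these names come from the commenter's actual (unpublished) code.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Table describes one table pulled from the schema. In the commenter's
// setup this would come from introspecting Postgres; this shape is
// illustrative only.
type Table struct {
	Name   string // SQL table name, e.g. "users"
	GoName string // generated Go type name, e.g. "User"
	Cols   string // comma-separated column list
}

// findByID is one per-table template; a real generator would keep one
// template per query function it wants to emit.
var findByID = template.Must(template.New("findByID").Parse(
	`func Find{{.GoName}}ByID(db *sql.DB, id int64) (*{{.GoName}}, error) {
	row := db.QueryRow("SELECT {{.Cols}} FROM {{.Name}} WHERE id = $1", id)
	var v {{.GoName}}
	// scan the selected columns into &v here...
	return &v, row.Err()
}
`))

// Generate renders the template for one table into plain Go source text,
// ready to be written to a machine-generated .go file.
func Generate(t Table) string {
	var buf bytes.Buffer
	if err := findByID.Execute(&buf, t); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(Generate(Table{Name: "users", GoName: "User", Cols: "id, name"}))
}
```

The payoff is exactly what the comment claims: the generated output is plain-ol' Go code with no reflection at runtime.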


> just give up on dynamic ORM and switch to codegen

Fun historical tidbit time!

Disclaimer first: I did not work at the Journal-World (company where Django was originally developed) at the time this story happened. I only started working there 6-ish months after the first public release of Django, but I was told the story by the folks who were there earlier. If I've messed up a detail, that's my memory being faulty.

Anyway. Back when Django was still an internal tool being split out from the application it was used to build, it had... a code generator for an ORM. You would write a description of your models, with some configuration, and run a script, which would generate a module of code for you, containing the model class, query functions, etc., all of which had helpful comments at the top of the file reminding you they were machine-generated and should not be edited by hand (instead, edit the input and re-run the generator). Then you'd import from those files to use the ORM.

They showed this around, quietly, to some prominent Python people, who apparently suggested going with not-a-code-generator. But what actually ended up happening was that the first public announcement of Django, and the subsequent 0.90 and 0.91 releases, still had the code generator, cleverly disguised. Instead of a script you explicitly invoked to generate files of code, the ORM gathered up all the model definitions from all the installed Django apps, introspected those model definitions to construct all the relevant query code on the fly, packed that query code, along with the model classes, into in-memory Python module objects, and hacked them into sys.modules to make them importable.

This was the "magic" of the Django ORM, and is why no matter where you actually defined your model classes, you always imported them from somewhere under django.models, which was the location the ORM would hack its generated-on-the-fly code into.

Django 0.95 was the release that finally properly rewrote the ORM, and that effort was the source of the "magic-removal" moniker attached to the branch that eventually became 0.95.


I definitely get the impression that codegen is considered an important technique by the Go designers, but it is not necessarily picked up on by Go adopters, because it's not as common in other languages.


Agree 100%.

It's so easy to refactor Go, and adding codegen on top is a piece of cake.

Like anything, it can be overused, but it's far better than reflect & co.

I'm glad to hear someone else shares my thoughts on SQL... there are things that are hard to express in ORM-speak, but for basic CRUD operations I much prefer writing code that generates the SQL instead of spending time writing the same SQL templates over and over.


Yes, codegen has better performance than reflect.

But I think using text/template to generate code is not really good:

it's hard to read and error-prone.

It's more like a powerful C macro.

I think using go/ast to implement C++-style templates is another way.
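The go/ast route the commenter suggests looks roughly like this: build the generated function as an AST and pretty-print it with go/printer, instead of splicing strings in a template. The function shape here is a made-up example, not anyone's actual generator.

```go
package main

import (
	"bytes"
	"fmt"
	"go/ast"
	"go/printer"
	"go/token"
)

// buildFinder constructs the AST for a stub query function for the given
// model name, then pretty-prints it. Building code as an AST trades the
// opacity of string templates for structured, well-formed construction.
func buildFinder(model string) string {
	fn := &ast.FuncDecl{
		Name: ast.NewIdent("Find" + model + "ByID"),
		Type: &ast.FuncType{
			Params: &ast.FieldList{List: []*ast.Field{{
				Names: []*ast.Ident{ast.NewIdent("id")},
				Type:  ast.NewIdent("int64"),
			}}},
			Results: &ast.FieldList{List: []*ast.Field{{
				Type: &ast.StarExpr{X: ast.NewIdent(model)},
			}}},
		},
		// Stub body; a real generator would emit the query and scan code.
		Body: &ast.BlockStmt{List: []ast.Stmt{
			&ast.ReturnStmt{Results: []ast.Expr{ast.NewIdent("nil")}},
		}},
	}
	var buf bytes.Buffer
	printer.Fprint(&buf, token.NewFileSet(), fn)
	return buf.String()
}

func main() {
	fmt.Println(buildFinder("User"))
}
```

The printer guarantees gofmt-style output, which addresses the "hard to read and error-prone" complaint, at the cost of much more verbose generator code.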


Whatever works! I just wrote the functions I wanted normally, tested them, then converted them to templates.


As far as Go ORMs go, there isn't a decent one that handles what I consider a killer feature of ORMs: relationships. I think sqlx is good enough for most, but I once inherited a project that management wanted to rewrite from Scala to Go. Some of the relationships were so complex that hand-writing the joins would have been challenging. Gorm came closest, but its "eager fetching" feature executes n+1 queries rather than joins.


Yes, yes, and yes. Unless the use case is truly as simple as CRUD with a single model (in which case I reach for gorm), the easiest road often seems to be just using sqlx and hand-writing the inserts/updates.


What about the 90% of cases that are simple CRUD, where for the other 10% you just write the usual raw SQL query?

I've chosen this path of using only sqlx on my current work project (just me) and regret it, because it's so much typing..


I also work alone on a Go service and have some complex models. I used Gorm and then broke out raw SQL for the complex selects. It works okay, though I'm still having to bolt on things like OCC and versioning to my models. I'm glad I used it though: it gave me a solid foundation to work off, and I can replace the bits that don't quite work well.


I use Masterminds' squirrel library to build up queries which cuts down on the amount of hand written SQL I need.


Toyorm supports half hand-writing: you can bind the model and use its fields in SQL, e.g. https://github.com/bigpigeon/toyorm/blob/master/examples/sim...


In that case, just copy the SQLAlchemy design.

It has a functional API and an ORM API, and has a very rich set of join strategies available.


Pop has some support for associations:

https://github.com/markbates/pop


Appears unmaintained? "This repository has been archived by the owner. It is now read-only."




> I think sqlx is good enough for most

sqlx is nice but I wish it had query logging.


It's relatively easy to write a wrapper around your sqlx DB that logs queries before sending them to the underlying sqlx DB. I agree it would be nice to be built in, but then you're kind of locked into the logging interface(s) sqlx supports.


Toyorm now supports join operations.


You mean using a join to query and output the data as one table? Otherwise I think "eager fetching" is the better way.


Lets say I have a relationship where a User has one Profile.

    type User struct {
        Id        int
        ProfileId int
        Profile   Profile
    }
    type Profile struct {
        Id int
    }
For a load like this:

    var u User
    db.Model(&User{}).Preload("Profile").Where("id = ?", 7).Take(&u)
Gorm will do something like this:

    SELECT * FROM users WHERE id=7;
    SELECT * FROM profiles WHERE id=7;
Where a join would be more desirable (only involves 1 trip to the db):

    SELECT * FROM users JOIN profiles ON profiles.id=users.profileid WHERE users.id=7
This also has a huge effect when your initial queries aren't restricted by ID. Gorm will do huge "id IN (1,2,3,4)" queries - or sometimes, if I need to filter on a relation (i.e. get users where profile.foo = bar), I end up doing the join to get that data anyway, and then the eager query loads the same data again.
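The single-round-trip query the commenter wants can be built generically for any has-one relation. A minimal sketch, with illustrative table/column names rather than anything Gorm or toyorm actually generates:

```go
package main

import "fmt"

// hasOneJoin builds the one-query form of a has-one preload: join the
// child table on the parent's foreign key instead of issuing a second
// SELECT per relation.
func hasOneJoin(parent, child, fk string) string {
	return fmt.Sprintf(
		"SELECT * FROM %s JOIN %s ON %s.id = %s.%s WHERE %s.id = $1",
		parent, child, child, parent, fk, parent)
}

func main() {
	fmt.Println(hasOneJoin("users", "profiles", "profile_id"))
}
```

Scanning the combined row is then one call against nested struct fields, e.g. row.Scan(&u.Id, &u.ProfileId, &u.Profile.Id) for the User/Profile example above.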


For "OneToOne" and "BelongTo" relations a join is a good match, but for "ManyToMany" and "OneToMany" it will generate duplicate data.

I think adding a Join method that only supports "one data" preloads is a good way to go.


Off-topic: any reason why so many Go repositories have a flat structure? Wouldn't it be nicer to organize all the *.go files?


Part of it is probably because subfolders are subpackages and relative imports don't work, and absolute imports have other issues, like causing problems with forking.


> because subfolders are subpackages and relative imports don't work

Really? Is that a bug, or by design? If the latter, what's the rationale behind it?


IIRC the tooling is not smart enough to resolve them when you import a subpackage directly.

I was working on a go project and I had to convert all of the relative imports to absolute imports for some reason, but the exact cause escapes my memory.


Import paths are nicer with a flat structure. E.g. the difference between import "github.com/whomever/orm" and import "github.com/whomever/orm/fiz/bang/whiz/resolvers".


It seems like it would be easier to talk to dbs if we had a way to work directly with tables in Go, but it's not clear how to do that without resorting to the empty interface. R is great at dealing with tables so maybe it could be looked to as an example.


It's going to be hard to unseat GORM: gorm.io/docs/query.html


When you want to write complex SQL, toyorm may work better.

E.g., find the user with name = "tom" and, in its sub-query, the blog with title = "first blog":

  brick := toy.Model(&User{}).Where("=", Offsetof(User{}.Name), "tom").
      Preload(Offsetof(User{}.Blog)).Where("=", Offsetof(Blog{}.Title), "first blog").Enter()

  brick.Find(&user)
  // raw sql
  // select id,name,data from user where name = "tom" limit 1
  // select id,title,content,user_id from blog where user_id = @user.id and title = "first blog"
Toyorm selects fields with Offsetof, which is better when you want to refactor struct field names, and you can use the main query's results in the sub-query.


Why would I use this vs. something like Gorp? How is Toyorm differentiated?


preload: data association exec/query

template exec: custom exec/query

These are my ORM's great features.


Toyorm v0.3.1-alpha adds join queries.



