This feels like a good place to get some opinions out there about an app I am working on.
Some team members are advocating a very dogmatic approach to our eventually public-facing REST API design, where each REST endpoint such as /foo returns only a list of ids; each id's URL then has to be hit separately to fully populate the list.
eg
/foo returns something like [1,2,3,4]
I then have to make 4 more HTTP requests: /foo/1, /foo/2, /foo/3 and /foo/4.
Further, if the /foo/1 model contains, for example, a user, then /foo/1 would additionally contain a userid, e.g. 101.
I would then have to request /user/101 and merge it in to ensure the foo model was fully populated.
This means a simple list page with 10 on-screen items can easily generate 1 + 10 + 10 = 21 HTTP requests at a minimum. It is adding a lot of code complexity to managing the lists for sorting/filtering/updating purposes and to managing the async requests in our AngularJS application.
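To make the fan-out concrete, here's a rough sketch of what the client ends up doing. It's TypeScript with plain fetch rather than our actual AngularJS services, and the /foo, /foo/:id and /user/:id shapes are just the ones described above:

interface Foo { id: number; userid: number; }
interface User { id: number; name: string; }

async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`);
  return res.json() as Promise<T>;
}

async function loadFooList(): Promise<Array<Foo & { user: User }>> {
  // Request 1: just the ids, e.g. [1, 2, 3, 4]
  const ids = await getJson<number[]>("/foo");

  // Requests 2..N+1: one per foo
  const foos = await Promise.all(ids.map(id => getJson<Foo>(`/foo/${id}`)));

  // Requests N+2..2N+1: one per referenced user, merged back into each foo
  return Promise.all(
    foos.map(async foo => ({ ...foo, user: await getJson<User>(`/user/${foo.userid}`) }))
  );
}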
The way I have worked before is to build a single endpoint, /foo, with a pagination offset, which returns the fully populated objects in a single HTTP request, e.g.
[
  {"fooid": 1, "user": {"id": 101, "name": "bob"}, "orders": [{"id": "A"}, {"id": "B"}]},
  {"fooid": 2, "user": {"id": 102, "name": "alison"}, "orders": [{"id": "C"}, {"id": "D"}]}
]
This approach is arguably less RESTful, but it presents fewer issues when writing the API-consuming code and is much more responsive from a latency point of view.
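For comparison, consuming that looks roughly like this; the offset/limit query parameters are just my assumed pagination interface:

interface Order { id: string; }
interface User { id: number; name: string; }
interface Foo { fooid: number; user: User; orders: Order[]; }

async function loadFooPage(offset: number, limit = 10): Promise<Foo[]> {
  const res = await fetch(`/foo?offset=${offset}&limit=${limit}`);
  if (!res.ok) throw new Error(`GET /foo failed: ${res.status}`);
  // One round trip: the list, its users and its orders all arrive together.
  return res.json() as Promise<Foo[]>;
}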
What does HN think about these contrasting approaches?
I don't see why the latter would be any less RESTful than the former; as long as the "foos" still have their canonical URLs, I don't see any constraint being bent or broken by having their representations sent in another request/response.
In fact, I'd say the latter approach is actually more RESTful, since the former requires the client to build URLs, which breaks if the server changes them.
You could add a query parameter defining which related resources you want expanded (no value meaning you just get the id/href), and pass something like ?expand=user,orders when you want them in one query (see the sketch below).
21 requests to display 10 items is indeed too much, I think.
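Roughly what I have in mind on the server side, as a sketch only; the field names, href shapes and lookup helpers are made up for illustration:

interface User { id: number; name: string; }
interface Order { id: string; }
interface FooRecord { fooid: number; userId: number; orderIds: string[]; }

function serializeFoo(
  foo: FooRecord,
  expand: Set<string>,  // e.g. GET /foo?expand=user,orders -> new Set(["user", "orders"])
  lookups: { user: (id: number) => User; orders: (ids: string[]) => Order[] }
) {
  return {
    fooid: foo.fooid,
    user: expand.has("user")
      ? lookups.user(foo.userId)               // embedded representation
      : { href: `/user/${foo.userId}` },       // just the canonical link
    orders: expand.has("orders")
      ? lookups.orders(foo.orderIds)
      : foo.orderIds.map(id => ({ href: `/foo/${foo.fooid}/orders/${id}` })),
  };
}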
Personally I don't see why your approach makes it less RESTful. In fact, doing 21 requests instead of 1 seems like an extremely chatty application and reminds me of SELECT N+1 database problems.