I know what you're thinking: great, another MVC library for JavaScript. But wait: Yehuda knows what he's doing, and it will be interesting to see what Amber includes.
As I've discussed before [1], the current discussion about MVC frameworks in JavaScript is very superficial, and meaningful API differences between, e.g. Spine and Backbone, are not sufficiently understood by people who are picking between them.
Yehuda has a lot of experience thinking about framework APIs from the standpoint of modularity, performance, and developer convenience.
He is absolutely right to emphasize that calculated properties and managed attributes on the model layer are the key to easily building JavaScript interfaces. The difference between nice MVC code and poor MVC code or jQuery soup is the idea that in each method data is only flowing one direction: nowhere are you manually updating both a UI and a data object. Two-way data flows make up the majority of the code in non-MVC jQuery-oriented code. In contrast, a robust model layer with lots of events is the best way to get code reuse and composition.
I am new to web development and have just started using/learning JavaScript, so my knowledge is limited, to say the least. But isn't what you are saying easily achievable through Backbone.js? (By calling set on a model and triggering an event which the view catches and uses to update itself.)
Just to be clear -- the Backbone.js "party line" is that automatic 2-way binding between views and models is usually undesirable.
The model contains the "truth" -- the current state of the app. Views should be bound to models and automatically update when the model changes.
But when the view changes, the reverse is not the case. You don't want every checkbox click, radio button twiddle, and keystroke to be causing your model to change ... thereby emitting changes which may cause other pieces of your app to re-render, changes which may cause Ajax requests to persist the model state to the server, and so on.
Instead, most apps have a "save" button for some changes, a mouse click for others, or drag and drop, or a gesture, or an "undo" link.
It's much more convenient and flexible to allow your app to react to DOM events as it needs to, instead of having specific model attributes hard-bound to concrete DOM elements. ... that said, if the latter is your cup of tea, there's a plugin for that.
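To make that concrete, here's a minimal sketch of the usual Backbone idiom (the Todo-ish names and the template function are placeholders; jQuery and Underscore are assumed): the view re-renders whenever the model changes, but the model only changes when an explicit handler decides it should.

var TodoView = Backbone.View.extend({
  events: {
    'click .save': 'save'                    // DOM -> model only when the app decides
  },
  initialize: function() {
    _.bindAll(this, 'render');
    this.model.bind('change', this.render);  // model -> view, automatically
  },
  render: function() {
    $(this.el).html(this.template(this.model.toJSON()));
    return this;
  },
  save: function() {
    this.model.save({title: this.$('input.title').val()});
  }
});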
It took me a while to figure it out, but the model concept in Backbone doesn't completely correspond to the model concept in Amber.
Backbone, at least as I practiced it, has a model that very closely corresponds to what goes over the wire. You observe stuff on it, you react to events, the observers update the model, and as a result the model is sent over Ajax.
Amber, again as I practice it, has a model that corresponds to the complete application state of a particular component of the page. If you have a checkbox that doesn't 1-1 match with something you'd send to your server, you create an attribute on your model purely to track the state of that checkbox and wire up observers to update the appropriate "real" attributes you're sending to the server. This is not immediately apparent when looking at the two frameworks but I find the difference to be crucial.
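A rough sketch of that pattern, assuming Amber's SC.Object and observers (the attribute names here are invented for illustration):

var Task = SC.Object.extend({
  // canonical attribute that actually goes to the server
  priority: 'normal',

  // view-only state: tracks a checkbox that has no server-side field
  isUrgentChecked: false,

  // observer wires the checkbox state back to the "real" attribute
  urgentCheckedDidChange: function() {
    this.set('priority', this.get('isUrgentChecked') ? 'high' : 'normal');
  }.observes('isUrgentChecked')
});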
I started using Backbone the day you open-sourced it, since I had a personal version of more or less the same thing that was twice the code and less elegant. I found that it was fantastic as long as you could do idempotent DOM updates (i.e. innerHTML), but I got lost as soon as I wanted to start adding subcomponents that needed to be initialized, and I was back to dealing with a lot of event twiddling and DOM manipulation to update the view. I believe the split in state between the DOM and the application model causes this complexity, which Amber avoids by keeping all client-side application state in one place. I think I could apply the same ideas in Backbone, but I haven't tried since switching over.
Neat. Do you have any open-source examples of this simpler "client side application state in one place" up on GitHub? I'd love to take a peek.
I think that I'd tend to agree with your interpretation -- you write:
> [In Amber] If you have a checkbox that doesn't 1-1
> match with something you'd send to your server, you
> create an attribute on your model purely to track the
> state of that checkbox ...
That sort of approach would be against the Backbone "party line". A large part of the point is to have your canonical state for a given resource in one place: the model. If you now have both a "model" and a "view-model" for the same resource, one of which has been adapted to be more checkbox-y just so that its attributes correspond more closely with the DOM ... it would be a shame.
The allAreDone property, for example. It's not something you'd have in the canonical data you'd send to/from the server, but it exists as a property so it can be observed and so the template bindings can manipulate it.
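From memory, that kind of property is implemented roughly like this, as a computed property with both get and set behavior (the class and method names are recalled from the Todos demo and may differ):

Todos.todosController = SC.ArrayProxy.create({
  content: [],

  allAreDone: function(key, value) {
    if (value !== undefined) {
      // setting it (from the "mark all as done" checkbox) updates every todo
      this.setEach('isDone', value);
      return value;
    } else {
      // getting it derives the answer from the canonical todo records
      return !!this.get('length') && this.everyProperty('isDone', true);
    }
  }.property('@each.isDone')
});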
> If you now have both a "model" and a "view-model" for the same resource, one of which has been adapted to be more checkbox-y, just so that its attributes correspond more closely with the DOM ... it would be a shame.
The model is more checkbox-y, but the checkbox-y bits are initialized from the canonical bits, and your sync equivalent has to strip them back off. This is annoying, but less annoying than DOM twiddling.
I would like to strongly agree with your statement about idempotent DOM updates vs. subcomponents. This question (re-render the entire unit, or manually sync model changes with individual jQuery DOM changes) is one of the first things I try to think through when using JS MVC patterns. It is a huge pain when you have nested controls for nested models.
This is dead on. However, I'd say it's a client-code implementation detail that the object the view is bound to is the same one other parts of the app are bound to. Two-way bindings are about marshalling data out of the DOM into JS land, and unfortunately developers seem to pick the wrong JS-land object to store the view data in: the model. The model is best viewed as the client's best understanding of what is on the server: the canonical state of the modeled object in the world. Changing the value of inputs doesn't change the state of the object in the world until you press the save button.
You do still want a canonical place for view bindings to bind to, so that different views of the same object are guaranteed to stay in sync (think of the slug in an index as well as the detail view in a show action), and I'd say it's not worth sacrificing this guarantee just because we can't structure our code around binding to non-canonical representations. This suggests the edit views should be bound to something which knows how to apply itself to the model, so that upon saving, the canonical place remains canonical and is updated with the new data.
In Batman we really want to make this other object a reality, and we've been scheming about "draft" versions of models for quite a while, but we still haven't quite nailed down how to do it when things like associated objects enter the mix.
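A minimal sketch of the shape of that idea in plain JavaScript (not Batman's actual API; it assumes a Backbone-style model with attributes, set, and save, plus Underscore's clone): the edit view binds to the draft, and only commit() touches the canonical model.

function Draft(model) {
  this.model = model;
  this.attributes = _.clone(model.attributes);   // working copy the edit view binds to
}

Draft.prototype.set = function(key, value) {
  this.attributes[key] = value;                  // edits accumulate here, not on the model
};

Draft.prototype.commit = function() {
  this.model.set(this.attributes);               // "save": push the draft into the canonical model
  this.model.save();
};

Draft.prototype.discard = function() {
  this.attributes = _.clone(this.model.attributes);
};

var draft = new Draft(post);
draft.set('title', 'New title');   // the canonical post is untouched
draft.commit();                    // now it changes, once, on save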
> Changing the value of inputs doesn't change the state of the object in the world until you press the save button
This is key. If the UI is linked, checkbox by checkbox, field by field, to the model, then we've created the brittle situation where the user is, in effect, directly manipulating the "business objects". Such a scheme ends up imposing (at least) implicit design constraints when modeling the app (steering the developer away from the best or most logical decisions when they would make the rigidly linked view layer unwieldy), and similarly shackling the UX design to the model setup. Which makes for a worse app. These should be related but separate concerns; a smart user experience will require some logic, marshalling, and abstraction.
I think what you've pointed out is true. Ever since I learned Backbone, I've felt like I had to decide between fast views with Knockout or scalable logic with Backbone. When the model-bindings plugin came into my life, I was super excited to potentially have the best of both worlds.
The downsides you suggested were immediately obvious to me. In my simple case it was an append interface, not an edit interface, so I stubbornly worked around it by creating orphaned models that are bound to my user-editable view and then get added into the app-focused collection once submitted.
> thereby emitting changes which may cause other pieces of your app to re-render, changes which may cause Ajax requests to persist the model state to the server, and so on.
Hmm, really that sounds more like poor application design than anything else.
I mean, to me, in MVC, binding the View to the Model both ways is critical. The Model should be smart enough to track changes and only persist them on specific request.
And your app code should say "hey Model, I am going to query the server now - are you saved?" or "hey, I have this new data timestamped X; do you have something more recent?".
... sure, it depends. But there are even more reasons than over-propagation of change events to avoid strict two-way model/view binding.
One of them is that the data often isn't the same. The view displays a computed value from the model, and the model should receive the "real" value upon a change. For example:
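Say the view is a select box whose visible text differs from the value the model should store, something like:

<option value="article">NYT Article</option>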
When the DOM element is chosen, the model's value should be "article", not "NYT Article" ... and perhaps the change should happen immediately, or perhaps only after a "Save" button is clicked.
With concrete two-way binding, you may spend more time configuring all of your DOM elements, and configuring when your models should be considered "changed", and when they shouldn't -- than you gain by tying them together in the first place.
These are simple changes to make under Knockout.js and I assume Amber.js too. It's easy to see how to do this from the Knockout.js tutorial. A model/view binding system won't slow these kinds of customizations down.
But that situation is already solved in the MVC world, with filters, callbacks, and validation.
Having just finished a pretty complex "application" of this sort (which cannibalized Knockout to get what I needed...) my view is that most stuff does not need any sort of filtering. Or if it does (and the big example I could give here would be timestamps) a lot of the filtering can be standardised.
I have found you can get the best of both worlds by building a smarter model layer. For example, if your models are inherently versioned, you can just fork off a new version and then either commit it or discard it.
Meanwhile the view gets to have really rich two-way interaction with the model, which is a big win for keeping presentation logic and model logic separate.
As an example: whenever the value of one field alters the available choices for another field, it's better to capture that logic in the model. Then you can write a view that focuses only on how to display choices, not which choices need to be displayed.
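For instance, something like this (a sketch using Amber-style computed properties; the field names are invented):

var Address = SC.Object.extend({
  country: 'US',

  // the model owns the "which choices are valid" logic...
  stateChoices: function() {
    return this.get('country') === 'US' ? ['CA', 'NY', 'TX'] : ['ON', 'QC', 'BC'];
  }.property('country')
});

// ...so the view only has to render whatever stateChoices currently contains.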
In SproutCore 1.x you have SC.NestedStore, which allows a transaction-like experience where the checkbox in your case can be wired to a property on a model instance (or controller representing the same) that is derived from this nested store. This way your changes are not to the main instances of your model, but to alternate instances in a nested transaction-like state.
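In code, that looks roughly like chaining a nested store off the main one and committing or discarding it as a unit (a SproutCore 1.x sketch; MyApp.Task and the record id are placeholders):

var nested = MyApp.store.chain();          // transaction-like nested store
var draft  = nested.find(MyApp.Task, 1);   // alternate instance of the same record

draft.set('isUrgent', true);               // the checkbox is wired to this instance

// later, depending on what the user decides:
nested.commitChanges();                    // fold the edits back into the main store
// ...or...
nested.discardChanges();                   // throw them away; the main store never changed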
I agree with the same philosophy (or the very least have the ability to control the symmetry of the binding). I have been working on a binding library that allows you to have control over the binding (one-way, two-way) and which objects can be observed. It is simply a framework for defining hooks which provide APIs for defining channels (bindings) between various objects. There are hooks for jQuery, Zepto, Backbone, and plain objects: http://bruth.github.com/synapse/docs/
Absolutely -- the bit that we're discussing is one of the areas where SC 2.0 / Amber diverges, and more importantly is the heart of Yehuda's claim that "if you’re a Backbone fan, I think you’ll love how little code you need to write with Amber".
Personally, I see no problem with two-way bindings to views. If your app is well architected it should be able to handle those changes. That said, it's not difficult to set up a buffer to store changes until you "save", if that's what's desired.
Sure, but if I only want certain pieces of the view to update when their corresponding model value changes, as opposed to re-rendering the whole view, I have to build an update method that does so. Not horrible in the scheme of things, but awfully nice not to have to build manually. Is Amber.js essentially re-rendering views in their entirety, or is it more precise with its binding updates?
Another difference, I suppose, is that binding class names and attributes to your model may require several lines of jQuery to manipulate, whereas I believe in Amber.js it's done by naming convention and an options object (to use different class names).
I am currently developing a library [0] that separates `what you want to do` from `what should actually happen`.
So if you have a model with some changing attributes, you can write your view like this:
class View extends Backbone.View
  template: (api) ->
    new dynamictemplate.Template schema: 'html', ->
      @div class: 'item', ->
        @text 'default'
        api.bind 'type', (type) =>
          @attr 'class', type            # change the class to the new type
        api.bind 'content', (content) =>
          @text content
      @end()

  render: ->
    api = {}
    _.extend(api, Backbone.Events)
    @model.bind 'change:type', (model, type) ->
      api.trigger 'type', type
    @model.bind 'change:content', (model, content) ->
      api.trigger 'content', content
    tpl = @template(api)
    tpl.on 'end', =>                     # when the rendering is done
      @el = tpl.jquery                   # freshly built DOM element wrapped by jQuery
      $('body').append(@el)              # add it to the DOM
If you want to use this, you only have to run render once.
Now every time the model changes, the event handlers in the template get triggered and update it; what actually happens is that jQuery sets the class attribute or the text of the div.
If you don't like writing _all_ your templates as functions, I'm currently writing a tool for this library where you can use an already-working HTML file to mask the functions, so you only have to write a subset of the resulting design while getting the full output:
Now if you emit 'content' on 'api', the text of <div class='item'> gets updated by jQuery. Note that you don't have to write the class attribute, because the tag name already matches, but the result will still have the class='item' attribute.
I really don't know if this solves the actual problem, but I would love to see others thinking about how to solve it (or help me get on with this lib :P).
I already have some ideas about writing a debug tool where every HTML tag you write as a function gets a special border which highlights when you change its properties, like setting text and attributes, or when you add new tags (look at the demo where I add tags after the template is rendered).
PS: I hate using the data-* attribute to hook models to the DOM, because first, it's totally ugly, and second, the designer doesn't want to touch it (and shouldn't, because it is _not_ part of the design).
I guess I'm excited that a community figure is forking/creating/updating a new-ish JavaScript framework and I should focus on AmberJS in my comment, but...
I can't help but feeling that the oh-so-confusing situation with SproutCore just got more confusing. I, like many others, tried to start using SproutCore and found it to be a poorly documented jumble of code. Fortunately, SproutCore 2.0 was going to fix a lot of that. Only now it isn't. Do we have two half-baked, related frameworks? Are they kinda ports of each other? Is SproutCore 1.x now deprecated? And how confusing will it be when SproutCore 1.x upgrades to a not-AmberJS SproutCore 2.x?
This smells of politics/investor-meddling/internal-disagreements/something to me: a fresh, innovative, though derivative, take on an existing codebase is forked out of the original company and moved under another company (Tilde). Something seems amiss.
"Fortunately, SproutCore 2.0 was going to fix a lot of that. Only now it isn't. Do we have two half-baked, related frameworks? Are they kinda ports of each other? Is SproutCore 1.x now deprecated? And how confusing will it be when SproutCore 1.x upgrades to a not-AmberJS SproutCore 2.x?"
SproutCore 2.0 becomes Amber. We'll be moving the code into the amberjs organization today. The SproutCore folks, who are now focusing on native-style applications, will be carrying the torch on the (not deprecated) SproutCore 1.x.
A little off-topic, but, how are developers staying up to date in the JavaScript community?
I've been an observer of the JavaScript ecosystem and it is changing so rapidly, particularly in the MVC space (e.g. SproutCore 1.x, SproutCore 2.x, Backbone.js, Spine.js, Knockout.js, etc.).
It's so easy to get stuck in analysis paralysis. Is there a discussion list or a website that can help make some decisions for my next hack project?
I'm coming from a heavy backend background and I think the JavaScript MVC space warrants a hack-day project :)
Most don't. Just keeping up with new DOM additions is difficult enough. Instead you pick a project and follow it until you run into a problem area it can't solve, and then move on to another.
I'm not sure why you are getting downvoted. Although I don't feel you necessarily need to share it on HN, the best way is the build some simple apps and find out for yourself what the libraries are all about.
It's not downvoted right now, so I think most people get what I'm saying. I think it's important to get feedback from the community, but I agree you don't need to show everything to HN, as most random projects won't get a lot of attention here-- and I think you've picked out my real point that no blog is a substitute for getting your hands dirty.
Is it still in beta? Are the core objects going to be built up further (to their sc1 levels) or left sparse as they are now? Is it going to get a full set of guides and object documentation?
I recently tried out SC2 (now Amber, I suppose) and I had a "Wow this is cool!" moment immediately followed by a "Wow I can't do shit" moment due to lack of documentation combined with the steep learning curve. I had to put the project on hold; I hope the framework gets some serious love in the next couple months.
Maybe it's just me, but I find this idiom incredibly gross:
var prop = model.get('prop')
It seems much more elegant to write:
var prop = model.prop()
You can do the computed properties and binding thing the same way, it's really just a small difference in the model API.
But really, what I'd love to see is more model-agnostic tools. Why do I have to buy into SproutCore or Backbone's model implementation just to use the other features?
The advantage of using accessors is that your implementation can switch between static properties and computed properties and the consumers of your API don't have to know which is which. This is called the uniform access principle and ends up being extremely useful.
For example, imagine you have a Store object that holds the tax rate:
Store = SC.Object.create({
  taxRate: 0.0895
});
Since this is a static value, I don't technically have to use an accessor. But imagine my state implements a tax holiday, and so for one day per year, the tax rate is different. In Amber, this is a simple change:
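Presumably something along these lines, with taxRate turned into a computed property (isTaxHoliday() is an invented helper for illustration):

function isTaxHoliday() {              // invented helper: one day per year the rate drops
  var today = new Date();
  return today.getMonth() === 7 && today.getDate() === 1;
}

Store = SC.Object.create({
  taxRate: function() {
    return isTaxHoliday() ? 0 : 0.0895;
  }.property()
});

Callers keep writing Store.get('taxRate') either way, which is exactly the transparency described below.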
Instead of trawling my codebase for references to this property and changing them to method invocations, I know the change will be transparent, because everything goes through get(). It's this transparency that leads to very maintainable web apps.
Some people object to the extra method invocation, but I think the expressiveness it gets you is worth it. As soon as proxies land in JavaScript, we'll add support for them so you can use the dot notation and still get the same functionality.
Sure, but then you have to create a function for each property, even if it's a number or a string. Those bytes add up significantly in large applications.
Additionally, specifying the property as a string allows you to implement "unknown property" handlers, which is another extremely powerful idiom.
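For example (a sketch; SproutCore/Amber objects can define an unknownProperty hook that get() falls back to when no property of that name is defined):

var translations = SC.Object.create({
  dictionary: { hello: 'Bonjour', goodbye: 'Au revoir' },

  // called for any key that isn't an actual property on the object
  unknownProperty: function(key) {
    return this.get('dictionary')[key] || key;
  }
});

translations.get('hello');   // "Bonjour", even though no 'hello' property was defined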
You don't. Plenty of folks use Backbone's views and routers in isolation, without using models or collections -- especially for apps/visualizations where the data is static, and doesn't change.
Exciting! It really does make sense to rebrand SproutCore 2.0. As a (light) user of both 1.x and 2.0, I completely understand the change. So much has been simplified -- the barrier to entry has dropped considerably, and the boilerplate factor is basically gone. Yehuda and Tom have done a great job with SproutCore, and I'm excited to use (continue using) Amber in the future.
"If you played with SproutCore and liked the concepts but felt like it was too heavy, give Amber a try. And if you’re a Backbone fan, I think you’ll love how little code you need to write with Amber."
Sold! You read my mind there, Yehuda.
I don't exactly know why, but I'm excited by this. Heart beating noticeably faster excited. I'll be playing around with this on the weekend.
"Blossom will use the GPLv3 license, with commercial licensing similar to the Sencha model."
I don't have a clear idea about what's going on in the SproutCore community, but I don't think this has universal support.
One of the reasons we rebranded to Amber is that our codebase is inspired by SproutCore 1, but it is a complete rewrite, targeting a smaller file size and web-centric development.
All of the fragments of SproutCore 1.x target native-style applications with a ton of JavaScript, while Amber targets ambitious web-style applications using HTML and CSS for the presentation layer. Amber apps don't have much in common with SproutCore 1 (or "blossom") apps, so a clean naming break felt like a clarifying thing to do.
It's ironic that both SproutCore and ExtJS fell prey to version fragmentation in the community because they tried to do too much too fast. I'm stuck on ExtJS 3.4, waiting for the 4.x line to mature to the point where it isn't half as slow and twice as buggy as 3.x.
The biggest problem for me with any JS framework I've tried in the past is that it doesn't give an easy way to divide code into multiple code files. For me personally, I find any single file/module/class that is more than a few hundred LOC to be difficult to read. For this reason I've usually rolled my own frameworks. Does Amber.js address this?
Cappuccino's @import does this really well: it is both asynchronous and blocking. In other words, having three @imports one after the other will all load in parallel (without having to explicitly request it, even if they are separated by other lines of code), but the code itself will not execute until the imports are done:
@import "one.js" // loaded in parallel to the other two
@import "two.js" // loaded in parallel to the other two
code code code // this is run only after one.js and two.js (but not three.js) have been loaded and run
@import "three.js" // loaded in parallel to the other two
Very nice! Just a note: that seems to break some things that Ruby developers take for granted, like importing stuff conditionally, or importing a bunch of files in a loop. Or is there some other way to do that?
There are JavaScript APIs in the loader you can call directly, but they will be asynchronously loaded and executed (so you'll need to pass in a callback)
Basically the only way to get this asynchronously loading / synchronously executing behavior is to statically analyze your "root" module for dependencies and load them (and their dependencies, etc). Once they're all loaded you can begin execution and synchronously execute every module as they're imported.
FYI CommonJS (not AMD) loaders for the browser do the same thing to avoid synchronous XHRs that will cause the UI to hang.
You can still import things conditionally (it's treated as a statement); the only real restriction is that the filename is expected to be a static string, so the for-loop thing wouldn't work. However, the "guts" of the import statement are also provided to you as a function that you can just call with anything, so you could still do for (;blah;) objj_import(filenames[i], callback) for these kinds of cases.
Actually I'd rather not see the JS framework itself trying to solve this problem. The Rails asset pipeline (based on Sprockets) is an example of a good solution to this problem that is completely agnostic to your JS framework.
I'm serving a substantial Amber app this way and it works great. I'm taking it a step further by writing most of my code in CoffeeScript, and I keep all my Handlebars templates in separate files as well. Sprockets just magically combines it all.
You could also look at YUI. YUI3 is built from the bottom up with that mentality: keep a file for each class, merge them in the build stage into one module file and serve all modules combined automatically.
Take a look at MooTools. It's built around a module/class based philosophy that makes it easy to not only configure the framework itself into what you need (the framework is built around several dozen small files), but to do the same thing to your code as well.
bpm (http://www.getbpm.org/) works with Amber and compiles multiple JavaScript, SCSS, and CoffeeScript files into a single JS and CSS file. It also has a mode where it does this in realtime for development use.
brunch (http://brunch.io) provides this for backbone.js / eco / stylus - it merges multiple source files into a single deployable js (and css) file. Or you could use spine, which has this out of the box.
Good news. I wonder, though, what will happen to SproutCore? My former company built a huge application based on 1.4 (at the time), and now they're kinda left out in the rain. Does anybody know when, or if, the SproutCore 1.x changes that Apple made for iCloud/MobileMe will make it back into SC 1.x?
There are three contingents right now. Facebook (SC2/Amber) wants to make rich web pages. Tilde (SC 1.7+) wants to continue the Strobe direction of deploying client-side web applications in different contexts. Erich Ocean (1.4 fork) wants to go back to his original philosophy of creating applications that can be deployed natively or on the web. The primary break Erich Ocean wants to make is to drop templates and construct interfaces in the desktop/Cocoa style, not a mixed HTML-templating one.
So there are still core folks working on 1.x, but in different contexts.
It is, but it's being fixed. I expect the situation to be __very__ different by JSConf 2012.
It's been clear where SproutCore has needed to go since 2009, but there hasn't been the support to do that, due to the various personalities involved. Those people have moved on, and now SproutCore can do what it's good at: building fast, desktop-style applications that run in a web browser.
Things haven't looked this positive for SproutCore in two years. I'm pumped.
If the people developing Amber are reading this (and I'm sure they are), I hope they take away the message that the community wants better documentation, examples, tutorials, and use cases. It's not features or abstraction X, Y, or Z; it's support!
I tried using SC2 (a.k.a. Amber); while it has a lot of potential and I love its powerful binding mechanism, it simply lacked documentation and support. I hope Amber tackles this issue.
Why anybody thought it was a good idea to replicate desktop applications verbatim on the web, I don't know; the era of composing GUIs from complex, fully featured widgets is dying.
Declarative presentation in HTML/CSS coupled to JS (or better, CoffeeScript) is the right layer of abstraction for building GUIs for most applications in the foreseeable future.
Amber.js looks very interesting; I hope it paves the way for a new wave of client frameworks that concentrate on state sync between client/server (and multiple clients) with some simple binding to the presentation.
HTML was made with documents in mind. Ever since the web's inception HTML has been pushed beyond its original design specs and now people are writing applications in it (actually they have been since CGI). That interfaces continue to evolve and even replicate "native" interfaces shouldn't be a surprise.
As far as the "right" abstraction on the web... this too is evolving and highly dependent on your application. Are you curating a collection of hyperlinked documents? Or are you writing a drawing application?
If your answer is closer to the latter than the former, then the analogy you're looking for is something more like Postscript, where HTML and Javascript serve as a display language for an output device (a browser), driven by an underlying application.
Can someone explain how far along this is on the spectrum of creating a constraint-based UI toolkit like OpenLaszlo or (I think) Flex? Constraint based programming is amazing for the view tier, and I've been waiting to see if someone comes along and builds one comparable to the Flash based ones that have been around for years.
Will it be possible to incorporate something like this into Node.js? I know it can be done with something like Backbone. Does it provide any distinct advantages over Backbone that would make it worth investing time in this library?
A few months ago I showed a demo at a meetup of using Amber in node. The main benefit of Amber in a node environment is the way it abstracts asynchronous behavior through bindings and observers.
I could easily imagine an Amber object representing a File in node, for instance, allowing other objects to bind to properties like mtime, etc.
In my demo, I used socket.io to update an HTML page whenever one of those properties changed.
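In the spirit of that demo, the node side looks roughly like this (names are invented; it assumes Amber's addObserver and a running socket.io server, io, are available):

var file = SC.Object.create({ path: '/tmp/report.txt', mtime: null });

// whenever the observed property changes, push the new value to connected browsers
file.addObserver('mtime', function() {
  io.sockets.emit('fileChanged', {
    path:  file.get('path'),
    mtime: file.get('mtime')
  });
});

// elsewhere, e.g. inside an fs.watch callback:
file.set('mtime', new Date());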
Is there a way to use SproutCore without the command line programs like sc-init or sc-server, and without it forcing a directory structure on you?
I like the idea of SproutCore, but I really just want to do <script src="sproutcore.js"> and have it get out of my way from there. YUI seems to be able to do this and it seems even larger than SproutCore, recently including MVC in its "App Framework". Of course, you need to bring your own templating, but even then that's just another JavaScript include.
Or am I missing the point of SproutCore? What is it adding that, say, YUI or Dojo isn't, that it requires so many programs?
It seems like new JavaScript frameworks are popping up every week.
I think Google with GWT (and, to a lesser extent, Eclipse with RAP) had the right idea: come up with technologies that abstract away the need to touch JavaScript directly. Just Java (or I suppose C# could work) on both the client and the server means the developers are more in sync.
I'm more on the back end, so my perspective is from the sidelines; please don't read too much into it.
I couldn't help but think about AngularJS while reading this article. (http://angularjs.org/)
I am not sure why it isn't as popular as other JavaScript frameworks like Backbone.
AngularJS is ridiculously good so I'm not sure either.
For those who are unfamiliar: Angular examples are about 1/4 the size of backbone equivalents. I'm working on an angular version of the most recent peepcode backbone cast. It's about 1/5 the code size, and available here: https://github.com/ludicast/angular-peepcode-todo
Both Amber and Backbone fall into what I call the KVO frameworks category (Angular, JavaScriptMVC, Knockout, Batman, others). Both use jQuery for DOM manipulation and revolve around the idea of having a canonical model as the center of your app. As you change the model, it fires events that you use to update the DOM. The difference is philosophy.
Backbone provides a basic but complete set of tools for observing model changes combined with a set of utilities that are useful but not prescriptive to make wiring up DOM events, URL handling, etc straightforward. It works with what's in jQuery/the browser. All together, it provides the organization most js apps are sorely lacking but doesn't dramatically reduce LoC over what you could do with well factored jQuery and underscore. It's nice enough that the YUI team basically adopted it wholesale for the App framework in 3.5.
Amber is all about binding. You can not only observe properties but bind them to other properties bidirectionally, so that changing one causes the change to propagate through all bound values, including things like values that are arrays. This extends to the Handlebars templating (which was written for Amber), where you can bidirectionally bind, for example, a boolean on your model to a checkbox and to a class on a div, so that checking the checkbox toggles the class without anything in your app directly manipulating the DOM. The implementation is an attribute plus two object path strings in the template. It's not a panacea, but since DOM/event/state manipulation is ~60-70% of most JS apps I've worked with, you can achieve dramatic code-size reductions.
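Roughly, in the Handlebars of that era (a sketch from memory; the exact helper and view names are an assumption and may differ):

{{view SC.Checkbox valueBinding="task.isDone"}}

<div {{bindAttr class="task.isDone"}}>
  {{task.title}}
</div>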
The cost is that Amber is 10x the size of Backbone, the templating language is tied to the framework, and there's more overhead in understanding the concepts, how they fit together, and how to apply them to your code. When I talk to people who are writing jQuery spaghetti, I steer them towards Backbone for the simplicity and superior docs but mention that I use Amber for my own projects.
[1] http://news.ycombinator.com/item?id=3248552