I thought of that (it's rather easy to do in Firefox), but the content would be lost in the noise of all the little search forms and automated XHR requests. I think it makes more sense to write a scraper that downloads your data from the website after it's submitted, which is essentially what the Locker Project[1] is trying to do.
Using HTML classes, IDs, and other identifying attributes, I'd think it would be fairly easy to set up intelligent filtering that differentiates between types of input fields. Just discriminating between input and textarea probably gets you 90% of the way.
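To illustrate, here's a minimal sketch of that heuristic using Python's stdlib `html.parser`. It treats textareas as content worth preserving and flags any password input as a sign of a log-in/sign-up form (naively, across the whole document rather than per form); the class name and the sample markup are invented for the example.

```python
from html.parser import HTMLParser

class FieldClassifier(HTMLParser):
    """Heuristic sketch: textareas likely hold user-written content worth
    saving; a password input suggests a log-in/sign-up form to skip."""

    def __init__(self):
        super().__init__()
        self.textareas = []      # name/id of each textarea seen
        self.has_password = False  # naive: flags the whole document
        self._current = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "textarea":
            self._current = a.get("id") or a.get("name") or "?"
        elif tag == "input" and a.get("type") == "password":
            self.has_password = True

    def handle_endtag(self, tag):
        if tag == "textarea" and self._current is not None:
            self.textareas.append(self._current)
            self._current = None

# Hypothetical sample markup: a search form, a log-in form, a comment form.
sample = """
<form><input type="text" name="q"></form>
<form><input type="text" name="user"><input type="password" name="pw"></form>
<form><textarea name="comment"></textarea></form>
"""
p = FieldClassifier()
p.feed(sample)
print(p.textareas)     # fields worth preserving
print(p.has_password)  # document contains a log-in style form
```

A real version would track the password flag per `<form>` element and also weigh class/ID hints (e.g. "search", "comment"), but even this crude split filters out most of the noise.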
Oh, no, I just meant that most forms with a password field are either log-in or sign-up forms, which I wouldn't want to bother preserving locally.