I can't help but wonder how many of the new web metaphors revolve around sharing state and replicating GUI features that were largely solved in the mid-1980s. I just spent days implementing some JWT and OAuth code that largely reproduces what session cookies did in the 90s (while fixing security issues, I realize). I've spent weeks implementing AngularJS code that basically does what a software transactional memory (STM) does, only with ugly syntax. I've spent my career learning how to get an environment set up and transpile code (on many platforms, not just the web). I may be getting very proficient at these things, but in the pit of my stomach I know it's all going the way of the dodo and that I am a glorified punchcard operator, because I've been through code deprecation before and know it's inevitable.
I think the solution to all of this is to get back to declarative programming and idempotence, which would largely eliminate the trap of front-end scripting we've fallen into. We need to switch from the client/server metaphor to trusted and untrusted computing, and move to a p2p topology. This would be better served by the stdin/stdout/stderr stream processing of UNIX, where each component does one thing well, without side effects, in a hierarchy of concerns. Language would be largely irrelevant; we'd focus more on the data, probably JSON streams processed with jq or something like it. Views would just be thin wrappers over black boxes handling business logic. Right now all of the glory is going to top-down development and processes like Agile/Scrum, but if we look to history, most of the interesting innovations happened in bottom-up, egalitarian R&D. Now get off my lawn.
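To make the pipeline idea concrete, here is a minimal sketch in JavaScript (the data and function names are hypothetical, chosen just for illustration): each stage is a pure, side-effect-free transform over a newline-delimited JSON stream, the kind of thing "jq -c" emits, with the view left as a thin wrapper on top.

```javascript
// Hypothetical illustration of the "small composable filters" idea:
// each stage is a pure transform over a stream of JSON objects
// (one object per line), so stages can be piped together freely.

// Pure transform: keep expensive items and project just two fields.
// Running it over its own output yields the same result (idempotence).
function filterExpensive(item, threshold = 10) {
  return item.price > threshold
    ? { name: item.name, price: item.price }
    : null;
}

// Simulate a newline-delimited JSON stream (what `jq -c` would emit).
const stream = [
  '{"name":"widget","price":25,"sku":"A1"}',
  '{"name":"gadget","price":5,"sku":"B2"}',
];

// The "view" is just the filtered, projected data, ready to render.
const view = stream
  .map((line) => filterExpensive(JSON.parse(line)))
  .filter((item) => item !== null);

console.log(JSON.stringify(view));
```

In a real deployment each stage would read stdin and write stdout instead of an in-memory array, which is exactly what makes the language of each stage irrelevant.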
The Web must not stand still. It is the only open platform we have. The web (and browsers) becoming a relic from the past is too great a risk for humanity.
Keep pushing the Web forward till the next WhatsApp or Snapchat is a Web app. Till we have decentralized app hosting and file sharing. Till we have native execution speed. Till we can distribute or install fully functional applications outside the arbitrariness of Apple and Google store policies.
Web apps do not intrinsically force users to upload anything.
Client-side web apps can operate entirely offline and store data only on the client. They can also be configured for app-like operation by including a Web App Manifest [1], created specifically for this purpose. Until the Web App Manifest gets widespread platform support (which may not ever happen since both Apple and Google are heavily incentivized to keep app developers locked-in to their ecosystems), you can use manifoldJS [2] to polyfill for platforms that do not support it using Cordova.
EDIT: So apparently Chrome for Android already supports the Web App Manifest [3], but to distribute for the Play Store you still need a .apk package, which is probably why manifoldJS still polyfills it. In contrast, Firefox OS allows submitting apps to the Market from your web app's URL directly, as long as it has a manifest.
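For concreteness, a Web App Manifest is just a small JSON file referenced from the page with a link tag (rel="manifest"). The member names below are from the spec; the values are placeholders:

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "icons": [
    {
      "src": "icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    }
  ]
}
```

The "display": "standalone" member is what gives the app-like, chromeless presentation once the manifest is supported or polyfilled.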
All the personal info you send just connecting to a "web" or "mobile" app is of utmost importance to the company hosting the web site/app. "Operating" offline just means that data is uploaded when the user is "online", or it's done in the background.
> "Operating" offline just means that data is uploaded when the user is "online", or it's done in the background.
By "operating offline", I was referring to purely client-side apps that, once their static resources have been downloaded, can run completely offline without communicating to any server, ever.
Most mobile apps have a web interface, and they communicate with servers on a regular basis, making them not that different from their web counterparts.
I've said it before and I'll say it again: applications that require a specific machine or system to run make about as much sense as image, audio, or video files with the same limitation.
Does that mean that everything should be a web application? No, but it's the most developed vendor-neutral, machine/system-independent runtime available. So progress made there should influence progress in future runtimes of its kind.
The problem, imho, is that we're trying to cram too many features into a single layer of abstraction.
The web browser, running a complicated mix of JavaScript, CSS, and DOM rendering, is too big for a single component made by a single organization. Its monolithic design is hindering progress and putting security at risk as well.
Instead, why not have the browser run a very simple instruction set (even without a garbage collector) to which many scripting languages could compile? Because of its simplicity, security would be very simple to enforce. Imagine the proliferation of programming tools that would naturally occur.
It seems the designers of the web (W3C et al.) are focusing on the wrong, short-term goals of making end-users happy rather than developers.
You're underestimating the value provided by HTML: accessibility (everything from screen readers to increasing text size and contrast), hyperlinks (ad-hoc integration), malleability (adblocking is a social good, but it's not just adblocking - there are enormous amounts of tweaks and browser extensions that make the web more useful and easier to use for people).
If you create an environment that doesn't focus on making end-users happy, it will lose in competition with one that does. I know I'd fight for the users, despite working on a very complex single page app for a living.
> It seems the designers of the web (W3C et al.) are focusing on the wrong, short term, goals of making end-users happy rather than developers.
Since people are responding that users are more important than developers - I think there's another piece of insight hidden here. The Web was originally designed around users being the developers. Or, publishers, which was an equivalent concept.
Given that all of this added complexity is used mostly for the purpose of selling people trinkets they don't need (or selling their data to someone else), I'm still not sure this was a good development.
> Instead, why not have the browser run a very simple instruction set (even without garbage collector) to which many scripting languages could compile. Because of its simplicity, security would be very simple to enforce. Imagine the proliferation of programming tools that would naturally occur.
You can even encourage sites to use Web Assembly and assorted Web APIs in lieu of HTML and CSS if you want.
Now if you want to go further and deprecate and remove HTML and CSS, that won't happen. Even if we were to see a market shift toward "canvas + WebAssembly", leaving HTML and CSS unused (which I think is very unlikely), there's too much content out there that uses HTML and CSS that you will still want to browse.
I think your core premise is incorrect (what point is there to all of this development if it is not to make end-users happy?), but your desired outcome is already being worked on: https://github.com/WebAssembly
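As a sketch of how small that instruction-set layer really is, here is a hand-assembled WebAssembly module exporting a two-argument add function, instantiated from JavaScript (this works in Node or any browser with the WebAssembly JS API; the bytes are the standard minimal example, not anyone's production code):

```javascript
// A hand-assembled WebAssembly module: add(i32, i32) -> i32.
// Any language that can emit bytes like these can target the browser's
// VM; no garbage collector is involved at this level.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

Compilers like Emscripten emit exactly this kind of binary from C/C++, which is the "many languages, one simple target" property the parent comment is asking for.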
You can't build many of the newer features - Local storage, HTML5 video, WebGL, WebRTC, WebCrypto, etc - just with a new instruction set, since these inevitably must connect to the outside of the sandbox.
The evolution of the languages themselves is only a small part of what's pushing the web forward.
For me, the best place to go for all Web Platform things is MDN. It's far from perfect, but it's the only place that seems to follow what the growing web has to offer and is comprehensive enough to be treated as some form of digestible documentation.
As a full-stack guy leaning more toward front-end stuff, I was personally unaware of the majority of the new HTML5 APIs available or coming to the web soon. I was repeating the cliché that there is a broad set of functionality available only on native platforms.
When I learned that this is no longer true, I decided to contribute something to promote it. That's how What Web Can Do was created: https://whatwebcando.today. This is a small interactive overview of the device integration APIs available on the Web Platform today or tomorrow. Most of them were - or still are - tagged as "native only". I hope we can remove that label soon.
Most people who run a software platform (and want to make money off it) would be happy to have the problem of getting "too big" in terms of 3rd party software.
A good Java, Python, Ruby, or R programmer knows where to look things up in the global "standard library". And since there is so much of it, pretty soon your team has a project with 290 dependencies, and you learn things you never wanted to know just to get the build to work so you can commit the smallest change.
To be clear, the OP is talking about expansion of the platform itself, not libraries.
In my view, the real development burden is on the browser vendors, not the site authors. A site can always choose not to use some new feature.
Imagine if the web were designed so that each site were -- to borrow from Alan Kay -- a "fully functioning computer." It wouldn't be the web as we know it. A site could ship some fundamentally new behavior, and if other sites liked it, they could modularize and cache it. The focus in this alternate world is on building a system that is agnostic of its specific usage -- kind of like the Internet. A library system for the web would be the best possible outcome.
But that's not what the web is. It's a document viewer with featuritis. That's okay. It's ours. We're happy to have the curses of success.
I understand the sentiment, but it's hard to stop a train this big moving at this speed.
But the point remains: we've had WebGL, SVG, Canvas, web workers, local state, etc. - some for many years - and _most_ of us barely use any of them. We need time to explore these APIs, and the libraries that make them more usable.
I agree it's great as a general tool to stay up to date on generally interesting topics (i.e. what the entire HN community finds most interesting/relevant). I'm thinking more along the lines of something catered specifically to what I find interesting/relevant. Sort of like a Twitter feed, but instead of following people/projects/newsgroups, an ML algorithm caters my feed to highly relevant, personalized results. Because even when I'm following the most relevant people/projects/innovations today, I'm not necessarily learning about what will be the most relevant or interesting tomorrow.
This would theoretically make it possible for me to (more or less) stay "in the loop" without even trying.
The reason for something like this is that there's no shortage of information to consume these days. The problem is making the most efficient use of our time, so that we're not digging for the most relevant information. Instead we'd have it served to us, to consume passively, without worrying that we can't "keep up" with the speed at which things are happening around us.
"All occupations have an associated body of knowledge that practitioners dip into when they need to complete certain tasks. Visit any lawyer's or accountant's office and you'll find shelves filled with books that they use for reference."
This analogy to the tech world is appropriate. My dad is a lawyer. Very rarely can I ask him a question he knows the answer to off hand, unless it's in his very narrow specialty and regarding the state he practices in. But he knows how to look stuff up and has a general knowledge with which to analyze legal matters. And he's a civil attorney; he pretty much can't answer criminal issues at all.
I think programmers are a lot like this. If you want to grok the browser, you're going to lose. And that's not useful anymore anyhow.
This is the embodiment of "The Cathedral and the Bazaar", where you either have restricted development by a chosen few, or a truly open-source free-for-all where the best ideas win through popularity. Unfortunately for those who can't keep up, the bazaar is a lot harder to deal with, and a lot more chaotic, but it's the most free. Don't like an idea? Don't use it. Yes, things are changing fast, but that's why we are well-paid software developers. The young kids nipping at our ankles will put us out of jobs unless we level up and keep up with the tremendously fast pace of change. We are owed nothing, and it's up to us to keep up, not to slow everyone else down because we can't.
I don't understand why people write blog posts like "pushing for a one year moratorium".
Does this person really think they'll start some movement that Microsoft, Google, and Mozilla will take notice of and respond to with "non-action"?
In any case, once you accept the fact that you'll never know the full stack completely then you just move on and don't worry about it. You do your best, and understand you'll continue to learn.
That said, I understand the point. But I think it's just as much about choices in libraries and frameworks as it is about native technologies.
Coming from a background creating C++ GUI programs, the web's GUI creation process feels like cake. What is hard for me is all the options, best practices, and new JavaScript frameworks coming out every year. Settle down, I say. Plain HTML, CSS, and JavaScript are easy. No need to add endless layers on top, IMO.
Next, you have to choose a backend. Okay, again, lots of options. All of this, combined, does feel like unnecessary mental anguish to get really simple things done. I think we got to this point because everyone started making their own "frameworks" and languages. With a single OS vendor there was a commonality. With the web, it is every Tom, Dick, and Harry trying to lead the way.
One of the consequences of The Singularity is that new things get invented faster than the information about them can be disseminated and absorbed. The question is not how to slow this down. The question is how to cope.
I think a lot of the author's problem is that he's still approaching the web as a tool user, not as a tool creator. We have this artificial delineation between "back end" and "front end" developers, and "front end" developers are often treated as lesser. There is a huge emphasis on "don't reinvent the wheel", which I tend to think is more of a cry for help from the speaker that they don't understand what is going on and don't want the waters muddied for them any more than they already are. I even see people--styling themselves as expert front-end developers--questioning whether so-called "pure JavaScript developers" will be able to understand the changes coming with ES2015. Even people within the discipline don't think they/we are "real" developers.
I write text editors, compilers, and 3D graphics code in browsers. I built one of the first SVG-to-VRML-to-HTML/CSS transpilers. I built one of the first Canvas polyfills in 2007 by adapting code I already had from 3 years prior. I've built module systems and task systems (which are totally not hard problems at all). Over the last 10 years, I've probably built 5 of these things, each. I don't think of myself as a "front-end developer" or a "web developer" or any kind of "developer" other than what particular project I'm working on at the time. I'm a "simulation developer" when I'm working on simulation, whether that's in C or C# or JavaScript, it doesn't matter. I'm a "game developer" when I'm writing games, whether that is for PC or a stupid microcontroller or running inside of a CodePen session.
All of this is not to say "I'm awesome." I think I'm rather much average. The point of what I'm trying to say is: you cope with the rapid pace of change by understanding the principles. I can look at a Grunt or Gulp, a Babel or TypeScript, a React or Angular, and I can understand it very quickly, because they aren't magic. I know the way these things are built. Learning the particular systems is just finding the particular choices in those wheelhouses that the developers made for their particular cases.
This used to be a key component of computer science education. And in most cases, it still is. But a lot of people are coming to software development through paths other than computer science. And a lot of those who did come through CS have also drunk the Kool-Aid that academic stuff is not relevant in fast-paced industry. No. No no no no. It's the academic stuff that is your life vest in the deluge. But this lack of CS fundamentals is increasingly common, and especially so in the web/front-end world, specifically because it's becoming more accessible.
These things, this process, this overwhelming pace of progress, this is the point. This is what you want. The web isn't too big. Our capability of grasping it is too small. In the 60s and 70s, computer hobbyists could know everything there was to know about computers. By the 80s, only the truly dedicated could keep up with everything. By the 90s, the successful people gave up on trying to keep track of everything and chose a discipline in which to specialize.
The web today is no different, and that is a great thing. The web is no longer a document deployment system. It's an application platform. One that runs on every computer consumers care to talk about. I can write one set of code that successfully and meaningfully runs on desktops, smartphones, and VR headsets. The browser has fulfilled the promise of Java.
I was recently talking with someone about many standards, many ways to accomplish things. Consider two popular areas of technology: web development and machine learning. If I had to guess, there are likely tens of millions of professional practitioners in these fields and also many hobbyists.
I argue that with such a huge number of people it makes sense to have a wide range of choices.
And the author's reluctant advice is good: keep up to date at a high level of abstraction and dive deep when needed for projects.
How many people have taken Andrew Ng's machine learning online class? That is just one source. How many people support ML with low level mechanical Turk type work annotating data for ML? But yes, my figure might be high right now, but ML is a fast growing field. BTW, I remember in the 1980s how international conferences would attract just small crowds, although we made up for small numbers with lots of hype and enthusiasm. And there would not be very many conferences compared to today.
Has anybody taken a look at seif[0]? It's Douglas Crockford's early attempt to separate the display of applications from the display of static documents in the browser. It would make this whole mess more manageable.
A moratorium would be a big waste. If a developer has an idea and cannot start implementing it immediately, the idea may never be implemented; he may not be able to recall it after the moratorium. And if new ideas are instead implemented in an experimental stream, immediate feedback won't be there.
To me, the biggest problem with web development is not that it's too big; it's that the information is really disorganised.
For instance, take most of the front-end projects; finding good documentation is a fool's errand.
I think we as a community need to put more effort into making our projects easier to understand. We need to bring the barrier to entry as low as possible, stop assuming that everyone starting out with our projects has 5-6 years of experience, and help people figure out where to look for information related to our projects.
For example, a project I really appreciate is Redux; the documentation that comes with the project is really amazing. I mean, look at this:
https://github.com/rackt/redux/blob/master/README.md
They have put so much effort into making their project really easy to understand.
Don't get me wrong, I love googling for information and learning new things, but some projects are downright annoying when you're trying to understand them for a quick PoC or a Saturday evening hack.
We can't stop progress, and it's not fair to ask others to pause it just so we can catch up, just so our jobs become easier. We are developers: we love experimenting, we love learning more, and we love making things work better. But let's improve on one little detail: let's help each other out more.