Regarding the community of this project, it is one of the best I have seen. The project management and review processes are top quality. I have been contributing to this project for more than a month now, and it has been a great ride so far!
If any of you are looking for a top-notch repository to contribute to, drop in at our gitter channel[0] and say hi!
I think you're expecting it to be something more exciting than it is (I was too, at first). It's just a front end for static analysis tools. coala doesn't do any analysis on its own; everything is delegated to other tools. It just provides a way to pass configuration in and to parse the output, so you know which lines in the source file have problems.
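To make that concrete, configuration is typically passed in via an INI-style `.coafile`. The section and setting names below are illustrative (a sketch of the general shape, not a verbatim copy of any shipped example):

```ini
# Hypothetical .coafile: coala reads the section, collects the matching
# files, and hands the settings down to the selected bears.
[default]
files = src/**/*.py
bears = PEP8Bear, LineLengthBear
max_line_length = 79
```

Each bear then reports its findings back in a uniform format, which is what lets coala present results from very different tools consistently.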
Code Climate does something similar with their Engines[0] specification. Personally, I like how they use Docker containers to perform the analysis. It is perhaps a bit heavyweight for simple analyses, but it makes it really easy to install different language runtimes and tools.
Indeed. coala is more of a framework that allows you to plug in any analysis routine. However, it also provides the means to create new code analyses very easily and to combine arbitrary routines.
So yes, there are some very simple generic algorithms (line length checking, spacing corrections; we have things like variable name checking in the works, which only needs a few lines of "language definition"), and you can combine them easily with language-dependent algorithms, wrapped or not.
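As an illustration of how simple such a generic check can be, here is a minimal, language-independent line-length check in plain Python. This is a sketch of the idea, not coala's actual implementation (a real bear would be a class plugged into coala's bear API):

```python
# A toy language-independent check: flag lines over a maximum length.
# Works on any language's source, since it only looks at raw lines.

def check_line_lengths(lines, max_length=79):
    """Return a list of (line_number, length) for lines exceeding max_length."""
    problems = []
    for number, line in enumerate(lines, start=1):
        stripped = line.rstrip("\n")
        if len(stripped) > max_length:
            problems.append((number, len(stripped)))
    return problems


source = [
    "short line\n",
    "x" * 100 + "\n",
]
print(check_line_lengths(source))  # → [(2, 100)]
```

The analysis itself is a few lines; the framework's job is everything around it: configuration, output formatting, editor integration.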
This is useful for users because they don't have to learn and configure tons of tools, and they get some basic functionality for every language - even the ones they write tomorrow.
It's also useful for people writing static code analyses, because they can just write their algorithm and be done with it - why should everybody rewrite a CLI, editor plugins and so on?
I wrote a thesis about code clone detection - another algorithm that is basically generic and needs only a relatively small parsing part per language; coala let me modularize it meaningfully into parsing and language-independent AST processing. It took just a few functions to actually perform the analysis. People can now use it easily, which usually isn't true for research programs, and I had even less work than if I had written it without coala.
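To sketch the split between parsing and language-independent processing, here is a toy clone check: normalize tokens (the parser's per-language job), then hash token windows and report duplicates (the generic part). Real clone detectors, including the thesis work described above, are far more sophisticated; this only illustrates the modularization:

```python
# Toy language-independent clone detection: hash fixed-size token windows
# and keep any window that occurs more than once.
from collections import defaultdict


def find_clones(token_lists, window=4):
    """Map each token window of the given size to the places it occurs."""
    seen = defaultdict(list)
    for source_index, tokens in enumerate(token_lists):
        for start in range(len(tokens) - window + 1):
            key = tuple(tokens[start:start + window])
            seen[key].append((source_index, start))
    # Only windows occurring more than once are clone candidates.
    return {key: hits for key, hits in seen.items() if len(hits) > 1}


a = ["if", "x", ">", "0", ":", "return", "x"]
b = ["if", "y", ">", "0", ":", "return", "y"]
# Normalizing identifiers is the (language-dependent) parser's job;
# after that, both snippets look identical to the generic algorithm:
norm = lambda ts: ["ID" if t.isalpha() and t not in {"if", "return"} else t
                   for t in ts]
clones = find_clones([norm(a), norm(b)])
print(len(clones) > 0)  # → True: the normalized snippets share windows
```

Swapping in a tokenizer for another language changes only `norm`; `find_clones` stays untouched.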
Agreed that such tools can be annoying without the proper configuration. But I also don't really want anything automatically fixing my source code. That's a personal preference though.
coala always leaves you in control; you can tell it to, e.g., show a patch before applying it. If you explicitly configure it, you can have it automatically apply patches coming from certain analysis routines.
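Such opt-in auto-applying is done in the `.coafile`; the snippet below is a sketch of the shape of that configuration (the bear name is illustrative):

```ini
[default]
bears = SpaceConsistencyBear
# Explicit opt-in: apply this bear's patches automatically instead of
# prompting for each one.
default_actions = SpaceConsistencyBear: ApplyPatchAction
```

Without a line like the last one, coala shows each patch and asks before touching your files.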
One of our awesome contributors wrote a little tool that makes it easy to script such videos. The video on the website is kept up to date (a readme will come soon), and the script is available and patchable at https://github.com/hypothesist/termtype/blob/master/coala_in...
Thanks, I've filed an issue at https://github.com/coala-analyzer/website/issues/27 . We'll definitely look at that. You probably know how manpower in open source projects with no commercial backing behaves :)
Thanks, I added a link right on top of http://coala-analyzer.org/ to clarify it a bit. The website isn't particularly good, we're working on a new one which provides lots of cool things and better information, probably implemented by a GSoC student.
However, I must say that very critical info for first-time users is still unavailable or very vaguely documented. For example, I still have no clue what sort of analysis can be obtained from this page: https://github.com/coala-analyzer/coala-bears/wiki/Available.... I can see a bunch of bears, but that's your terminology. I would prefer a brief summary of each tool's analysis, or at least a link to the static analyzer which a specific bear wraps. E.g. "AlexBear" - I honestly have no clue what that or 90% of the other bears end up doing with my source code.
You have put so much effort into the implementation; please improve the clarity and coverage of the documentation.
Also, installation (`pip3 install coala-bears`) fails on OS X. Any ideas on how to fix that? Here is the error message: http://pastebin.com/tP3dZsik
So ATM the only way to get more info about a bear is by querying coala for it: `coala -B -b=AlexBear`
We have a project in the planning stage to make a browsable webpage generated from the per-bear documentation; we will also take a close look at the configurability and documentation of the individual bears for the next release.
Thanks a lot for incorporating the feedback. I have been working on a similar, internal tool within my company, and I have learned the hard way how all capabilities can go to waste if not communicated correctly to the stakeholders (developers, in this case).
I look forward to contributing to coala if I end up adopting it alongside my (internal) tool and find myself wanting functionality that isn't implemented yet. Good luck.
Cool project, but I gotta say the abbreviation feels a bit too forced... I think it deserves a more fitting name (no trolling, just thinking it would be easier to remember if the initials actually mapped to the keywords).
[0] https://gitter.im/coala-analyzer/coala