I don't like Python. Does that make me a bad person?
151 points by thedigitalengel on June 26, 2010 | 218 comments
I don't like Python.

I love C. I like C++ too. I used to write a lot of Java when I was a kid (as in high-school kid) but now that I've written quite a bit of C and C++, Java seems more like a toy language. I did enjoy Java back then, however. I have yet to do any major work in C#, but it seems to be a powerful language, especially when handling certain kinds of complexity.

I've been learning LISP on weekends for two weekends now, and I find it interesting. I know little bits of Haskell (am planning to learn it later this year) and seem to like it too. Maybe I like the idea of liking Haskell, I'll know for sure once I actually start writing Haskell.

Coming back to the point - I don't like Python. I've never liked it. I've tried working on various Python projects but have always found some excuse or other to prematurely back out. And mind you, I love coding - I spend most of my time writing code or learning new stuff about code.

My question to the hardcore python lovers is this (and mind you, by hardcore I mean the kind of people who wrote Exaile, not the kind who hack up a Python script every now and then to organize their iPod): No doubt many of you have come from C / C++ / LISP backgrounds. What is the paradigm shift you experienced? I look at a piece of C code in a certain way - how do I look at a piece of Python code? How do I think about Python? How do I think in Python (copied that one from Bruce Eckel)?




Yesterday there was an interesting link on HN comparing NLP in Python vs. other languages: http://nltk.googlecode.com/svn/trunk/doc/howto/nlp-python.ht.... I think this article shows the clearness of the Python language - and this clearness and simplicity is the reason why I love to code in Python.


This is the main draw for me. Compared to Python, other languages (well, some) seem to require you to do so many things that just seem... unnecessary.

I couldn't believe all the junk Java required of me when I took a stab at it. It's like being confronted with some unhelpful bureaucrat: I know what I mean, they know what I mean, but they're damn well going to make me trudge through all the nonsense so that what I mean is turned into something the bureaucracy understands. The linked NLP examples are great for illustrating this.


You don't really appreciate Python until you try and program in something like Java again.

The following code is readable and I have required a variation of it in a program before:

    sorted([ord(c) for c in
        set('letters in this sentence') & set('and this one')], reverse=True)
Doing it in Java would be a chore now.


Here's a translation of that to Clojure, since I'm learning that now:

    (use 'clojure.set)

    (reverse (sort (map int
                        (intersection (set "letters in this sentence")
                                      (set "and this one")))))
Edit: Take two...

    (use 'clojure.set)

    (->> (intersection (set "letters in this sentence")
                       (set "and this one"))
         (map int) sort reverse)
(experienced lispers, feel free to make this more idiomatic)


Here's one using a list comprehension to make it more closely resemble the Python example.

    (require ['clojure.set :as 's])

    (-> (for [c (s/intersection (set "letters in this sentence")
                                (set "and this one"))] (int c))
        sort
        reverse)


Not sure why we are posting alternative language implementations, but here is the Scala version :). You could say that this was an exercise for me in learning 2.8, which I am unfamiliar with. A for comprehension isn't needed because Scala already treats Strings as a sequence of characters.

(Set("letters in this sentence":_* ) & Set("and this one":_* )).toSeq.sorted.reverse

Or you can take advantage of Scala's rich choice of collections.

    import scala.collection.immutable.TreeSet
    import scala.math.Ordering._

    TreeSet("letters in this sentence":_* )(Char.reverse) & Set("and this one":_* ) toSeq

The machinery Scala has in place to make the second example work is quite impressive, and I was pleased to see that it worked. Even though the intersection method returns a brand new set, using implicit arguments it creates a set of the right type (TreeSet) with the right ordering function (Char.reverse), without any duplication of code in the standard library (like overriding '&' in every sub-class).


An easier Ruby solution would be:

("letters in this sentence" + "and this one").chars.sort.reverse


Except you aren't using sets, so you get duplicate characters.

I can just as easily write ("letters in this sentence" + "and this one").toSeq.sorted.reverse in scala. You could tag the end of your ruby statement with .uniq and you would be fine. Alternatively, you could wrap the above scala statement in TreeSet(). These solutions process things in a significantly different way than was done by the OP, though.


("letters in this sentence".split(//) & "and this one".split(//)).sort.reverse.join

I'm just learning, but I think this performs it correctly with sets.


I find the python code much more readable.


IMO, both suck at readability, since you have to read inner expressions first. An example with a "pipe" would be clearer.


I'm not sure any decent Java developers would encourage starting a new project in Java anymore. This is exactly the sort of thing that Scala is just as good at.

Scala isn't as succinct, of course, but it gets much closer!


If you look at my Scala examples above, they are both more succinct than their Clojure and Python counterparts.


Doesn't python have sorted sets? Oh, what a pity. Does python only have one single implementation of sets? Too bad. Is the runtime behaviour of that implementation documented? Let's take a look ... I'll be back. This could take a while.


Python has sets as a built-in datatype. If someone needs a different implementation, there's nothing stopping them from writing one.


A couple of points.

1. I love python.

2. The OP was being a troll.

3. People need to be more honest about themselves and their favorite pet languages. I can write a sorted set in assembler too, if I wanted. However, there have been several times that I've wanted a sorted set data type while coding in Python but found that none was easily available. I hate the "you can write your own!" counterpoint, because it is essentially meaningless unless you can prove that writing your own is as trivial as importing someone else's implementation.


I didn't mean to imply that lacking a sorted set datatype wasn't a pain in the ass; I was pointing out that the lack of multiple set implementations is a tradeoff that we make for all built-in datatypes that could support other operations but don't. (Personally, I would love to have the ability to pick a random element from a set. That would be great.)

By the way, in case anybody was wondering, here is a (relatively inefficient) way of making a sorted set in python:

http://code.activestate.com/recipes/576694/
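
For light use, here is a minimal sketch of the idea (not the recipe linked above, and with none of its polish): keep a plain set for membership tests and a sorted list, maintained with the bisect module, for ordered iteration.

    import bisect

    class SimpleSortedSet(object):
        """Toy sorted set: a set for membership plus a sorted list for order."""

        def __init__(self, iterable=()):
            self._members = set()
            self._sorted = []
            for item in iterable:
                self.add(item)

        def add(self, item):
            if item not in self._members:
                self._members.add(item)
                bisect.insort(self._sorted, item)   # keep the list sorted on insert

        def __contains__(self, item):
            return item in self._members

        def __iter__(self):
            return iter(self._sorted)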


Couldn't you just do the following?

  from random import choice
  choice(list(myset))
(I'm guessing you'd want to do it without having to convert it to a list first?)


That's what I ended up doing, but this was happening with a very large set, right in an inner loop. Luckily this was running in batch mode, so I could go get lunch or something instead of taking the time to make it faster.


Java is, as I said, a toy language.

A language that needs you, the client programmer, to define getFoo () and setFoo () methods manually can't be right.


Java doesn't require you to define getFoo() and setFoo(). Some best practice advice suggests you do. But you can leave the vars public and just use the . operator, if you choose.

It seems like Java wasn't designed to be efficient for a three-person shop to hack in. It was meant to be efficient for a 40-person shop building code as a team that they can maintain.

That said, I dread going back to my Java projects and updating them, even though I tried to maintain best practices in order that updating be as simple as possible. I've been immersed in php and javascript lately, so we'll see what it's like to go back.


Java certainly was not designed to be the "enterprise language" it is now. It was just similar enough to Smalltalk to replace it. (And Smalltalk has this culture of "if it is repetitive, extend the IDE to automate it", which to some extent influenced Java IDEs.)


My impression, although I'm not an insider and could be off base, is that it wasn't conceived to be an enterprise language, but its patterns were. I learned it when I started application programming for my startup (I had been doing scientific routines, i.e. FORTRAN and Mathematica), because I was curious about OO. A lot had changed since I hacked out a little C and assembler on my Mac+ many years ago, and the scale of the problems programmers and programming teams had to handle had changed significantly. It seemed OO became popular as a way to handle this scale of things. Java was presented as a solution for building maintainable apps, so I learned it.

I also tried to stay close to the OO paradigms (of course, I was just learning them), as I built this 50000 line beast called "Egorg." Now, I get to maintain and update it, so we'll see if it paid off.

On a side note, I have a real distaste for objects in Javascript. They seem clunky, almost like they were stuck on as an afterthought (I get a similar feel about Generics in Java). But the design of Javascript makes it quite easy to hammer out a script to do something cool. I'm worried about maintenance, though, because I feel like I'm evolving my own design patterns, so retracing my steps will likely be painful.

You are correct about the IDE thing. That does ease a large burden in Java.


>> feel like I'm evolving my own design patterns

I think that this is happening to many people who are doing a lot of 'non-trivial' JavaScript. The language is still very young and because the concepts that were chosen when creating the language are powerful and flexible (prototypal inheritance, objects can be accessed like hashes, first-class functions etc) there's always a million ways to code something.

OO-wise I think you can divide people into those who use the `new` keyword and build objects that are more or less Java-like classes, and those who create objects and prototypes on the fly and probably use a lot more ideas from the functional and LISPy side of things.


I have somehow settled into the pattern of using new/java-like classes for large objects that store a logical set of data and need to operate on it, and on-the-fly prototypes to group just data (or, perhaps, a trivial operation).

My worst crime by far (that I am aware of) is that I have no hesitation to use a global variable if it saves me from bending over backwards to get around it. I'm doing animations, and sometimes the algorithm just comes to me a lot faster if I let a var keep a global value and be used by lots of functions between calls to my animate() function (using window.setInterval() ).


Most of these patterns come from Smalltalk (which was probably the original language of complex enterprise OO systems). I think that what made Java this kind of enterprise language is that it was reasonably similar to Smalltalk, so all these patterns and concepts could be reused, while on the other hand it is similar to "normal" programming languages, so the average programmer does not have problems with its workflow and IDE (as is the case with ST). On the other hand, most uses of design patterns in Java code (e.g. 90% of uses of the Observer pattern, especially in cases when it is called Listener) seem to me to be compensating for the fact that Java is not Smalltalk (/JavaScript/Ruby/Python/C#, whatever with first-class classes and methods).

JavaScript is a language that is very different from anything else. Depending on how you look at it, it has either unconventional semantics (the majority view) or unconventional syntax (my view). Its object model is strong and often useful (you can emulate class-based inheritance with it, not the other way around), but I agree that it is to some extent an afterthought (although I have a strong feeling that our reasons are very different). Probably the largest problem of JS as a whole is the "Java" part of its name and the weird syntax deliberately designed to "look like Java" or "look normal", which simply does not match the underlying semantics.

Coming from a C/Unix background, I actually consider the IDE thing a burden in itself. I expect that there is some easily editable source that is transformed by a series of steps into the final program, but all changes happen only in the original input. Which is simply not the case in the Java/ST world, where you essentially need a pretty complex IDE. (As a side note: a few years ago, some Java IDE I was using for a quick experiment forbade me from saving a source file with a syntax error in it... WTF?)


LOL - I think I had the same IDE when I was first stumbling about with Java! It was definitely a WTF moment. It became useful for code expansion, refactoring, and unit testing, however. I'm now using vim to develop php and Javascript and I miss its convenience (on the other hand, I'm a lot better with sed now...). But using Firebug to find my errors is a bit slow.

What I find really cool about your comment is that you appear to have a deeper view of it all (meaning a deeper understanding of the underlying semantics), and I'm wondering how my view will change as I continue to work with it.


Interesting about objects in javascript. I have actually started feeling that prototype-based object orientation is less tacked on than its class-based cousin. The syntactical implementation is a bit strange (the 'function' keyword for objects, putting functions on the prototype object rather than the object itself), but the mechanism itself seems more generic and flexible. The implementation in Io is better I think, you should check that out before discounting prototype-based object systems outright.


Oh, I won't discount them (I'm not good enough to be a prima-donna about it), and it's nice to know that I will see how to use it to my advantage as I play with it more.


Urgh, I learned Java in school and recently had to use Java at work to do some stuff with a bad XML schema (think lots of "<parent><child><grandchild_1 /><grandchild_2 /><grandchild_3 /></child></parent>" wtf is element "child" for??). I used XStream for it, and started off trying to write the objects the "right" way, i.e. with private members for child nodes and getters and setters for them. However, having drunk deep of the Python well, I got about 20 minutes into it before I realised that my time was better spent writing the actual logic of the app and did a :s/private/public/g. The pain from my ulcer faded almost immediately. :)

*I realise that pro Java coders use Eclipse to generate their getters and setters, but I have also drunk deeply of the vim kool-aid and find myself going to ridiculous lengths sometimes to avoid other editors or IDEs. I keep trying to navigate with the wrong keys and trying to save with :w and using :sp to open other files when I use them.


One thing to look out for here is this: if you are writing getters and setters for primitive data types like strings and ints and stuff you're doing it wrong. Instead of

  public Customer(String name, int custId)
do

  public Customer(CustomerData cd)
where CustomerData is an object. In Customer you would have getters and setters for CustomerData rather than each item that identifies your customer. This is for extensibility: As your app grows, some part of it may need other ways to identify the customer. Adding these ways won't break your Customer interface. That's the purpose of the rigor of the design standard.

Now what I do, which goes against stated policy, is keep the CustomerData variables (String name, int custId, and so on) public so I can set them easily. The only time I require getters and setters for primitive types (for example, name) is if I were to read in data from a user generated form that needed to be sanitized.

My apps are full of these customized data objects, and it's a technique which has saved my ass as I added features etc.


> Now what I do, which goes against stated policy, is keep the CustomerData variables (String name, int custId, and so on) public so I can set them easily. The only time I require getters and setters for primitive types (for example, name) is if I were to read in data from a user generated form that needed to be sanitized.

It's funny, too - though this type of thing goes against best practices, the primary argument for always using getters/setters (some day you might need to validate input or do other sorts of logic, and it's a pain in the ass to put that in after the fact) is pretty much 100% shattered these days, because any IDE worth a damn makes it possible to switch from public field access to getters/setters in a couple of keystrokes ("Encapsulate field" and "Inline method" are typically all you need).

Though it's worth keeping in mind, most of the Java best practices tend to assume that you're writing code that other people will have to use, people whose codebases you won't have access to after you release your code, and that's a much more difficult context to program in. It requires a lot more bureaucratic nonsense to leave your API flexible when you have to worry about breaking other people's code with each edit you make, and extreme paranoia is more than justified when that's what you're worrying about.


ouch.

If you encapsulate all the data about a customer in a "CustomerData" object, do you really need a separate "Customer" object? Or do you mean, not the whole customer but larger bundles of information like addresses etc. that don't need any business logic or abstraction but are nice to have as one single bunch of data?

(I mean, there's a reason why C++ still has "struct"s even though OO zealots will tell you they come from the devil).


The latter. In my contrived example, I meant that "Customer" was the main object you would use, with methods etc. to work on customer information. Then the data that would pass between objects is encapsulated in its own (very simple) class.

Edit: I didn't parse the "ouch" the first time I read this. Sorry, I'm not trying to be an ass about it. But when I first started trying to figure out the "right" way to do it, I came across something (probably by Allen Holub) that confirmed something I was suspicious about: There is no sense in having a private variable if you are just going to expose it with public getters and setters (except for sanitation purposes). The real point was to pass in the data through data objects, and keep the private members unexposed. This also means (typically, at least for me) a lot fewer getters and setters. It's completely against the Java Beans "way," I think, but it works well in real life. I use Java Beans only for persistence, because I like the automatic JB-xml conversion. That is, I make data objects out of the data that I want to store into a Java Bean, and do the automatic xml storage and read thing. It took me a while to figure that out. My first app, I hand-coded the xml. Ugh.


I don't get what you mean by "toy language." Clearly some big, powerful, practical stuff has been written in it. Usually the phrase "toy language" is used to refer to those with little practical use. Java is unwieldy, but it has a rich ecosystem and compiles to very efficient code across a broad range of platforms.

If you want Java without the unwieldiness, try Groovy. It's Java with a ton of syntactic goodness added; for instance, the default variable scope is "create a getter and setter which I can optionally override." Groovy interoperates well with Java libraries. The downside is that it's a dynamic language, so there is a significant performance penalty.


As I said earlier.

"Secondly I don't mean to criticize Java. By calling it a toy language, I was simply referring to the fact that Java tends to make writing bad code difficult, and in doing this takes away some of the flexibility and power that you tend to associate with other languages."

Judging from the general reaction it seems I should not have used the term "toy language". My apologies if I ended up implying that Java has little practical use.


> Java tends to make writing bad code difficult

I've known people who could write bad code in Java with the utmost ease!

It'd probably be more accurate to say that Java deliberately limits its expressiveness, in order to make it harder for people to shoot themselves in the foot, and to make it easier for one programmer to understand what another has written.


> It'd probably be more accurate to say that Java deliberately limits its expressiveness, in order to make it harder for people to shoot themselves in the foot, and to make it easier for one programmer to understand what another has written.

That is exactly what I meant.

Bad programmers exist. They will write bad code, no matter what language they're made to use.


> Java tends to make writing bad code difficult

If this were actually the case, it would be a bonus point for Java. It isn't, however. Nor is it the case for Python, Ruby or even BDSM languages like Ada.

How many times have you come (when searching for a used car or trying to reserve a table at a restaurant) to a slow, ugly, non-functional URL that creates an error message if you press the wrong button at the wrong time? Usually those URLs ended in ".jsp", ".asp[x]" or ".cgi" or ".php".

I'd argue that the largest amount of bad code that exists is in Cobol, followed by Java, C# and then C/C++, PHP and various BASIC dialects. That has nothing to do with how good those languages are but rather with the fact that most code out there:

a) is created by IT departments or outsourcing firms staffed by mediocre or completely disinterested programmers (I've never been in such an environment, but my hair stands up every time I read horror stories on dailywtf or on progit)

b) shouldn't have been created in the first place (there are commercial and open source packages that do this) if it weren't for the NIH symptoms. There are tons of restaurants with their own order/registration forms despite the fact that OpenTable is widely known and available. There are tons of non-technology companies writing their own accounting systems even though there's a multi-billion/year industry around that type of software (that employs programmers who understand not just J2EE but accounting itself as well).

c) is forced upon its users (due to the sunk cost fallacy) and thus isn't exposed to market forces

d) is written in those languages as they're most common and are either easy to learn (PHP, BASICs) or are widely taught in colleges or trade schools (Java, C#, C++)

Truth is, there is no magic language bullet. Some languages are more expressive, more pleasant to program in. Some produce faster code. Others are more scalable in the sense of making it possible to write whole systems in one language. Some are more "safe" in the sense of being less prone to garden variety security attacks, less likely to crash the entire machine when there's a bug (at the cost of imposing restrictive abstractions on the programmer).

No language, however, is a substitute for a team of competent and interested developers solving a relevant problem.


C++ feels incredibly bureaucratic to me too, though. Like separating declarations from definitions into header files.

what do you think of ooc-lang syntax? http://ooc-lang.org/


I think the header-file and source-file distinction was more of an architectural decision (so that you could implement an efficient compiling and linking system) than a language design decision.

Just an opinion, though.


of course it was architectural, but that's precisely the problem - why are we still using a language designed for an age of computational scarcity?

another example: functions should be virtual by default.


> another example: functions should be virtual by default.

No no no!

C++ got this one right, and then C# did better ('virtual' to declare a virtual/overridable function in the base class, and 'override' to replace it in the subclass).

http://www.artima.com/intv/nonvirtual.html

Every time you say virtual in an API, you are creating a call back hook. As an OS or API framework designer, you've got to be real careful about that. You don't want users overriding and hooking at any arbitrary point in an API, because you cannot necessarily make those promises. And people may not fully understand the promises they are making when they make something virtual.


Python for me is the language closest to pseudocode. There are no decorations and few gimmicks. You simply write your ideas down into code.


On the contrary, it has had decorators since Python 2.4.


"decorations", not "decorators".

I think it's fairly clear from context that he/she means, "not a lot of red tape", rather than a reference to specific language features.


PEP-318 (http://www.python.org/dev/peps/pep-0318/) refers to decorated functions as having a decoration.

Counter-edit: My response was in jest. I deliberately misinterpreted the term for humorous purposes.


> Python for me is the language closest to pseudocode.

Python for me is the imperative language (with some functional features) closest to pseudocode.


The above comment applies here as well.


I agree that short python samples are as clear as a high level language can possibly be. Implementing algorithms, for instance, becomes incredibly easy in Python. I face problems when Python programs get above a certain length.

I've done some Django programming in the past. A view gets a request object (pardon me if I'm wrong or if this has changed recently) and is supposed to then render the response (or something similar, I don't remember exactly). Now what is a request? If Django were in C, I would know what a request object really was, grep for it, and look up the header file. If request has a GHashTable named post_data, I don't think I need two guesses about what it exactly is. Same goes for C++ and C#.

Then you have inconsistencies. The same method may return a string object or an integer on different calls. And this happens in practice - I remember when I was (trying to) model a relational database in Django, I found a ManyToMany field returning (when invoking values()) a list of dictionaries instead of a list of Model objects (which I was expecting).

Of course, if I hadn't been dumb enough to skip reading the documentation I might not have stumbled; but the point is that Python allows this to happen. And since I don't have the benefits of a compiler, anything other than basic syntax errors (the ones I can catch with py_compile) explodes at runtime.

The thing I've begun to notice is that scripting languages tend to require you to hold the entire structure of the program in your head as you code. In C++ you know that, unless you've gone out of your way to do some casting magic, a string get_foo() will always return a string. And you can always do get_foo().c_str(). In Python you're never sure.


You seem to be complaining that when using a language new to you in combination with a large framework written in that language you would have been forced to read the (very good) documentation to figure out what was happening.

The inconsistencies you mention are (in a well written code base) very rare. If you call a method get_name() odds are you are going to get back a string. If you seem to randomly get back an int or a dictionary or a datetime object it's a pretty good sign the code is foobar'd. Good Python programmers don't abuse the dynamic nature of the language. Methods should return predictable types (documentation is important); a method might return different types depending on the input, but this should be documented and obvious (say a parse date method that returns a datetime object if you pass it a string containing a date and time). The values() example you use is very clearly documented as returning a ValuesQuerySet that supplies dictionaries not models (http://docs.djangoproject.com/en/dev/ref/models/querysets/#v...).

In a number of years using python, the standard library and 3rd party libraries I've never been left wondering "If I pass X to this method what the hell type am I going to get back". Just like in C/C++ I need to remember it returns a list or a string or a dictionary but that's usually where the mental overhead stops.

Python is dynamic which means it's flexible. Flexibility is usually reached by sacrificing simplicity so Python encourages testing and good documentation. I would argue that these are both necessary features of any good code base, Python is just more upfront about this than a number of languages.
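
As a toy version of the parse-date example mentioned above (the function name and the accepted format are invented), the contract can simply be spelled out in the docstring:

    import datetime

    def parse_date(value):
        """Return a datetime.

        Accepts either a datetime (passed through unchanged) or a string
        like '2010-06-26 14:30:00'. Anything else raises a TypeError or
        ValueError. Spelling this out is the whole point: the flexibility
        is fine as long as it is documented and predictable.
        """
        if isinstance(value, datetime.datetime):
            return value
        return datetime.datetime.strptime(value, '%Y-%m-%d %H:%M:%S')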


I'm really not complaining about anything - the shortcoming clearly lies within me and not in Django or Python.

The purpose of this thread was merely to gauge what kind of paradigm shift people who were used to programming in C or C++ generally experience when they move to Python. Some people seem to find making the jump easier than others - I personally have found it difficult and have made little progress so far. Since I generally consider myself good with computers, I find this a little unnerving.


I'm surprised no one has mentioned Peter Norvig yet.

http://norvig.com/21-days.html

From your comments, I kind of get that you want to be a really good programmer, and part of being a good programmer is to understand the different ways you can wire bits and pieces together in the many different languages. I had a similar experience to yours in that I went from C->C++->Java->Javascript->Scheme->SML->Python. I can definitely tell you that learning Scheme and SML first helped me tremendously in understanding dynamic languages in general. In more concrete terms, understanding typing rules has helped me debug a program when I mess up in Python. For example, in SML, I can completely omit type annotations and the interpreter will infer the right type for me at parse time, and each function or statement must only return one type.

In dynamic languages such as Python, you can still do what you can do in SML or Haskell, but the runtime system makes no guarantee that your program as it is written is type-safe when the interpreter parses all your source files. Type checking is done at execution, when the code path in question is exercised. However, if you THINK about how the types are inferred in SML when you write in Python, you generally will not run into TypeErrors.

The same goes for functions that return different types. To prevent tripping up with these functions, just apply what you've learned in Haskell or SML when you are debugging library functions that do this, and avoid writing functions that return different types.


I think you just need different techniques for Python than for C. You can write a dummy view that calls dir() on the request object (which will probably tell you how to get the POST data), asks it for its __class__, asks it or its __class__ for their __module__, asks for sys.modules[request.__module__].__file__, and so on. If you just want to know what's in the request object, rather than what its methods are, you can get its __dict__ and print that out, and then you get not just the type of its contents, but an actual example. When you have a REPL and an example of an object, you can call help() on the object and generally get pretty good documentation on it. And there's the inspect module, like LordLandon_ pointed out, which I didn't know about. And when the going really gets tough, you can monkeypatch random libraries to change the way they work (e.g. to add logging) in order to debug the problem. All of the above applies to Python development in general, not just Django.

Django in particular has several additional advantages:

• it has an "interact" mode that pops you into a Python REPL so you can poke at all of this stuff interactively instead of having to save a source file and hop over to your browser;

• it has really first-class documentation;

• if you raise an exception in debug mode, the traceback includes dumps of all of the objects mentioned on all the levels of the traceback stack, along with a few lines of source code from the stack frame;

In short, Python offers much more power to undertake the kind of analysis you were wanting to do than C or C++, but you have to do it dynamically, instead of statically.
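
A rough illustration of that kind of poking, using a made-up stand-in object rather than a real Django request (all the names here are hypothetical):

    import inspect

    class FakeRequest(object):
        """Hypothetical stand-in for a framework request object."""
        def __init__(self):
            self.method = 'POST'
            self.POST = {'q': 'python'}

    req = FakeRequest()
    print dir(req)                          # every name the object responds to
    print req.__dict__                      # the instance data, with example values
    print req.__class__.__module__          # where the class lives
    print inspect.getsource(req.__class__)  # and its source, if it came from a file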


> If Django were in C, I would know what a request object really was, grep for it, and look up the header file.

If Django were in python, you wouldn't even have to grep

    >>> import django.http, inspect
    >>> print inspect.getsource(django.http.HttpRequest)


Or even easier, in ipython:

    >>> from django.http import *
    >>> HttpRequest??   # prints out the source code for HttpRequest


It sounds like you would like Haskell. Its type system can be really paranoid (it lets you express many invariants through it), yet remains very flexible. And if you can't fathom the segregation of (side) effects, there's still Ocaml.


OCaml is a dying, soon obsolete, language. Better use F# which has way better libraries and tools.


As I understand it, OCaml is in heavy use at a number of shops in the financial industry. I wouldn't say it is dying just yet. F#, on the other hand, is new and, compared to OCaml, untried and untested.


The number of financial companies using F# is about the same as for OCaml (but the numbers are not evolving in the same manner). F# uses .NET, which is way more tested than the OCaml runtime. F# uses .NET libraries, which are way more tested than OCaml libraries. The F# core language is still a bit less tested, but considerable work is being done. In contrast, there is only one full-time person working on OCaml. The OCaml community is dying, while the .NET community is huge.


.Net being tested is not the same as F# being tested.

The idea that the OCaml community is dying is purely subjective. OCaml, having been around longer than F#, may just be going through a perceived lull.

Either way, generalizing that F# will replace OCaml as the go-to language in the ML family from a perceived decline in the OCaml community is probably not the right call.


We also use Ocaml here (Citrix), and people agree that Ocaml seems to be going nowhere. Some projects have started switching to Haskell.


Don't overlook the fact that F# is built on top of a proprietary, potentially patent-ridden platform. For many people, including me, that's a non-starter.


This is something I run into with perl/php/python/ruby/etc too. Perhaps I've just gotten used to outsourcing some of my brain to the compiler, but I really like to lean on it to get good results. Run-time blow-ups suck, and I've always preferred using a compiler to writing all these test cases to catch run-time type errors.

I really like using a dynamic language for some tasks. I almost never use C, C++, or Java for any task I hope to start and complete within a day. Those jobs are small enough it's easy to keep track of what you are passing around.

I spent 6 months writing all of my utilities in Python. I tried very hard to like it, but it just didn't tickle my fancy. I do like perl and ruby as dynamic languages. I think perhaps the issue is, if you are like me and you don't like using dynamic languages for large programs, the extra clarity of python isn't enough of an added bonus.


To complement what other people are recommending here, I'd say that in cases like this I only sometimes read the documentation. Other times I put a line in my code:

  import pdb; pdb.set_trace()
which makes it enter the debugger at that point, and I simply inspect the object that I'm really getting, looking at what it supports, the documentation on its methods, etc.

Doing it this way is not always feasible, since sometimes there's a prohibitive amount of setup code that needs running before you can get a real object. For short code paths though, like Django apps, it works really well.


what makes you think you can't grep for the request object/class?


What I implied was that it was not immediately obvious what the class was.

    def foo(self, stream): ...

What is stream here? Compare the C++ version:

    void foo(std::ostream &out) { out << "bar" << std::endl; }

Hope you get the point.


So, grep for what's calling your function, or use pdb.set_trace()


The ruby examples don't behave the same as the python version since they use ARGF in place of stdin. They are also non-idiomatic. Here is a correct, idiomatic Ruby version:

    $stdin.read.split.each do |w|
      puts w if w.match(/ing$/)
    end
or as a single 57 column line:

    $stdin.read.split.each { |w| puts w if w.match(/ing$/) }
we can also use the match operator:

    $stdin.read.split.each { |w| puts w if w =~ /ing$/ }
I'm inclined to assume that the other language examples are also suspect.
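
For reference, a plain Python take on the same task (not necessarily the exact version on the linked NLTK page) would be something like:

    import sys

    # Print every whitespace-separated word from stdin that ends in "ing".
    for word in sys.stdin.read().split():
        if word.endswith('ing'):
            print word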


Why doesn't anyone ever mention my favorite language, Tcl?

    foreach line [read stdin] {
        foreach word [split $line] {
            if [string match *ing $word] { puts $word }
        }
    }


One reason is that people haven't looked at Tcl in a while and so they "already know" what Tcl brings to the table. There have been a lot of changes/additions to Tcl in the last couple years alone. It is a very nice and consistent...and to borrow from Ruby...FUN language to program in.

The one thing that gets me is the "it's already been done" mentality. If you love Tcl then you want to create things in Tcl for the world to use. I have thrown out a couple ideas like "mailman in Tcl" or "nagios in Tcl"...but I always get "it's already been done". Sure, I could have done it anyway, but it gets your spirit down when you hit that wall. I know they have been done with other languages. Let's do it in Tcl to see if it can be done better!


Yeah .. I have not faced this kind of wall until now but I get your point.

The whole scripting language issue is a complicated one imo.. People want to either use 100% scripting or 100% system (i.e., C), which is not alright. Combine scripting & system according to your needs. This is why I like Tcl the most: It prevents you from using it where there might be computational bottlenecks (e.g., in algorithms). It is meant to complement C, not replace it. Anyway this is a separate thread.


Your code sample isn't quite right.

[read stdin] returns a string (not a list of strings), which is parsed as a list when you try to loop over it. The parsing breaks the string into a list of words, so the second loop is redundant. The parsing could fail if the input file isn't valid list syntax (unmatching {}'s). [split _] makes it safe.

    foreach word [split [read stdin]] {
        if [string match *ing $word] { puts $word }}
I love Tcl too! Tcl is a scripting language for C; Python is a less-powerful version of Lisp.


Yes you're right hehe. less powerful version of Lisp? that's weird but oh well ..


Don't know why, but I feel the need to contribute a shorter Ruby sample that feels closer to conventional Ruby. :)

    ARGF.lines.each { |line| line.split.each { |word| word.match(/ing$/) and puts word } }


Equally cryptic to perl syntax imo..


I'm a guy from a C/C++ background who did linux kernel development, and picked up python along the way and used it to do desktop tools. I used python (without knowing any of it) to write a disassembler for a project I was doing in 2003 on a non-x86 processor. It took me 5 days to learn the language enough to do so and to write the entire tool, at which point I "got it".

First off, Java isn't a toy language. You've characterized it improperly. Java is a bureaucratic language. You have to cross your t's dot your i's, fill out your requests in triplicate at the FactoryFactoryFactory to get the objectOfYourDesire. Why is this so?

This comes back to how C++ works on projects, especially large projects and midsize projects with a couple of mediocre programmers thrown in (because, in commercial software development, you rarely get control of your team). C++ does not work well for large-team development unless 1) you have very capable people and 2) you can fire everyone who steps outside the "defined okay subset" of C++ you use on the project. The language is too big, and certain parts make it too uncertain, for people to just willy-nilly use all the language features. So Java is basically Sun's "redoing" of C++, where they took its wild-west nature and made a BureauDisney version in which you can't do a dozen things that happen in C++ all the time when good programmers go off the reservation or bad programmers enter the equation at all.

C++ has uses, but it's the sulfuric acid of programming languages: you have to have good personnel control and fire those who screw around outside the boundaries of the right style. Put one guy who programs in C with classes in the same group as another guy who programs like it's Ocaml without garbage collection, and you're in for a mighty piece of hell.

So that's where Java came from. That's why it is what it is, and I hope I've highlighted a non-business issue with C++ that comes from its overabundance of "understanding-breaking powers", which don't mesh well with total programmer freedom.

Now let's look at what python was, and what it is now: Python was originally designed as a substitute for the pascal/BASIC line of programming languages (specifically a language called ABC). It was designed to be very, very clear. It still is. It is still quite terse compared with C++/Java/C; it's wordier than perl and ruby, but very much clearer for even non-python programmers to decipher much of the time.

Over time, it grew into an Application development and webserver development language. Why?

It has a very easy way to interface to C code. This is important, because C does great module level integration, but once you get at the system level, you start to get issues with namespaces, tracing bugs, etc. So python became an alternative for tying C code together.

It writes very quickly. While you may not understand how to write it quickly yet, usually python/ruby/perl will be quite a bit more productive on a feature basis than Java/C#, and tons more productive than an equivalent C/C++ project. This has to do with the fact it takes very few lines of these languages to "do a job".

For you currently, you're possibly less productive in python right now than you are in C++; I've honestly found this doesn't hold true more than 40-80 hours into learning python development while doing a real project for almost anyone who can handle commercial C++ software development. The C++ people do python faster than they do C++ and it has many fewer issues in the end than the C++ they would have written. We use weave.inline or something else to speed up the inner loops with some native C, if even required for that application, and all is good: it works just as fast for the user or application, and was completed much faster with far fewer obtuse bugs.

If you spend those 40-80 hours on the language, you too will likely be a faster developer in python than C++ for many features.

Some tips: You are not doing C, you are not doing C++, you are not doing Java. Don't pretend you have to do the bureaucracy that those languages require. If you write a python program with absolutely no functions, it will run just fine. So if all you need to do is process a small amount of text or open a little server, you can do this in a flat file with just a series of lines of code straight out of main() in a C program. And with the size and completeness of the python standard library, you'll rarely have to write the rest of the functions yourself.

Secondly, it's important to learn idiomatic python today. If you write things the way they're "supposed to be" written in python today, you get speed on par with C/Java (or only 3-5x slower), and it's really freaky to see an interpreted language do that.

Thirdly, it's important to learn and use the standard library rather than writing your own code. The Python Module of the Week blog is a great resource for this: http://www.doughellmann.com/projects/PyMOTW/ Much of python's power comes from a stupidly complete standard library that does much of what you need, which allows you to write 20-line programs.
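
For instance, here is a complete little script in the flat, no-functions style described above, leaning only on the standard library (the task is invented for illustration):

    # Print the ten most common words in a file, most frequent first.
    import collections
    import re
    import sys

    counts = collections.defaultdict(int)
    for word in re.findall(r"[a-z']+", open(sys.argv[1]).read().lower()):
        counts[word] += 1

    for word, n in sorted(counts.items(), key=lambda pair: pair[1], reverse=True)[:10]:
        print word, n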

Lastly, learn easy ways to fall back on C/C++ in python. This way when you use python for an application it turns out it's too slow for, you can still add a bit of C/C++ to get the speed you need. You can embed C++ inline and have it compiled on the fly with weave, so I suggest you learn that.

http://www.scipy.org/Weave for getting it

http://www.scipy.org/Cookbook/Weave for examples of use

http://www.scipy.org/PerformancePython for examples of how fast this runs vs other things.
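
A minimal sketch of the weave.inline pattern those links cover, roughly as it was used at the time (it needs scipy's old weave package and a C compiler; treat it as illustrative rather than exact):

    from scipy import weave

    def sum_to(n):
        # The C snippet sees 'n' as a C variable and hands its result back
        # through weave's special 'return_val' slot.
        code = """
            double total = 0.0;
            for (int i = 0; i < n; ++i) {
                total += i;
            }
            return_val = total;
        """
        return weave.inline(code, ['n'])

    print sum_to(1000000)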


""" Java is a bureaucratic language. You have to cross your t's dot your i's, fill out your requests in triplicate at the FactoryFactoryFactory to get the objectOfYourDesire. """

This is such an absolutely fantastic way of succinctly describing why I dislike Java so much. Bureaucratic!

Absolutely fantastic post, get this on a blog somewhere. I seriously wish I could upvote you more.


It's also a great way of specifying why I love it. Having been bitten on the ass one too many times by typing problems in a Python project, the feeling of confidence I get from doing the Java bureaucracy (to some extent mitigated now I do Scala instead) is something I have really come to love.

It's like a security blanket for me.


(Disclaimer: Not meant as flamebait)

Not to be a prick, but if you're having "typing" problems with Python, you're using it wrong.

Types are almost irrelevant in Python. What matters is whether, in a given situation, the method you're calling is supported and does what is intended. The actual type/class of the receiving object doesn't really matter, ditto for any arguments you pass to the method.

While this may seem highly unsafe and brittle, it has important benefits. You can just hack up some code that "just works" in a few minutes, without having to sacrifice a goat to the compiler first, making sure that all the correct types are specified, subclasses are defined, interfaces are implemented completely, etc. The rigid type systems in other languages often require this and more; making changes to your software becomes a chore. Rapid prototyping this is not.

In Python, when used correctly, you don't have this problem. Write a bit of code, write a test, run it, and decide what to do next. Ideal for testing out ideas. It doesn't work? Throw that shit out and write something else; two minutes later you'll be testing your new code again. Yes, you do need to have unit tests, but the same is (or should really be) true for most languages. Just because something compiles, doesn't mean that it does what you want. (Except maybe in Haskell. :-)

Interestingly, the quickly-written prototype might well be good enough to become the actual production code, with some additions. Just keep you tests up-to-date, for reasonable use cases, and there's no reason why your code shouldn't work in real life. Sure, that function foo(x) that wants to call x.bar() will fail if you call it with an x that doesn't support the bar() method. So don't do that, then! ;-)

Anyway, to summarize, I personally would much rather have Python's flexibility, paired with unit testing, than Java's lack of flexibility paired with compiler checks.
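
To make that workflow concrete, here is a tiny sketch built around the hypothetical foo(x)/x.bar() from above:

    import unittest

    def foo(x):
        # No type declarations: anything with a bar() method will do.
        return x.bar() * 2

    class FakeThing(object):
        def bar(self):
            return 21

    class FooTest(unittest.TestCase):
        def test_foo_doubles_bar(self):
            self.assertEqual(42, foo(FakeThing()))

    if __name__ == '__main__':
        unittest.main()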


Well of course I'm doing it wrong, I got crashers during execution :)

I understand the concept, but sometimes in a medium-size project, you just accidentally send the wrong object back. It happens. But Python won't tell you, and it'll blithely wait for the code to get exercised before dying. I don't remember exactly why I was having this problem, but the bug wasn't shallow, and required a certain confluence of exceptions to occur before it would fire. The sort of thing that would even escape most unit testing.

I would never force static typing on Python or Ruby or upon any language that didn't want it, and when I switched to Python I was very pleased with dynamic typing. However, I find myself much happier with the stronger compiler checks. Writing a medium project in Python now frays my nerves!


"I don't remember exactly why I was having this problem, but the bug wasn't shallow, and required a certain confluence of exceptions to occur before it would fire. The sort of thing that would even escape most unit testing."

I agree that this does indeed happen... I think it's a matter of, the more flexible a language is, the more rope you get to hang yourself with, even unintentionally and unexpectedly.

(Anecdote:) For my work, I have been developing and maintaining a fairly large Python code base... (~70K lines... comparable to about 700K lines of Java <wink>). Another developer changed a module so it would access the database upon importing, which was a no-no (IMHO), so I changed it back, using a class that gets the desired info from the db upon instantiation, and some sort of singleton construct so the same instance would be reused. I also decided to write a test to check that when this module was first imported, there would be no instance of said class yet. To make sure it was a first import, I removed any references to the module from sys.modules, then did an import. This worked great... until I ran the whole test suite using nose [http://code.google.com/p/python-nose/], which has its own ideas about import order and such. This turned out to be surprisingly hard to fix... a non-shallow bug that wouldn't even be possible in less flexible languages.


Regardless of the language, when you get to over 50K lines of code, cute tricks need to go out the window.


Honestly, I program in Python every day and my biggest complaint is that duck typing and dynamic types as a paradigm is not well suited for large projects (like the ones I work on). When the codebase is larger than you can keep in your head, the types become a huge issue. When a bug is at one level of an application you have to figure out what argument was passed in. In a language like Java or C#, it is trivial to follow the trail of objects. In Python, it is cumbersome to say the least. Likewise, if you do adopt duck typing you will eventually find that there is no way around dispatching via types at some point. Again, it is not that huge of a deal, but it ends up being boilerplate-ish code you don't want to write.

Tests are effectively the answer, but they are a pain to write, and having to document your type information via tests seems a lot more cumbersome than just doing "int my_var".

The party line with Python and dynamic typing is true in many cases, especially when starting a new project. As time goes on though, things get confusing and tests are rarely good enough to offer the same contract static typing offers. To say "you're doing it wrong" is somewhat correct, but no one does it right all the time.


protip: the next time you need to quickly find the type of a variable, insert a trace like:

    import pdb; pdb.set_trace()

then run the program, and you'll get an interactive prompt and you can just look at everything in that state.

    dir(myBuggyVariable)


Note that you start with "you're doing it wrong" and end with "I personally would much rather". :) You also switch from arguing against type checks overall (could be Haskell) to arguing against Java.

I'd say that strong compiler checks become more helpful as the size of a program increases. You can keep less of a big program in your head, and big programs have more code paths that could potentially need testing.


Hm, I didn't mean to argue against all type checks, just the flavor of static typing as described by the OP (i.e., Java). I don't consider Haskell's type system "rigid"; it's strict as hell, but it's also very expressive.

Either way, both flavors of static typing require development methods that are very different from Python's, though. My original point was that in Python, you shouldn't really have to worry about types, or checking them (in tests or otherwise). You can, but (IMHO) you'd be going against the grain of the language.


I think zeph is saying it a little strongly, but doctests find a HUGE portion of typing errors with python. Additionally, many people who do have typing problems find many go away when they make their python code slightly more object oriented.

Here is a link to doctests. They are godly, and unlike any testing library in that they aren't really a library; they're a comment scanner: http://docs.python.org/library/doctest.html

Smallest amount of rigmarole required to make testing happen I've seen in any language.
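
A minimal example of what that looks like; the tests live in the docstring, and doctest runs whatever examples it finds there:

    def add(a, b):
        """Return the sum (or concatenation) of two values.

        >>> add(2, 3)
        5
        >>> add('py', 'thon')
        'python'
        """
        return a + b

    if __name__ == '__main__':
        import doctest
        doctest.testmod()   # scans this module's docstrings and runs the examples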


If you like static guarantees, you might want to look into the type systems of, say, Clean or Haskell.

Java is almost untyped by comparison. Languages like Ocaml and F# occupy a middle ground.


Much of Java's complexity is cultural and not imposed.

Take the Java web framework, Play, as an example. http://www.playframework.org/


That's true, but the flip side is that the language is tailored to that culture. They love it and it loves them. The culture has grown around Java more than any other language because of Java's features. It's statically typed, but its typing isn't very powerful. It's dynamic, but makes it hard to write the sort of dynamic programs you'd write in Python or Smalltalk. It has methods but no functions. It has anonymous classes but no simple closures.

All of Java's features conspire to create a language that's pretty verbose and doesn't do a lot to make things easier for you (aside from simple object orientation). If you like spelling everything out in excruciating detail, that's good. If not, the language doesn't want to twist your arm, but it's certainly not going to do anything to help you.


I haven't used that blog in a while, but here you go: http://rowdybytes.blogspot.com/2010/06/learning-to-love-pyth...


To use python you need to download it first, from http://www.python.org/download/

It'll work from Hong Kong, and probably most places in the world, but if you cross the Chinese border into Shenzhen, you'll get "The connection has been reset". Same with other places in mainland China I've been.


> You are not doing C, you are not doing C++, you are not doing Java.

This is the best tip that could be given. In the SICP course at Berkeley, the instructor puts it as:

"Don't ask me what the language will do if you do something stupid, the answer will always be either you would get an error or the computer will do something stupid. Don't try to convert java code into LISP, try to think in the language."


I had to solve a real problem in lisp before, and I tried, I really tried to think in lisp. But after hours of frustration with the language itself and the interpreter tools, I gave up, wrote python that worked in 10 minutes, tested it, and converted it to lisp by hand. At this point I was left with lisp code that worked, but may as well have been machine code for how readable it was to me.


[deleted]


I hate to get all meta, but as of this writing the parent comment has -2 points. I understand that comments that just say "Interesting :)" are frowned upon on HN because they add no information. However, in this case the commenter is the OP. He's expressing thanks for a lengthy, informative response.

"Thanks for responding, that's an interesting perspective" is not a class of comment that deserves downvotes. It's completely appropriate.

(Update: The comment was deleted as I wrote this.)


I agree, it's probably going to be much more satisfying to give long answers if those can be rewarded with something more than an impersonal upvote. It's a mistake to not let those who ask questions post short "thank you" replies.


I've been using Python since around 1996. Back then I had used, or was still using, C, C++, Turbo Pascal, Delphi, BASIC (groan) and assembler.

Python was so much more high-level than all these other languages that it was staggering. It was pretty weird to use it at first. Where are all the declarations? What are these "lists" and "dictionaries"? You mean I can just stick any old thing in an object or even a class? Etc.

Soon it became obvious how much more powerful this new language was, and I enjoyed replacing reams of Pascal/C code with a few lines of Python. I tried to use it at work too, but failed because it was considered "unmaintainable" by the powers that were (this was around 1999-2000). Of course, this just fueled the feeling of having an obscure but superior language rebelling against the big evil statically typed empire. :-)

Anyway, this used to be some of the appeal of Python. I suppose it still applies to some extent, although since then, Python has been passed left and right by languages that are more powerful, more flexible, more functional, more "fun", and whatnot. Nowadays there seems to be a meme going around claiming that e.g. Ruby is fun and flexible, while by contrast Python is boring and conservative. It wasn't always like that. I distinctly recall people discovering Python and exclaiming that "programming is FUN again!".

Then again, all of these things are in the eye of the beholder, mostly. If you don't like Python, no big deal, especially if you have both lower-level (C, C++) and higher-level (Lisp) languages at your disposal. (Personally, I used to think that Ruby was butt-ugly and uninteresting... these days, I have taken more of a liking to the spirit of the Ruby community, which (perhaps ironically) reminds me of Python's in the 1990s.)


http://tryruby.org You'll probably still dislike it. But at least you can feel more comfortable comparing python to something.

I prefer Ruby, but appreciate Python and Perl. I come from 14 years of C/C++ coding.


Yes, you're a bad person. We're assembling a re-education camp for People Like You right now.


The paradigm shift is that in Python the most direct way is best.

Examples.

In Java, accessor methods are considered good style. In Python, directly accessing a member is better. (more concise, makes client code clearer.) If you need to do magic, you can use properties.
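A rough sketch of that migration path (class and attribute names are made up):

    # Plain attribute access first; no getter/setter boilerplate.
    class Account(object):
        def __init__(self, balance):
            self.balance = balance

    acct = Account(100)
    acct.balance += 50          # client code stays this simple

    # Later, if validation ("magic") becomes necessary, a property can be
    # swapped in without touching any client code.
    class CheckedAccount(object):
        def __init__(self, balance):
            self.balance = balance

        @property
        def balance(self):
            return self._balance

        @balance.setter
        def balance(self, value):
            if value < 0:
                raise ValueError("balance cannot be negative")
            self._balance = value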

In C++ and Java, you use inheritance to denote which classes have a given behavior. In Java, you also use interfaces. In Python, you simply give a class the needed method(s). (I.e., you use duck typing.) That flattens or removes lots of class hierarchies. If you need to do magic, you can use isinstance(), issubclass() or hasattr().
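A tiny sketch of that flattening (made-up classes):

    # No common base class or interface needed: anything with a quack()
    # method will do.
    class Duck(object):
        def quack(self):
            return "Quack!"

    class Person(object):
        def quack(self):
            return "I'm quacking!"

    def make_it_quack(thing):
        # the "magic" escape hatch, only if you really need the check
        if not hasattr(thing, "quack"):
            raise TypeError("object can't quack")
        return thing.quack()

    print(make_it_quack(Duck()))      # Quack!
    print(make_it_quack(Person()))    # I'm quacking!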

In Python, you use the built-in types a lot. Many problems decompose easily to lists, sets and dictionaries. That reduces code dramatically. Java's containers and the C++ STL's, in contrast, are so clumsy they make me look for ways to avoid them. If you need to do magic, you can subclass the built-in types or build new types with the same methods as the built-ins.
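For instance, a toy word count using nothing but built-ins (made-up data):

    # Word counting with no hand-rolled data structures.
    text = "the quick brown fox jumps over the lazy dog the end"

    counts = {}                        # dict instead of a custom structure
    for word in text.split():          # list from str.split()
        counts[word] = counts.get(word, 0) + 1

    unique_words = set(text.split())   # set for dedup / membership tests
    print(counts["the"])               # 3
    print(len(unique_words))           # 9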


Python is a great language. But there is nothing bad about preferring one language/style over another. There are great programmers in all the camps.

Try the other two scripting languages - Ruby and Perl. I would prefer Ruby. See if you like the way things are done there.

If you find that you are more at home with this language than Python, try analyzing what makes you prefer it over Python.

I would suggest you start with the basics of the Ruby language, and then try building a simple web app with Sinatra. I find the Ruby style much more comfortable than Python.

But if you don't like any of these languages, it's still all well and good. If you can get shit done with C#, that's all that counts.


A lot of us don't like Python. It's a language with a mediocre design, a somewhat frustrating syntax, and a stubborn refusal to commit to such essentials as real closures and lambdas.

It's basically a language written by a group of people who don't trust you (as a Python user) to be knowledgable and educated. Decisions have been made based on the assumption that you are bad at your job.

Now, everyone seems to comfort themselves with the notion of making it “easy for maintenance”; repeating “I am smart, surely, but not everyone can be as smart as I am.” But this is ultimately a dodge that feeds into the problem endemic to our industry: a disdain for everyone else and everyone else's methods if they differ slightly from our indoctrinated best practices.


> It's a language with a mediocre design, a somewhat frustrating syntax, and a stubborn refusal to commit to such essentials as real closures and lambdas.

I agree about closures, but why would lambdas be so much better than named functions (already available)? I think named functions help code readability, which is among Python's top goals.


> I agree about closures, but why would lambdas be so much better than named functions (already available)?

Because sometimes you want to conditionally create functions on the fly. Named functions are a very awkward way to do this.

It is unfortunate that we even have to have this argument. Anyone with significant experience in functional programming can tell you how useful unnamed functions are.

And it's important to realize they are not necessarily anonymous in local scope. They're anonymous in the global scope.

> I think named functions help code readability, which is among Python's top goals.

Anonymous Lambdas need not be a detriment to readability. Why do you think it would be otherwise?
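To make the "create functions on the fly" point concrete, here's a small sketch (made-up example):

    # Building small functions on the fly; a named def for each variant
    # would just be noise.
    def make_scaler(factor):
        return lambda x: x * factor    # closes over `factor`

    double = make_scaler(2)
    triple = make_scaler(3)
    print(double(10), triple(10))      # 20 30

    # Or inline, where the function is used exactly once:
    pairs = [("b", 2), ("c", 3), ("a", 1)]
    print(sorted(pairs, key=lambda p: p[1]))   # [('a', 1), ('b', 2), ('c', 3)]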


You owe it to yourself to learn at least one dynamic high level language. The advantage of Python (and Ruby, which I prefer) is that, for the right tasks, it's an extremely productive language. Compared to C++ or Java you have to write far, far less code and you can usually find good libraries that reduce the amount of code you have to write even further. Python doesn't have the elegance or flexibility of Lisp, but it's an extremely practical language for getting anything done that doesn't need C++ performance.


Curious what you prefer about Ruby. I'm very familiar with Python, and have only looked at Ruby long enough to decide (mistakenly?) that it wouldn't do anything for me which Python doesn't already.


Python and Ruby are very much alike but I find Ruby more ergonomic. It's more consistently object-oriented, which means I spend less time looking things up because the core is easier to remember. The syntax is more expressive, which makes my code shorter and easier to read. The consistent use of blocks everywhere is more intuitive and consistent than Python's mix of loops and comprehensions. I find I can take in a block of Ruby at a glance. Python requires closer inspection. Everything in Ruby is an expression that returns a value whereas Python has statements and expressions and it's up to you to remember which are which. Ruby feels to me like the product of an astute student of languages that borrowed the best ideas from Smalltalk, Lisp, Icon etc. The design mistakes in the first versions of Python, mostly now corrected, suggest to me that Guido didn't do his homework as diligently.

Perhaps I'm way off here but the languages seem to me to show some of their cultural heritage. Python has a blunt & minimalistic Dutchness about it while Ruby has a Japanese elegance and style.


That's about it - Python has a richer, more refined set of libraries to call on than Ruby. (Ruby also has a ton...)

Ruby is more fun, but Python is more mature. In the end, whatever gets the job done, right?


You gave a great comment. I get almost completely the opposite take on Ruby versus Python though:

There are a number of things that made me think Ruby was made by a less astute designer. The 'end' everywhere feels bad in a world where I know both "}" and a newline can accomplish the same thing but with less typing and screen clutter. I also do not like Ruby's dropping of ()'s from function calls, because it blurs the boundary between calls, non-call expressions, and control-flow keywords. I also feel Ruby encourages a certain kind of DSL creation which makes the code harder to read and maintain. I also don't think every app or library needs to export a DSL into client space. It's better if the interface it publishes still looks and feels like the programming language used.

That said, I acknowledge Ruby has good qualities. Python is not perfect, but to me it gives the better designed feeling. Actually the Lisps seem to be the most elegant and powerful. Love their regularity and power but not (yet) all the parentheth everywhereth.


Ruby certainly has some rough edges. I don't find the redundancy of end/} a big deal in practice but I do wonder what Matz was thinking when he put them both in the language. The omission of parentheses from function calls can occasionally add some confusion but it makes uniform access to properties transparent without the property hack that Python had to add to accomplish this. It seems like Python has had to add a lot of new, non-orthogonal stuff over the years to catch up to what was elegantly possible in Ruby from the beginning.

People sometimes do go overboard with DSLs in Ruby too but the big problem I have with Python code is that it all looks the same. The shape of Ruby code is usually a good indicator of what it's doing, partly thanks to DSL. Python seems to have gone just far enough in making all code look the same to defeat quick visual intuition. Why not take it a step further and get a completely regular, homoiconic syntax and all the Lisp-y benefits it brings?

To me Ruby hit the sweet spot exactly between Lisp and Perl. If I had to code in Python I wouldn't hate it though. There are far worse languages.


Ruby is incredibly flexible, whereas Python is not. Often times my idea of the way things should be done is not Guido's idea, and so Python seems a bit lacking.

I really enjoy the pretty simply object model, the message passing OO, and blocks.

An interesting point that I read somewhere the other day, from someone who codes in a lot of both: Ruby (to him) is kind of ugly, and Python is very pretty. Yet, the exact parts that make Ruby ugly allow it to have _awesome_ libraries. Python, with its rigidity, means that the core language is great, but the libraries end up being worse because of it. He uses RSpec as an example, specifically. So for him, Ruby ends up being better to use, even though he likes the Python language better.


> Ruby is incredibly flexible, whereas Python is not. Often times my idea of the way things should be done is not Guido's idea, and so Python seems a bit lacking.

Apropos: Does Ruby optimize tail calls?



Thanks, I did some searching and found this, too, after I wrote my comment.


All the popular scripting languages "do" the same things. But I prefer Ruby to Python (while still liking Python) because it feels more programmer-friendly.

    Ruby: a = %w( apple pear orange )
    Python: a = [ "apple", "pear", "orange" ]

    Ruby: a=%q/O'Malley says "Hello"/
    Python: a = "O'Malley says \"Hello\""
Python lets you use "'" or '"' as the delimiter, which solves 80% of these cases - but why not generalize the concept, as Perl and Ruby did?

In Ruby, the last expression in a function is the return value; in Python you must use the return keyword, even though there's no ambiguity.
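A trivial illustration of what that means in practice:

    # In Ruby the last expression is the return value:
    #     def inc(x); x + 1; end      # inc(1) => 2
    # In Python the same shape silently returns None:
    def inc(x):
        x + 1            # evaluated, then discarded; inc(1) is None

    def inc_fixed(x):
        return x + 1     # explicit return needed; inc_fixed(1) is 2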


While it's true that "last expression in a function" is not technically ambiguous, because of python's statement/expression distinction leaving off the return can lead to unclear code.

For example, if there's a function composed entirely of print statements, the last expression evaluated will be the expression in the last print statement executed. But, that means you have to know that 'print' is a statement and not an expression, and that the right-hand side of the print statement is in fact an expression.

In ruby, actions like prints and assignments are side-effects and always evaluated as expressions. So if puts is the last expression in a function you'll get the return value of puts (which is usually nil I believe). That's pretty simple.


    a = '''O'Malley says "Hello"'''


My usual approach to the first one in Python is:

    a = "apple pear orange".split()


Given a = [1, 2, 3, 4, 5]:

    Ruby:    a.map{|i| i+1}.reject{|i| i%3 == 0}.map{|i| i*i}
    Python:  [i*i for i in filter(lambda i: i%3 != 0, [i+1 for i in a])]
Please ignore the fact that the whole operation can be simplified mathematically - nontrivial map-grep-map operations do occur.

I find the Ruby version clearer because it proceeds from left to right like a shell pipeline.


I see your point, but I think your Python's not terribly idiomatic and that's a big part of the problem.

This is easier to read and understand, only goes through the list twice, and loses nothing in terms of power:

    [j*j for j in [i+1 for i in a] if j%3 != 0]
(And for any given operation, there's very possibly a cleaner way to abstract out the inner list comprehension, which would again make it a lot nicer.)

In general, I don't see much of a reason to use filter/map/etc in Python: weak lambdas mean they're not terribly powerful. List/sequence comprehensions can do everything they can do with cleaner syntax and/or fewer operations.

This, I think, is actually at the core of this whole discussion. The Ruby/bash approach makes sense if you're used to working with sequences like Ruby/bash do. The Python approach is more natural to me though, because I've written a lot of Python. (And having spent the last year writing a lot of Ruby, I still find the Python approach cleaner/easier to understand at a glance.)


  [(i+1)**2 for i in a if i%3!=2]
Goes through the list once, and reads like set notation! (and does i+1 once, which is what I assume you were going for with the separate [i+1 for i in a])


I kept the i+1 separate because the parent noted that this was a trivial example and wanted to make a point about the more general map-filter-map operation. I wanted to make the point that you can do the map-filter-map in Python more succinctly and more efficiently without sacrificing any power or flexibility.

In this example, yes, it's easy to solve the problem with only one iteration through the list. In a more complicated example (especially when the first map step is expensive and you really only want to do it once) this kind of solution may not work.


In Perl 6:

  @a.map({ $_ + 1 }).grep({ $_ % 3 != 0 }).map({ $_ * $_ })
Alternately, using the feed operator ==>:

  @a ==> map { $_ + 1 } ==> grep { $_ % 3 != 0 } ==> map { $_ * $_ }
And, if your code being readable to people who've never used Perl 6 and know nothing about it isn't a concern, you can use the much more concise Whatever star notation for closures:

  @a ==> map * + 1 ==> grep * % 3 != 0 ==> map { $_ * $_ }
Using Whatever for the last one would neither work nor be readable (although you could do "* ** 2", but that's still suboptimal for readability), since the last "*" in "* * *" would be interpreted as a second parameter.

If the preceding notations for anonymous functions have been too implicit for you, then you can use pointy blocks(or just plain anonymous function declarations).

  @a ==> map -> $i { $i + 1 } ==> grep -> $i { $i % 3 != 0 } ==> map -> $i { $i * $i }
Or, you could use hyper operators instead of the maps:

  ((@a >>+>> 1).grep: {$_ % 3 != 0}) >>**>> 2

The hyper-operators extend the scalar operators "+" and "*" to lists. Since the ">>" point to their operands on the right side, that operand will be extended to be as long as the other side, allowing you to use a single scalar as the right-side element.


I think this is a great example why Perl 6 highly intrigues me but at the same time scares the crap out of me. Thanks for sharing.


I find the Ruby version clearer because it proceeds from left to right like a shell pipeline.

This is an excellent point. Ruby code can be some of the cleanest, most readable functional code there is. I have these kinds of chained pipelines in my Ruby code all over the place and they're much easier for me to follow than similar expressions in Lisp or Haskell and than Python's comprehensions.


Oh, I want to play! Here's Perl:

    map { $_*$_ } grep { $_%3 != 0 } map { $_+1 } @a
Interestingly, in this particular case Perl seems to have way less syntax than ruby or python, which I find rather ironic. It does have to be read backwards though, because of the syntax of map and grep...


I am a day-to-day C programmer. I am surprised to hear Python did not fit well after C for you. In my view, Python is like the easy scripting version of C. I first learned Tcl/Tk, then Perl, then looked at bash scripting (my god, horrible), and saw that those three have some obscure syntax, with special meanings for $'s and @'s and hyphenated flags everywhere, with no resemblance to C.

Then, looking into Python, I saw the same philosophy as C. A struct or an array? Use a nice dictionary, set, tuple or list, nicely kept under the [ ] brackets. Throw them into for loops without the iteration integer. It's just all the same things I do in C but easier, with no cryptic idioms. That's why I love it.
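For example, a trivial made-up sketch of what would be a struct array plus an index variable in C:

    # a "struct" is just a dict; an "array" of them is just a list
    points = [{"x": 1, "y": 2}, {"x": 3, "y": 4}, {"x": 5, "y": 6}]

    # no iteration integer needed
    for p in points:
        print(p["x"] + p["y"])     # 3, 7, 11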

I have 2 choices of languages for all my projects: C and Python


What do you have against Bourne shell scripting? I think it's probably the most under-rated language/programming paradigm I know. I really don't understand all the hate. Also, what didn't you like about Tcl? I find it much more in line with the C world-view than Python is; its C interface is a lot cleaner than Python's as well.

My language-choice algorithm is:

sh + unix DSLs -> Tcl/Tcl+C -> SML/Haskell/Scheme -> C -> *

Where '*' is whatever language has a library I need, or a language that I'm required to use.


Agreed. And interesting you said C and Python.

A few years ago I was thinking about what would be the minimum set of languages that would be good to know in order to be able to do almost anything you wanted, and do it well, and to always be employable. I ruled out 1 language alone because of the C phenomenon: there's a whole class of software where today C is the best choice, and a whole different set where C is a bad choice. Therefore, the ideal set was 2+ with C being one. I ended up settling on C, Java and Python. Java for enterprise/BigDumbCompany work, but also because there's a lot of great tooling and innovation happening in the Java space more so than other langs. Python for the role of glue and prototyping and high level no boilerplate bureaucracy but where you could still trust every developer using it. Ruby could probably fill that role decently instead, except be a little better or worse in diff areas.

C, Java, Python

Then I found out that Google had picked these three as well. Nice!


No, Google picked C++, not C. Very different philosophy!


Thanks, looks like you may be right. I just found a post by Steve Yegge in 2007 where he said it was C++, Java, Python and JavaScript. I'm pretty sure I found references elsewhere to C/Java/Python though. Close enough.

I had forgotten about JavaScript, I guess I considered that one a no-brainer on the browser side. However, the line blurs between C and C++, especially if you use the extra stuff only tactically as needed rather than a hard-core-always-OO-and-templates approach.


Btw I considered C++ for the C slot. But decided if I really wanted 1st class OO support in a language that Java or Python could do that acceptably, and it wasn't worth the price of dealing with what I felt were C++'s design imperfections. So it fell back to:

* C for (somewhat) bare metal programming (OS, embedded)

* Java for high-level with rubber edges and safety wheels, esp huge codebases in enterprise IT shops with Joe Blub coders

* Python for high-level agile w/solo or small elite teams (calling out to C bits where beneficial)


I started off hating Python as well. I could not stand a language that didn't terminate statements with semicolons! :) And Python just seemed a lot like Perl; when using both languages for large projects my code just seemed to degenerate into "hacky."

But I've grown to like the simplicity of Python. One thing I don't like is its threads implementation, however. Because of the GIL, multithreading (GUI + background thread) just doesn't work right. Maybe I'm using it wrong?


The Right Way for a long time was to run multiple processes. If you absolutely needed threads, then the Other Way That Is Less Correct involved launching threads written in C from a C extension.

The Python community seems to be warming to the importance of threads. There are several efforts breaking their swords against the GIL nowadays (Unladen Swallow being the most prominent), and Antoine Pitrou wrote a new GIL to help improve contention [1]. With any luck, by the end of the Python Feature Freeze, there will be some significant headway towards efficient handling of threads within Python, but I'm not holding my breath yet ;)

[1] http://www.dabeaz.com/python/NewGIL.pdf
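For reference, the multiple-processes route mentioned above usually means the multiprocessing module (in the standard library since 2.6); a minimal sketch for CPU-bound work:

    from multiprocessing import Pool

    def crunch(n):
        # CPU-bound work; each worker is a separate process with its own
        # interpreter, so the GIL never serializes them against each other
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        pool = Pool(processes=4)
        print(pool.map(crunch, [10 ** 5, 10 ** 6, 10 ** 7]))
        pool.close()
        pool.join()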


If someone were to submit a patch nuking the GIL, I doubt it would be a language change, ergo, the language moratorium (a block on new syntax) does not apply.


I agree. To clarify, I'm hoping that because they're not focusing on language enhancements, they might make some major headway towards nuking the GIL


> I started off hating Python as well. I could not stand a language that didn't terminate statements with semicolons! :)

I have exactly the opposite heuristic for syntax. The only thing worse than semicolons and curly braces is `begin' and `end'.


> I like C++ too

I've never met a hardcore C++ guy who claims to like C++.

> by hardcore I mean the kind of people who wrote Exaile

I don't think Python was meant for writing desktop music players. Stuff like Zope is hardcore.


I never said I was a hardcore C++ guy. :)

I'm more like a hardcore C guy.

I do know a fair bit of C++ though. Perhaps C++ would've been better if compatibility with C was not kept as a fundamental goal. Just a personal opinion.

Qt does an amazing job of taming C++. I used their framework for some GUI design and loved it.


I think what he is saying is, if you program enough C++, you'll grow to dislike it.


The biggest revelation Python gave me is that people's thought processes/problem-solving processes/whatever you want to call it (I call it brains) work differently. And importantly, people's brains are wired to work really, really well with specific language qualities (dynamic, functional, boilerplate-ish) as manifested in specific languages, but really poorly with other qualities/languages.

So, no, you are not a bad person. If you've given Python an honest attempt (took me a year to do that; I had a whitespace hangup that was hard to get past), then you aren't missing anything either. Python just isn't the language for you. Move on, be happy.


You don't have to like Python. I don't.

Of course, I don't have any pretentions of being a "good person" either.


I don't much like it either.

Came from C++, learned Perl, got a job writing Fortran, learned Lisp. When hating on Perl came into vogue on the internet, I said to myself, okay, I will learn Python as a Perl replacement. It is a decent Perl replacement, what with all the libraries and the fact that it is shebang-able.

Now that I've dispensed with background, what I don't like about Python:

1. Assignments don't return values. I hate this. An example. I would like to write:

    if (m = re.match("woo, regular expression")):
        do some stuff with match object
Instead

    m = re.match("woo, regular expression")
    if m:
        do some stuff with match object
This obscures the fact that I only care about this match object if it evaluates to true.

2. List comprehensions kind of suck. [(a, b) for a in as for b in bs] is a cartesian product and not a zip? Really?

3. Loops don't help this either: There's no equivalent of the Lisp (loop for x in xs for y in ys do (stuff x y)). I have to explicitly loop over the index. This pisses me off.

4. I feel that I am generally shoehorned into creating and assigning more objects than I would like.

Honestly it seems kinda crippled in comparison to Lisp, which, by the way, runs faster.

I would go back to Perl if multidimensional arrays were not so godawful. Perhaps I should try Ruby.


Perhaps `zip' could help address some of your points?

> 2. List comprehensions kind of suck. [(a, b) for a in as for b in bs] is a cartesian product and not a zip? Really?

Strange. I would say it sucked if it was the other way round. You can use the `zip' function, if you want a zip.

  zip (as, bs)
> 3. Loops don't help this either: There's no equivalent of the Lisp (loop for x in xs for y in ys do (stuff x y)). I have to explicitly loop over the index. This pisses me off.

Try the functions `enumerate' and `zip'. E.g.

  for x,y in zip(xs, ys): stuff (x,y)
If you need an explicit index for your stuff:

  for (i, (x, y)) in enumerate(zip(xs, ys)): stuff(i, x, y)
> 4. I feel that I am generally shoehorned into creating and assigning more objects than I would like.

I agree, only the other way around. Python is much too destructive and relies on in-place updating. E.g. recently we got `sorted' that doesn't sort in-place but returns a sorted list --- but the `shuffle' function still works only in-place. Also dicts are destructive. There should be a persistent (i.e. non-ephemeral) data structure for key-value-mappings in the standard library.
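A small sketch of the asymmetry (using random.sample as the usual non-destructive workaround):

    import random

    a = [3, 1, 2]
    b = sorted(a)                  # new list; a is untouched
    a.sort()                       # in-place counterpart

    c = [1, 2, 3]
    random.shuffle(c)              # in-place only; there is no shuffled()
    d = random.sample(c, len(c))   # common workaround for a non-destructive shuffle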


I was in the same boat as you before, but it is now my "go to" language. It happened when I started using django. Just humor yourself and go through this tutorial:

http://docs.djangoproject.com/en/dev/intro/tutorial01/

Don't copy and paste the code in the tutorial and when they ask you to type something in the REPL, do it.

That's when it all started for me.


I was looking for a smooth way to ramp up my Python (very weak). Just had a little experience with it from the friendfeed api v2 on the app engine.


"My question to the hardcore python lovers is this (and mind you, by hardcore I mean the kind of people who wrote Exaile, not the kind who hack up a Python script every now and then to organize their iPod)"

I'm disqualified to answer, as I fall into your so-called second type (softcore Python lover, perhaps): people who use Python to fix things in day-to-day life.


I like Python because, IMO:

* It's the most concise, expressive language I know

* Friendly syntax: I just write what I'm thinking and it pretty much works

* Most importantly: It includes a ton of functional programming goodies (of which I make liberal use), but I can still just say x = 1 when I want to. Best of both worlds.


Do you hate scripting languages in general? Maybe you should compare python with other scripting languages.


I really dislike the term "scripting languages". While you could describe Python and Ruby this way, it does them a huge disservice, implying that they aren't capable of large scale application development. I think this term should really be reserved for Bash et al.


I read "scripting language" as a legitimate warning about tradeoffs that make it unsuitable for high-volume use, because the code is permitted to change so much runtime behavior that most known optimizations (and other kinds of static analysis!) are ruled out. For example, Twitter's Ruby to Scala migration made news, so there's value in being able to broadly say what's different between those kinds of languages.


I don't have much experience with scripting languages - I've only written some bash scripts to automate stuff (other than Python, of course).

Unless, of course, Brainf*ck counts as a "scripting language". :D


I still write most of my code in C. Could never really warm to C++ and Java was fun in the early days, but the sheer volume of API's, J2EE, et al eventually crushed my spirit.

It took me several goes to become friendly with Python. I find it excellent to try out algorithms. If I don't know how something is going to be best implemented, then writing a bunch of classes in Python and tweaking attributes and methods until the algorithm is as simple as can be helps me get there very fast.


I didn't warm to Python until v2.2. Took an article by Eric Raymond for me to take another look at it and it finally clicked. Great for trying out ideas, etc. But I still write all the heavy-duty stuff in C. It's really cool to do the heavy lifting in C and expose it as a shared library and then do the coordination stuff in Python.
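One way to do that wiring (just a sketch; libheavy.so and crunch() are hypothetical) is ctypes, which has been in the standard library since 2.5:

    # heavy.c, compiled with: gcc -shared -fPIC -o libheavy.so heavy.c
    #     int crunch(int n) { /* the heavy lifting */ return n * n; }

    from ctypes import CDLL, c_int

    lib = CDLL("./libheavy.so")      # hypothetical shared library
    lib.crunch.argtypes = [c_int]
    lib.crunch.restype = c_int

    # the coordination stays in Python
    print([lib.crunch(n) for n in (10, 100, 1000)])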


Programming languages all come with a specific mental model for how you're supposed to code in the language. That's why it's so useful to learn different types of languages so that you "pick up" those different ways of thinking. Everybody thinks differently so not every language is going to suit the tastes of every developer. You may find that Lua or Ruby are a better fit with your thinking patterns and that you are more productive using one of those as a scripting language of choice.

The power of scripting languages really comes down to a very simple thing: each line of code in a scripting language is equivalent to roughly 5 to 10 lines in a systems programming language. You're doing the same mundane programming tasks but with fewer lines and getting nearly the same performance as if you had written the code in one of the systems languages. It really won't make much difference which one you use, you'll see the same productivity gains using Perl, Ruby, Python, Lua or practically any of the others. However, the more obscure the language you choose, the fewer developers and the fewer available libraries to use.


I hate Python too, only because of its insistence on indentation and because of its verboseness relative to, say, Perl.

I'm not against the non-Perl philosophy of There Is Only One Way To Do It, but for God's sake, don't insist on indentation! I am under a personal vow that until Python lifts its indentation requirement (say, via a command-line switch), there's no way I'm using this language for my programming needs.


I've never understood this objection. Do you not indent your code anyway? If so, why should you also need to delineate scopes in another way? Python allows you to choose your indentation, it just requires consistency: something that is always required by coding standards (with good reason).

If there was a command line switch, would you use it? Why?


Python's whitespace indentation seems like a great idea until you realize that it makes things like copying & pasting code from web pages etc far more difficult and, worse, makes automatically reindenting a block of code in an editor impossible. If I come across a badly formatted block of code in Ruby or Java or C in emacs it takes me one keypress to sort it out. In Python I have to carefully, manually look at each line.

It's also fun to spend an hour looking for a bug caused by somebody inserting a tab instead of a space somewhere. We had a production crash that took us a long time to fix because somebody hotfixed a file with the wrong editor and did this.

I also don't understand how forcing people to indent code in the same way magically makes people write good code. The kind of programmer that can't or won't indent his code properly is introducing bugs far more serious than bad layout into your project.

Python's a nice language in many ways but I consider the syntax to be a bug.


It's also very annoying to have to un-indent an entire block of code during debugging just because you want to comment out the conditional statement around it...


You could do what you say - comment the conditional:

    # if x:
    if 1:         # DEBUG
        block


Or, even quicker and dirtier:

    if 1: # x:
        block


> it makes things like copying & pasting code from web pages etc far more difficult

This. I'm convinced half the reason Ruby is displacing Python is that it's actually practical to talk about Ruby code in forums without ridiculous little-known workarounds like pastebins.


I do think that it's a pretty major issue, although I also don't get the impression that Ruby is displacing Python. If anything it seems like the momentum from Rails is petering out a bit.

The Ruby community itself does strike me as more energetic though. All the activity on github etc seems a lot more impassioned to me than the coolheaded & practical world of Django et al.


I consider the indentation approach excellent because it's so clear and readable. For exactly the same reason, I prefer YAML over XML, because the result is something more readable, something whose structure is understandable at a glance.


I'm willing to forgive the creators of Python if they didn't optimize their syntax for copying-and-pasting code.

Hotfixing is a naturally perilous process regardless of language. I've spent hours tracking down untested hotfixes in Java.

You're also probably less likely to come across a badly formatted block of python code because that would generally result in a bug. Much like misplaced brackets in Java/C/C++ would result in a bug.

It doesn't make the code magically better. It makes it less redundant. Indentation should already be there, why do you need curly brackets?


I'm willing to forgive the creators of Python if they didn't optimize their syntax for copying-and-pasting code.

Copying and pasting is also known as refactoring. A syntax that makes that much harder than it needs to be is mis-optimized, IMO.


But code is read far more than refactored, so I think it's an okay tradeoff. (Plus what editors are ya'll using where changing the indentation of something is difficult?)


I do indent my code, but I prefer to auto-indent. Using indentation to determine if a line of code falls inside or outside some logic block requires a lot more effort for me. Would much rather just use curly braces. Curly braces also make copying and pasting code blocks much easier.


I use auto indent on python all day. Not having to perform secondary bracket mark up is great.


You don't, because auto indent for python doesn't and can't exist.


You say this, and get points, but I'm not sure what you mean. I've written Python and gotten each new line to automatically indent to the proper point, with a functional backspace that unindents the appropriate amount. I've then copy-pasted blocks with several different indentation levels that were automatically adjusted to the surrounding code, with the possible exception of an error corrected by a single keypress.

To my knowledge, this is no worse than the expectation for a braced language, but much better than the worst case for a braced language.

If you mean something simpler than I've taken you to mean, e.g., the conversion of a whitespaceless C program into a whitespaced one, I concede you are mostly correct (';' still works in Python), but I must admit I would still not yet see your point.


If you write code like this:

  if x:
      # do stuff
  # do other stuff
Your indenter will not know where to put the third line.

In languages whose blocks are closed using nonwhitespace the problem doesn't exist. Your indenter just has to push everything as far to the right as it can, and problem solved.

It's nice that your editor gets copy and paste right, but it doesn't and indeed can't decide how to indent your code, something that works in every other language I've dealt with.


Why does that matter?

In brace language:

if (condition) {{ code_1 }} code_2

In Python:

if (condition):\n code_1 \n^H code_2

Unless your editor also knows when to place your close braces, and, haha, it doesn't and indeed can't, you have not even won a keystroke.


Once I've placed my closing braces, I'll never need to think about it again, unless and until I restructure that piece of code.

It's got nothing about saving keystrokes.

And now, back to working on pass-infested Python code.


Vim does pretty well. On starts of blocks (when I hit : followed by return) I get an indent. At the usual places that would end a block (return, raising an exception, break or continue) I get a dedent automatically.


Are you sure? A counterexample: My editor indents every line that I end with a colon (:). So I just type : and <enter> and have a new indent. When I want to unindent, I just type backspace to go back one tab. This is simple to implement and understand, and with python, it's really all I need.

This kind of "automatic indentation" (colon-enter to indent, a quick backspace to unindent) feels much more natural to me than a C-style indent, where the editor's cue to indent would be space-rightbracket-enter and leftbracket-enter to unindent.

True, one cannot come across a completely unindented block of Python code and ask the editor to format it correctly for them. But by definition, unindented code is not valid Python and will not run. This is not unlike C / C++ / Java code that's missing all its braces, which would not run either.


I think he means in his editor. I do the same, it works well.


Whitespace doesn't help you write code. It helps you read code, which is a lot harder than writing it, even though it seems like it should be the other way around.

The pain of keeping indentation correct is a forced ongoing investment in readability that is well worth it--at least to me.

I'm glad that haskell also has whitespace rules.


I indent my Perl code. Not everyone does, however. It's a trivial task to reformat it in an editor. I guess the only complaint I might have is why bother thinking about indentation, ever? Throw in a couple of {} and now the tools can provide additional help.


The point is, I've never met a programmer who doesn't think of indentation. I've never seen a (real) project which doesn't have indentation. So if everyone is doing it anyway, why not take advantage of it and make it a part of the language? This way programmers learn to use indentation from the start (which any real programmer uses anyway), and indentation is always correct, nothing to ever think about.


It makes it harder to correctly do operations on code as text, eg, inserting a chunk of code from elsewhere or moving it out from or under an "if" statement. I have trouble with this occasionally working in emacs at least.


This does get to be a problem with people not indenting consistently. I.e., some code using tabs and some using spaces, and then copying between them.

Technically, your editor should take care of that for you, but some of them don't do it properly (and sometimes can't do it properly), and it really sucks.

Well, everything has a downside :)


Shouldn't tools take care of that before a commit to svn, git, etc? Java has formatters, the Go language has one.

Do any exist for Python or Perl?


Take care of indentation, you mean?

1. People don't indent consistently. There are a lot of (imo, pointless) arguments about where to place the braces, etc. One of the advantages of the language forcing a style is that arguments about it are no longer necessary.

2. Why should tools be taking care of this? People indent to understand the code better. Everyone agrees that you have to do it. So what's the problem with the language forcing you to do it? I'd rather it be part of the language than an external tool enforcing it.


I indent my Perl code. Not everyone does, however. It's a trivial task to reformat it in an editor.

But in Python, you don't even have to worry about other people not indenting their code!

I guess the only complaint I might have is why bother thinking about indentation, ever? Throw in a couple of {} and now the tools can provide additional help.

When you edit Python, the tools can already help as much. The start of a block is obvious (def, if, for, etc), so tools indent for you. To dedent, just hit the right key (e.g. a backspace on an empty line in emacs) -- no harder than hitting the brace key.


You do have to worry as their Python code produces errors if they do not indent it properly!

I don't like Python either and prefer Perl. (In Python I did not like indentations, dicts, tuples, lists, and the IDLE gui.)


First, to the topic question, I'm not qualified by your criteria to answer the question, but I'll say anyway that I prefer doing video game programming in Python for 99% of the tasks that come up compared to C++. (For example, the PyGame wrapper for SDL just feels a lot better than the C++ interface, plus it adds some nice features.) My own take on mental models can be simplified with the example of strings. In Python, I can think of a string as a string, or a word, or a sentence, or an iterable list of characters. In C, I can't ever forget that it's all pointers. Python lets me think at higher abstractions more readily from looking at code. Plus I really love dynamic duck typing.

And to the parent: I really hate to be "that guy", but can you give me an example where non-indented code looks better than indented code (since all your complaint really is about is style rather than what Python can help you do)? If it's just braces you're after, Python "supports" those by just commenting them out. When I first learned Python years back I was coming from a PHP and some Java background, and my code was sometimes single-space-indented, sometimes not indented at all, and it was hideous. After I realized I should be indenting anyway, Python made a lot of sense, all these languages with braces or 'end's everywhere didn't, and I can't think of any case I've ever thought "Man, I wish I didn't have to indent here." The one thing I can say I don't exactly like about Python now is the occasional self-hell.


Sometimes I have trouble breaking looong lines of Python code into multiple lines ... like, for example, when I have method chaining. I don't like long lines of code; they make the code unreadable.

Of course, most situations can be solved by a backslash-enter, and simple statements can be separated by ; but there are places where I wished for more flexibility.

This is only about readability, and overall Python code tends to be more readable than in other languages, although I've read beautifully written code in Java or Perl or C++, and the readability really depends a lot on the programmer writing it.

I think the major problem is the lack of focus in high-school / college on such things ... quite the contrary, many teachers are usually disconnected from real-world projects and don't have this skill themselves.


Google python style guide has a very good way of dealing with long lines:

http://google-styleguide.googlecode.com/svn/trunk/pyguide.ht...

In fact at Google the maximum line length for python code is 80 characters because long lines make the code unreadable, and following the style guide basically anything can fit in a nice and readable way.
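The main trick the style guide leans on is implicit continuation inside parentheses, which also covers method chaining without backslashes; a small made-up example:

    # anything inside (), [] or {} may span lines, so a chained
    # expression can be wrapped without backslashes
    names = ["  Alice ", "bob", "  CAROL  "]

    cleaned = ", ".join(
        name.strip()
            .lower()
            .capitalize()
        for name in names
    )
    print(cleaned)      # Alice, Bob, Carol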


The indentation is not a big deal at all. You indent your code anyway, right? Python doesn't enforce a specific standard, you can indent however you want, just as long as you use the same units of indentation consistently.

The only time I've had a problem with indentation is while copy-pasting things, occasionally that can get messed up and be annoying. Everything else is fine.


The main reason I like having explicit braces and non-significant whitespace is that it allows me to use my whitespace flexibly to clarify the structure of a program.

There are some interesting ideas regarding the use of whitespace in C here:

http://www.quinapalus.com/coding.html

I don't agree with everything he has to say- for example, I think his nested for loop example would be far nicer as something like this:

  for(i=0;i<10;i++) {
  for(j=0;j<10;j++) {
  for(k=0;k<10;k++) {
  	printf("%d %d %d\n",i,j,k);
  }
  }
  }
(Not to mention more in keeping with the principle of "alignment".) Still, if nonstandard indentation can make it easier to see the patterns in your code, it benefits both development and maintenance.


Would it not be better this way:

  for (i = 0; i < 1000; i++)
      printf ("%d %d %d\n", i / 100, (i % 100) / 10, i % 10);
IMHO, such patterns (multiple nesting with empty outer loops), in most cases, are really an indication of just one real loop. Using multiple counters might also be an option.

In case the loops are doing something different, they should be indented separately. For instance, the following makes little sense:

  for (i = 0; i < 10; i++) {
  foo (i);
  for (j = 0; j < 10; j++) {
  foo (i + j);
  for (k = 0; k < 10; k++) {
  foo (i + j + k);
  }
  }
  }


I absolutely agree with your second example, but I think you're missing my point for nested loops. Perhaps this would be a better example:

Let's say I'm making a simple tile-based game that uses a two-dimensional array called "grid" to store the map. To draw that map to the screen, I might do something like the following:

  for(int x = 0; x < grid.length;    x++) {
  for(int y = 0; y < grid[0].length; y++) {
  	drawTile(grid[x][y], x * tileWidth, y * tileHeight);
  }
  }
There are other ways to refashion this as a single loop (storing everything in a 1d array and handling the dimensioning ourselves, using an iterator and giving tiles a position attribute containing their (x,y) coordinates, etc), but is any of that really more straightforward?

Again, I am not arguing for indentation anarchy, merely that there are times when it can help to bend the rules.

edit: I imagine you'd prefer something like this?

  for(int t = 0; t < grid.length * grid[0].length; t++) {
  	int x = t % grid.length;
  	int y = t / grid.length;
  	drawTile(grid[x][y], x * tileWidth, y * tileHeight);
  }


The only reason I have had lately to get annoyed at Python is when one of the guys I'm working with on a project was using an editor where the tabspacing wasn't set correctly. That meant anything that he changed/added led to the script not working. Even though this was an easy change (=G in vim), it's still annoying to think that such a minor issue would completely kill the script.

Of course I prefer Ruby, but that's a whole other can of worms :)


In C and C-like languages, you have indentation for the humans and curly brackets for the compiler. Python simply unifies this into a single convention. Both parties "see" the code in exactly the same way. If you see someone else's code, there's never any ambiguity about what they meant.


I'm glad I don't have to read your code. But I don't believe you should be downvoted. Only reformed. ;-)

BTW, in Haskell the whitespace layout is optional, but nobody writes code that way.


Before I ever encountered Python, I'd gotten pretty familiar with a wider variety of languages than C, C++, and Java.

So if you don't have to learn Python right now, maybe you should first learn Lisp, Haskell, or whatever else you want or need to learn that isn't much like C/C++/Java. I think you'll have more fun this way. :-)


I think an exhaustive answer is "mu". Focusing on the tools instead of the problem is bad.


Easy language to read, write, and maintain. Big standard library and lots of tools. Win.


If you ignore the fact that this page is a bit biased towards python, it is a pretty good intro to Python if you're coming from a C++ or Java background: http://dirtsimple.org/2004/12/python-is-not-java.html

My personal experience is that the Java-ish languages encourage you to overthink problems a bit. Instead, I suggest that you just do what comes naturally. This will actually get you pretty far, as Python doesn't have that many gotchas (at least in comparison to Java or especially C++).


[deleted]


What's good about C++ is that I know it well, so it's comfortable for me. Why someone just coming to it now would like it, I couldn't tell you.

I will say that when I came to Python I felt much lighter and faster as programmer. You forget how it can be.

And starting into Clojure recently just feels more "right". I avoided Lispiness for a long, long time, but Clojure seems to hit the proper balance for me.


Languages fit you if your tasks fit those languages. As far as I can see, as a C/C++ programmer you have probably been involved more in lower-level issues than in writing software that stands on several frameworks. Java is not a toy language, but it handles the stuff you were used to dealing with almost automatically, so it feels like one.

So what kind of programmer are you? What do you develop? These questions are essential before giving any advice.


Currently I'm a student, and the question of what kind of programmer I am is wide open. I am currently working on a project that has to do with the GC and the JIT compiler for a language runtime. I plan to use my (remaining) student years to gain as broad a programming experience as I possibly can, not because I wish to be good at everything but so that I don't miss anything out. The whole point of this post was to find out: since virtually everyone seems to find Python so wonderful and I don't, am I doing something wrong?

Secondly I don't mean to criticize Java. By calling it a toy language, I was simply referring to the fact that Java tends to make writing bad code difficult, and in doing this takes away some of the flexibility and power that you tend to associate with other languages.

I mean

while (*str++) *str = (*str) + 1;

becomes

for (int i = 0; i < sb.length (); i++) sb.setCharAt (i, sb.charAt (i) + 1); // and relevant casting

While the above piece of Java code can be improved and condensed (for instance, by calling getChars and iterating through them and then creating a new String - would be slow however), I think you get the point.


It seems HN eats up some of the asterisks, so the code above may have been mangled. But you get the idea.


Python's alright, but it made me work too hard for what it does. I tried hard to like it, but using Python for writing scripts I was constantly asking it, "why are you making me do extra work? Perl doesn't make me do half the crap that you ask of me" (e.g. regexes, quoting, string interpolation). Also, Perl has the CPAN, so I fired Python and use Perl.


Preference for languages can take into account everything from the syntax to the libraries to the community. It's not surprising when someone has strong opinions about them. The problem is when people get precious, but you seem to be straight up about it. And with C and C++ you certainly won't suffer because you aren't a Python guy.


Python people are too busy solving real-world problems quickly and getting their shit done to fall in love with Python.

Tools are just tools. And I don't mean to come across as a jerk. I love developing, but as a means to an end, not as an end in itself. Those who like the latter end up as research scholars, or Rubyists.


Disclaimer: I'm not saying this to start a holy war...

Try Ruby. It gives much more freedom than Python, not to mention countless ways of shooting oneself in the foot (which you already know from the C family ;)


If you don't like Python, it may just be an indicator that you know more about functional programming than Guido, which is hardly a feat. Read his opinions about tail-calls for a good laugh.


A programming language is like a screwdriver: the brand doesn't matter, what you do with it does!


What don't you like about Python compared to the languages you do like? It would be easier to respond to some specific issues.


Yes


yes


I don't like Python, either. There's nothing wrong with disliking Python - there's lot to dislike. There's also a lot to like, for a certain kind of person - e.g. not you and me.

It's an aesthetic and personal choice.


No.


[dead]


Let's leave the meta-memes on reddit. This is a little bit personal to be funny imo anyway.


Nope, I fucking hate python.



