I frequently run this thought experiment in my head. The world's smartest chimp is, maybe, as smart as a young child. Now imagine an alien race that is to us as we are to chimps. In the same way that we are unfathomably intelligent to chimps (so much so that they can't even grasp how wide the gap between us and them is), consider how scary it would be to encounter an alien that much smarter than humans.
I think this only works if you consider intelligence purely in relative terms. I like to think that humans have at least some "absolute" level of intelligence required to understand that another creature can be more intelligent than us, whereas chimps probably can't grasp that concept. So it'd be easier for humans, I reckon.
I'm not convinced that a race vastly more intelligent than humans at the individual level is necessarily a correspondingly more advanced race.
It would seem like the age of a civilization and the size of its population (i.e. its overall "human" capital) could mitigate a relative weakness in individual intelligence. Or perhaps another race's willingness to cooperate in large numbers. Or perhaps the motivation of civilizations to advance (e.g. a civilization that faces many existential threats may advance faster, assuming it survives relatively unscathed; or maybe some other intelligent beings would simply be more or less inclined to pursue civilizational advancement).
I wonder if the relationship between societal/technological advancement and individual intelligence is neither linear nor even logarithmic; rather, what matters is having "enough" intelligence to engage in abstract inductive/deductive reasoning (maybe the ability to do mathematics beyond basic arithmetic), and beyond that level other factors such as those mentioned above are the bigger contributors to civilizational advancement.
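To make that hypothesis concrete, here's a toy sketch in TypeScript (entirely my own illustration; the threshold, the inputs, and the functional form are all made-up assumptions, not anything established):

    // Toy model of the "threshold" hypothesis: below some level of abstract
    // reasoning ability a civilization can't compound knowledge at all; above
    // it, extra individual intelligence adds only a weak bonus, and advancement
    // is driven mostly by population, cooperation, and time. Units are arbitrary.
    function advancement(
      intelligence: number, // individual intelligence, scaled so threshold = 1
      population: number,   // size of the civilization
      cooperation: number,  // 0..1, willingness to cooperate in large numbers
      age: number,          // how long the civilization has been accumulating
      threshold = 1.0,
    ): number {
      if (intelligence < threshold) return 0; // no abstract reasoning, no compounding
      // Past the threshold, intelligence contributes only a deliberately weak
      // (logarithmic) bonus relative to the other multiplicative factors.
      const intelligenceBonus = 1 + Math.log1p(intelligence - threshold);
      return intelligenceBonus * population * cooperation * age;
    }

Under a model like this, a chimp-to-human gap in raw intelligence matters enormously near the threshold and hardly at all far above it.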
We humans have far surpassed what a single individual can achieve by breaking problems into smaller, more intellectually manageable parts, which should be particularly evident to those of us on HN working on software. It seems possible that the biological (biophysical?) limit on intelligence is far outstripped by the amount of complexity in the universe, making the civilizational utility gained from marginal increases in individual intelligence rather low (i.e. since heavy abstraction will always be needed, the size of the "abstraction chunks" is just one factor among many). On the other hand, we on HN are also well-positioned to observe the efficiency lost when systems progress to the point where no single human understands them, so I see the counterargument to my own points as well.
It would be fascinating to live in a time when the answer to this question is knowable, but I'm not getting my hopes up :)
The one where he ends with "As an educator, I personally value knowing with precision and accuracy what reaction anything that I say (or write) will instill in my audience, and I got this one wrong."?
I wonder whether something unfathomably intelligent (compared to humans) could even evolve. Are humans still evolving, or does a certain level of civilization stunt evolution by mitigating selection pressure? Perhaps there's a natural ceiling to intelligence and civilization? It's fun to think about.
As is frequent with the media and animal enthusiasts, there is a bias toward emphasizing the evidence that fits one's desires and beliefs while ignoring the evidence that does not. The evidence that this chimp could use language via manipulation of "plastic magnetic tokens that varied in size and color to represent words" is weak, and the structure of the tokens made them appear to carry meaning even when randomly selected and placed, according to several experts in the field I spoke to at a conference on the matter some years ago.
I also found the quote below to be weak evidence, especially because the second paragraph makes it seem like "choosing the correct solution" (of two options) was more of a probabilistic thing. For one researcher she usually chose one thing, and for another researcher she typically chose another. Does that mean she was solving the problem for her preferred person, and does that prove theory of mind? Seems weak.
"To have a theory of mind is to be able to attribute purpose, intention, beliefs, desires, and other attitudes to both oneself and another person or animal. In order to test whether Sarah could understand that people had thoughts that differed from her thoughts, she was presented with short video tapes where a human actor in a cage was trying to perform a task, like trying to get some bananas that were inaccessible. After watching the video Sarah was shown two pictures, one that would allow the actor to reach his goal (a box) the other not (a key). She successfully solved the problems for the actor.
But there was some concern that she was putting herself into the position of the actors, which would be a pretty exciting cognitive feat on its own, but wouldn’t show that she attributed attitudes to the actors. So she was presented with more videos, one in which the actor was her favorite caretaker and another in which the actor was someone she didn’t really like. Sarah selected the right responses more often for the actor she liked, and the wrong responses for the actor she didn’t much care for."
Agreed, it seems like there's an equally good argument for the opposite conclusion (that this experiment makes a theory of mind less likely than we would have thought). Something like: she's good at mindlessly mimicking things, and pays more attention to the video when it's the trainer she likes.
When did nytimes start guessing whether I'm using private browsing? (not to mention they get it wrong, although I can guess why they might think so)
Luckily blocking all 1st party scripts fixes it, and makes the page a lot faster to boot.
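For what it's worth, the guessing typically happens in the page's own JavaScript, which is why blocking first-party scripts kills it. A common technique (a sketch of the general approach; I have no idea what nytimes actually runs) is to probe the storage quota, which Chrome has historically reported as much smaller in incognito windows:

    // Rough sketch of a common private-browsing heuristic (not NYT's actual code).
    // Incognito/private windows have historically reported a much smaller storage
    // quota than a normal profile, so pages infer "private browsing" from it.
    async function looksLikePrivateBrowsing(): Promise<boolean> {
      if (!navigator.storage?.estimate) {
        return false; // API unavailable: no signal either way
      }
      const { quota } = await navigator.storage.estimate();
      // ~120 MB is a commonly cited rule-of-thumb cutoff, not a spec value.
      return quota !== undefined && quota < 120 * 1024 * 1024;
    }

A heuristic like this would also explain the misfires: anything that shrinks the reported quota (a nearly full disk, aggressive privacy settings) looks like a private window.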