Ask HN: What scientific phenomenon do you wish someone would explain better?
607 points by qqqqquinnnnn on April 26, 2020 | 813 comments
I've been studying viruses lately, and have found that the line between virus/exosome/self is much more blurry than I realized. But, given the niche interest in the subject, most articles are not written with an overview in mind.

What sorts of topics make you feel this way?




Quantum Computers. Not like I'm five, but like I'm a software engineer who has a pretty decent understanding of how a classical turing machine works. I can't tell you how many times I've heard someone say "qubits are like bits except they don't have to be just 1 or 0" without providing any coherent explanation of how that's useful. I've also heard that they can try every possible solution to a problem. What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse. I guess what I want is some pseudo code for quantum software.


I recommend Computerphile's videos https://www.youtube.com/playlist?list=PLzg3FkRs7fcRJLgCmpy3o....

I had the same "problem" as you. What finally made me feel I sort of cracked it was those videos. The way I think of it now is: They let you do matrix multiplication. The internal state of the computer is the matrix, and the input is a vector, where each element is represented by a qubit. The elements can have any value 0 to 1, but in the output vector of the multiplication, they are collapsed into 0 or 1. You then run it many times to get statistical data on the output to be able to pinpoint the output values more closely.
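
To make that concrete, here is a tiny numpy sketch of the same picture (purely illustrative, not tied to any real quantum SDK; the gate and sample count are arbitrary choices of mine):

    import numpy as np

    # A single qubit's state is a length-2 unit vector of amplitudes:
    # |0> = [1, 0], |1> = [0, 1]; superpositions are unit combinations of them.
    state = np.array([1.0, 0.0])

    # A "gate" is a matrix; the Hadamard gate below puts the qubit into an
    # equal superposition of 0 and 1.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = H @ state                       # the matrix-vector multiply

    # Measurement collapses to 0 or 1 with probability |amplitude|^2, so you
    # run many shots and look at the statistics.
    probs = np.abs(state) ** 2
    samples = np.random.choice([0, 1], size=1000, p=probs)
    print(probs, np.bincount(samples))      # ~[0.5, 0.5] and roughly 500/500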


I don't know if it's accurate (because I never understand anything I read about it) but this is the most concise and clear explanation I've read on this subject to date. Thank you!


It's accurate. QC is just linear algebra with complex numbers, plus probability extended to the complex domain. Why that's useful is something I'm still struggling with as well.


I'd assume it's the speed advantage, but the only problem I can think of that would require that type of exponential speed is cracking hashing algorithms which just seems destructive and counterproductive, like building a digital nuclear bomb - and from my very limited understanding that's a long ways off from being achieved.

I assume there are probably many more complex computational problems outside of my domain that QC can help with. Does anybody know of any?


Aside from Shor's, the other big one is Grover's algorithm, which deals with search in an unstructured database. More and more superpolynomial speedups have been discovered in applications of QC. A good enumeration of these is the quantum algorithm zoo.

https://quantumalgorithmzoo.org/


Grover's algorithm lets you search an unsorted list of four elements with just one "is this thing in the list?" query. Classically, of course, it requires four queries.

More precisely, given f: 2^n -> {0,1} which is guaranteed to hit 1 exactly once, Grover finds the one input which hits 1, and it does so using about 2^{n/2} queries of f; but the constants happen to line up so that when n=2, exactly one query is required.
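
If it helps, here is a small numpy sketch of exactly that n=2 case (the marked index is an arbitrary choice of mine; one oracle call plus one reflection finds it with certainty):

    import numpy as np

    N = 4                                # a 4-element list, i.e. n = 2 qubits
    marked = 2                           # the one index where f(x) = 1 (arbitrary)

    state = np.full(N, 1 / np.sqrt(N))   # start in the uniform superposition

    # One Grover iteration: the oracle flips the sign of the marked amplitude,
    # then we "invert about the mean".
    state[marked] *= -1
    state = 2 * state.mean() - state

    print(np.abs(state) ** 2)            # [0. 0. 1. 0.] -- found with one query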


Many problems require lots of huge matrix multiplications—think simulations of physical systems, i.e. weather systems or molecular interactions, or numerical optimization.


Additionally, many problems can be converted to matrix operations, like graph algorithms.

Note that matrix multiplication takes O(n^2) time with a quantum computer, but O(n^2.807) time on a classical computer.


> but O(n^2.807) time on a classical computer

Optimizing matrix multiplication for classical computers is an open research problem, and according to wikipedia there are algorithms with O(n^2.37) running time. Also according to wikipedia, it is not proven that matrix multiplication can't be done in O(n^2).


Ah yes, we may as well add NP-completeness to this thread.


I think there's no way to understand quantum computing without first understanding some linear algebra, specifically tensor products. How ten 2-dimensional spaces give rise to a 1024-dimensional space, how the Kronecker product of three 2x2 matrices is an 8x8 matrix, and so on. If you're comfortable with that, here's a simple and precise explanation of quantum computing:

1) The state of an n-qubit system is a 2^n dimensional vector of length 1. You can assume that all coordinates are real numbers, because going to complex numbers doesn't give more computational power.

2) You can initialize the vector by taking an n-bit string, interpreting it as a number k, and setting the k'th coordinate of the vector to 1 and the rest to 0.

3) You cannot read from the vector, but exactly once (destroying the vector in the process) you can use it to obtain an n-bit string. For all k, the probability of getting a string that encodes k is the square of the k'th coordinate of the vector. Since the vector has length 1, all probabilities sum to 1.

4) Between the write and the read, you can apply certain orthogonal matrices to the vector. Namely, if we interpret the 2^n dimensional space as a tensor product of n 2-dimensional spaces, then we'll count as an O(1) operation any orthogonal matrix that acts nontrivially on only O(1) of those spaces, and identity on the rest. (This is analogous to classical operations that act nontrivially on only a few bits, and identity on the rest.)

The computational power comes from the huge size of matrices described in (4). For example, if a matrix acts nontrivially on one space in the tensor product and as identity on nine others, then mathematically it's a 1024x1024 matrix consisting of 512 identical 2x2 blocks - but physically it's a simple device acting on one qubit in constant time and not even touching the other nine.
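
If pseudo-code helps, here is a bare-bones numpy sketch of exactly those four rules (real amplitudes only; the gate and qubit choices at the bottom are arbitrary examples of mine, not part of any real QC toolkit):

    import numpy as np

    def init_state(bits):
        """(2) Initialize from an n-bit string: a 2^n vector with a single 1."""
        state = np.zeros(2 ** len(bits))
        state[int(bits, 2)] = 1.0
        return state

    def apply(gate, target, n, state):
        """(4) Apply a 2x2 orthogonal matrix to one qubit: the full 2^n x 2^n
        operator is a Kronecker product of the gate with identities."""
        full = np.array([[1.0]])
        for i in range(n):
            full = np.kron(full, gate if i == target else np.eye(2))
        return full @ state

    def measure(state):
        """(3) Read once: outcome k with probability (k'th coordinate)^2."""
        return np.random.choice(len(state), p=state ** 2)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    n = 3
    state = init_state("000")             # (1)+(2): 3 qubits, an 8-dimensional vector
    state = apply(H, 0, n, state)         # superpose the first qubit
    print(format(measure(state), "03b"))  # prints "000" or "100", 50/50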


Thank you for posting such a concise description.


What about some kind of interactive simulation, kind of like playing with a graphing calculator? People tend to relate to things better by tinkering and playing with parameters to observe the impact on results. Analog experimentation is how we learned most Newtonian physics.


Several years ago, I gave a presentation on quantum computing to the Los Angeles Hacker News Meetup. The slides are at https://jimgarrison.org/quantumcomputingexplained/ . Unfortunately, there is no video recording so they are currently lacking explanations.

My goal was to explain quantum computing in a way that is mathematically precise but doesn't require one to learn linear algebra first. To do this, I implemented a quantum computer simulator in Javascript that runs in the web browser. Conceptually (in mathematical language), in each simulation I present, I've started by enumerating the computational basis of the Hilbert space (all possible states the qubits could be in) and represented the computational state by putting an arrow beside each of them, which really is a complex number. (This is similar to how Feynman explains things in his book QED.) The magnitude of the complex number is the length of the arrow, and its phase is the direction it points (encoded redundantly by its color). I've filled out each amplitude symbol with a square so that, at any given point, the probability of a measurement resulting in that outcome is proportional to the area of that square. Essentially, in this language, making a measurement makes the experimenter color blind -- only the relative areas of the amplitudes matter, and there is no way to learn phase information directly without doing a different experiment.

I could make a further document explaining along these lines if people are interested. The source is on github too: https://github.com/garrison/jsqis


Strilanc, who works on the google quantum team, has a simulator here: https://algassert.com/quirk



IBM has an amazing tool for this. Not only do they have a great simulator, but you can enter a queue to run your program on their real quantum computer:

https://quantum-computing.ibm.com/


Excellent! Thanks so much for this!


Since no one has listed it yet, please check out https://quantum.country/

It's by Andy Matuschak and Michael Nielsen, and it is excellent. Have fun!


Yes! How come this isn't higher up in the list? This is one of the best pieces of education I've ever seen. Absolutely wonderful.


This deserves to be upvoted all the way to the top!


upvoted


Very simplified explanation:

If you understand Turing Machines, you probably also understand other automata. So you probably understand nondeterministic automata [1].

A quantum computer is like a very restricted nondeterministic automaton, except that the "do several things at once" is implemented in physics. That means that, just like an NFA can be exponentially faster than a DFA, a QC can be exponentially faster than a normal computer. But the restrictions on QCs make that a lot harder to do, and so far it only works for some algorithms.

As to why quantum physics allows some kind of nondeterminism: if you look at particles as waves, instead of a single location you get a probability function that tells you "where the particle is". So a particle can be "in several places at once". In the same way, a qubit can have "several states at once".

> What I don't understand is how a programmer is supposed to determine the correct solution when their computer is out in some crazy multiverse.

Because one way to explain quantum physics is to say that the waveform can "collapse" [2] and produce a single result, at least as far as the observers are concerned. There are other interpretations of this effect, and this effect is what makes quantum physics counterintuitive and hard to understand.

[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...

[2] https://en.wikipedia.org/wiki/Wave_function_collapse


The thing I don't understand about QC is how on earth you can read values from qubits without breaking superposition.


You can't, it's a fundamental aspect of quantum mechanics (measuring any entangled system collapses it because you've forced the system into a state by measuring it).

The idea is that you structure the QC system such that the computation is done using entangled states, but when it comes to measuring the qubits (to get the result of the computation) the state is such that you'll get meaningful results. This means the quantum state at the end of the calculation would ideally be along whatever axes you're measuring, so you get the same answer 100% of the time.


OK, but this implies that you'll have to know beforehand what a result will look like. Which kind of defeats the purpose of a general-purpose computational device.


No it doesn't. You know that the result of the computation for an individual qubit will be either 0 or 1 (otherwise it would be useless -- measuring only gives you one bit of information), so you construct the system such that after the computation is done each qubit will be aligned with the |+z> or whatever axis. The key point is that you have to be clever about how you construct the system for a given QC algorithm, not that you cannot do arbitrary computations using the system.
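
As a toy illustration (numpy, nothing physical): the qubit below is in an even superposition mid-computation, but the second gate interferes the amplitudes back together, so the final measurement is deterministic.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])   # |0>
    state = H @ state              # mid-computation: [0.707, 0.707] -- 50/50 if measured now
    state = H @ state              # interference brings it back to [1, 0]

    print(np.abs(state) ** 2)      # [1, 0]: measuring now gives 0 every single time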


OK, but we're back at square one. If you can't read info from qubits without breaking the state of the whole freaking system, then what exactly is it that you're reading? Doesn't the alignment info collapse the superposition?


You do "break the state of the whole freaking system". Once you've read the output, you're done. You have to set up your initial state again and run the computation from scratch.


You design the algorithm so that it collapses the state into the right result.


You read the 'probability the qubit is one' by running multiple times and doing statistics.

I found an explanation of Shor's algorithm by my colleagues quite helpful. In my experience, math seems to be more useful here than computer science.


Here is a video of a researcher at Microsoft Quantum lecturing on this: https://www.youtube.com/watch?v=F_Riqjdh2oM "Quantum Computing for Computer Scientists".

However, even with understanding how a Quantum Computer works at its most basic level I still have difficulty understanding the more useful Quantum Algorithms:

https://en.wikipedia.org/wiki/Shor%27s_algorithm

https://en.wikipedia.org/wiki/Grover%27s_algorithm


I honestly recommend the following:

1. Pick up the standard textbook by Nielsen and Chuang, Quantum Computation and Quantum Information, and read the first two to three chapters.

2. Solve the exercises for the Q# programming language.


I found Quantum Computing Without The Physics epically helpful. https://arxiv.org/abs/1708.03684

It explains things in terms a computer scientist can understand. As in: it sets out a computational model and explores it, regardless of whether we can physically realize that machine.

Hope this helps!


Not sure if it has helped anyone else, but it did me!


I wrote a basic response, but it got longer than I thought it would and HN complained about it being too long, so here's a pastebin: https://pastebin.com/zTJA4bJh


There's an O'Reilly book on Programming Quantum Computers that explains it really well. Bonus: it was written by Batman.


IBM's Quantum Experience is great for this. It walks you through the quantum gates you can use and lets you write small programs for quantum processors.

I've found that to be the clearest way of understanding what QCs do.


Pseudo-code for quantum computers is currently linear algebra. Fortunately, most programmers have the required linear algebra to get a thorough understanding of the basics! Check this out https://quantum.country/qcvc. Fair warning, I did have to brush up on my linear algebra a bit, but it's worth it imo. Friends in the know say that when you understand this article, you understand quantum computers.


I had an aha moment with quantum computers a few months ago when reading an article that explained it as probability distributions. I don't think I have the complete understanding in my mind anymore and I wish I had saved the article, but looking into how quantum computers essentially serve as probability distribution crunching machines might help with your understanding.


So can they still do traditional deterministic(?) calculations? Or would that be somewhat akin to using machine learning to do your taxes; possible but just overkill?

I've often heard it said that Quantum Computers can crack cryptographic keys by trying all the possible inputs for a hashing algorithm or something handwavey like that. Are they just spitting out "probable" solutions then? Do you still have to try a handful of the solutions manually just to see which one works?


"Trying all possible solutions" is generally a bad metaphor for quantum computing and will confuse you. (Its more like you start out with all answers being equal probability, and get the wrong answers to somehow cancel each other out making the "right" answer have a high probability)

I am not a quantum person, but I once saw a geometric explanation of Grover's algorithm which kind of made it all make sense to me. (Grover's algorithm is the quantum algorithm you use for problems where you don't know any approach better than brute force. It can brute-force stuff in O(sqrt(n)) guesses instead of O(n) like a normal computer.) Basically, the geometric version was that you start with all possibilities being of equal probability (i.e. an even superposition of all possible states), negate the amplitude of the correct answer, then reflect the amplitudes around a line that is the mean of the amplitudes (do that about sqrt(n) times). The end result is that the correct answer has a higher probability than the other answers. I unfortunately can't find the thing where I originally saw this, but they visualized it basically as a bar graph (of the amplitudes of possible states) and it seemed much clearer to me than other explanations I have come across.
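
Here is a rough numpy rendering of that bar-graph picture, in case it helps (N and the marked index are made up; print the amplitudes each iteration to watch the bars move):

    import numpy as np

    N = 64                                  # size of the search space
    marked = 23                             # the unknown "correct" answer (arbitrary)
    amps = np.full(N, 1 / np.sqrt(N))       # even superposition: equal bars everywhere

    for _ in range(int(np.pi / 4 * np.sqrt(N))):   # about sqrt(N) iterations
        amps[marked] *= -1                  # oracle: negate the correct answer's amplitude
        amps = 2 * amps.mean() - amps       # reflect all amplitudes about their mean

    print((amps ** 2)[marked])              # ~0.997 -- the marked bar now dominates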


Here's what Scott Aaronson says in regard to "trying all the possible inputs": https://news.ycombinator.com/item?id=17425474


Yeah, they can do deterministic calculations. You just avoid ever putting the qubits in a state where measurement gives probabilistic results. It would, however, be a ridiculous use of the technology, like using a neural net to simulate an XOR.


Like probability distributions, but they don't just sum when you combine them, they interfere (probability is the square of amplitude, which can be negative).

Quantum computing is all about finding ways to hack the interference process to compute more than you otherwise would have.
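
A two-line way to see the difference (toy numbers of my own):

    import numpy as np

    # Classical: two equally likely paths to an outcome -- probabilities just add.
    print(0.5 + 0.5)                              # 1.0

    # Quantum: each path carries an amplitude, which can be negative, so two
    # paths to a "wrong" outcome can cancel before you ever square them.
    amplitude = 1 / np.sqrt(2) + (-1 / np.sqrt(2))
    print(abs(amplitude) ** 2)                    # 0.0 -- destructive interference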


You're in luck, my friend. Perl has had a quantum computing module since the late 90s:

https://metacpan.org/pod/Quantum::Superpositions

As far as I can tell this one still outperforms all existing "hardware implementations".


The first 'meetup' I went to some 20 years ago was on this module!


Nobody understands quantum mechanics. Lots of people know how to apply it. I don't think quantum technology is going to go anywhere until physicists revamp the theory & the pedagogy in a way that makes it comprehensible.

I say this as someone who passed 2 semesters of graduate QM.


> I say this as someone who passed 2 semesters of graduate QM.

That's funny, because my EE math concentration was on advanced calculus. I took two semesters of a-calc and got A's, but I only know how to compute a Jacobian and apply it, not its origin story. It's a very weird feeling to understand the motions but not the ... depth?


This podcast episode has an amazing explanation by one of the top researchers in the field: https://www.youtube.com/watch?v=uX5t8EivCaM

The basic idea is that by making the amplitudes of the qubits destructively interfere with each other in certain ways, you can eliminate all of the wrong answers to the question you're trying to answer.


Not quantum computers per se, but I had my aha moment when I saw quantum computing as "linear algebra with probability coefficients". It's mostly working on superpositioned qubits while doing the calculation and then making them (with linear transformations) as likely as possible to collapse on the solution (when "measured").


I looked briefly at QCs and I understood them as kind of a machine that doesn't try every possible solution. It is in every possible solution, and then you can somehow "lock it" in the state that is the solution to your problem. Kind of like how NP problems are hard to solve but easy to verify.


I think because it has fewer applications for traditional "software" and more applications for efficiency of embedded systems.

But it could have some bleeding edge new applications from the TCP/IP space for urgent point, new methods for cryptography, or speeding up algorithms for searching. ¯\_(ツ)_/¯


I'm not an expert on quantum computers, but I'm not aware of any applications in the embedded space.

Generally quantum computers are good for three things

* factoring numbers (and other highly related order-finding problems). RIP RSA, but not that applicable outside of crypto.

* unstructured search (brute-forcing a problem in only O(sqrt(n)) guesses instead of an average of n/2 guesses). Certainly useful... but it's not a big enough speedup to be earth-shattering.

* simulating various quantum systems (so scientists can do experiments easier). Probably by far the most useful/practical application in the near/medium term.

There's not a whole lot else they are good for (that we know of, yet)


A qubit is a physical random generator that quickly oscillates between 0 and 1. It does that so quickly that scientists must operate with probabilities to perform calculations. They create analog quantum computers, where right solutions are much more probable than wrong ones, let the system figure them out, then sample solutions periodically.


There is a comic (maybe SMBC) that covers the whole commonly false belief surrounding qubits.


https://www.smbc-comics.com/comic/the-talk-3

The basic gist I get is that quantum computing, for a very specific set of problems, like optimization, lets you search the space more efficiently. With quantum mechanics you can associate computations with positive or negative probability amplitudes. With the right design, you cause multiple paths to incorrect answers to have opposite amplitudes, so that interference causes them to cancel out and not actually happen to begin with. That's just my reading of the comic over and over, though.


Along those same lines, I once heard a description that went something like: "Imagine a Jenga tower so precisely balanced that, when it falls over, it spells out the answer."


I found the comic not very good either. Kid suddenly blurts "Wait a minute, that means a qubit corresponds to a unit vector in 2 dimensional Hilbert space" Yeah.


I think you have to take that sarcastically like when she mentions that it's not like the classical probability taught in preschool.


Not quite what I meant. I think the comic is excellent. In case it's not painfully obvious, the joke is equating lay people's poor understanding of quantum mechanics to a child's misunderstanding of sex. It's not trying to dumb down the concept of quantum computers at all, it's just trying to point out how incorrect the dumbed-down pop-science simplifications of it are. The 'qubits are bits with a range of value between 0 and 1 that use quantum magic to test all possibilities at once' framing is utter fiction.


Well, my point is that the GP is asking for good material for explaining the subject, and this comic is not it. I found it only irritating, tbh.


Short answer: there isn't an easy answer. Yet. (Give QC another 50 years).

Proof? Just look at all the replies you got: each one is dozens of pages of complex (imaginary) math, control theory, and statistics.

The hardest part of QC is exactly what you described: how to extract the answer. There is no algorithm, per se. You build the system to solve the problem.

This is why QC is not a general purpose strategy: a quantum computer won't run Ubuntu, but it will be one superfast prime factoring coprocessor, for example (or pathfinder, or root solver). You literally have to build an entire machine to solve just one problem, like factoring.

Look at Shor's algorithm: it has a classical part and then a QC "coprocessor" part (think of that like an FPU looking up a transcendental from a ROM: it appears the FPU is computing sin(), but it is not, it is doing a lookup... just an analogy). The entire QC side is custom built just to do this one task:

https://en.wikipedia.org/wiki/Shor%27s_algorithm

In this example he factors 15 into 5x3, and the QC part requires FFTs and Tensor math. Oy!
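
To make that division of labor concrete, here is a simplified sketch of the classical shell in Python, with the order-finding step done by brute force -- that loop is the part the quantum "coprocessor" (the FFT/tensor machinery) is built to replace. It's only an illustration under that assumption, not the real quantum routine.

    from math import gcd
    from random import randrange

    def find_order(a, n):
        # Smallest r with a^r = 1 (mod n). Brute force here; this is the step
        # Shor's algorithm hands off to the quantum Fourier transform.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor(n):
        while True:
            a = randrange(2, n)
            if gcd(a, n) > 1:                 # lucky guess already shares a factor
                return gcd(a, n), n // gcd(a, n)
            r = find_order(a, n)              # <-- the quantum part, classically faked
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                p = gcd(pow(a, r // 2, n) - 1, n)
                if 1 < p < n:
                    return p, n // p          # otherwise loop and try another a

    print(factor(15))                          # (3, 5) or (5, 3)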

Like I said, it will take decades for this to become easier to explain.

For fun, look at the gates we're dealing with, like "square root of not": https://en.wikipedia.org/wiki/Quantum_logic_gate


This feels like the high-level missing piece in my understanding of its use. Do you know any resources that expand on QC’s effective potential more from this point of view?


IMO, one effective potential application of a QC is secure encrypted communications. There is a research project named QUESS (https://en.wikipedia.org/wiki/Quantum_Experiments_at_Space_S...).

This project involves a minisatellite (capable of generating entangled photons in space) to establish a space platform with a long-distance satellite-to-ground quantum channel, and to carry out a series of tests of fundamental quantum principles and protocols in space at large scale.


Sorry, I do not.


All good. I appreciate the perspective you’ve given!


In short, I'd say you need to understand the underlying mathematics to intuitively understand the operations that underpin the algorithms. And since this is quantum mechanics... there's no real ELI5 version that can give you any useful understanding.


Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. But what are the consequences? What is the thought process that central bankers have gone through to make these decisions?

Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc.) and why not others (e.g. stocks, or putting money into startups)? And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again (just like it's been created)? What happens if a debtor defaults on its debt? Does that money then just stay in the economy, impossible to drain out? What is the general expectation of the central banks? What percentage of the debt is expected to default and how much is expected to be paid back?

And specifically in the case of central banks buying govt. debt: Are central banks considered "easier" creditors than the public? What would happen if a country defaults on a loan given by a central bank? Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?


The consequences are supposed to be inflation. Instead, we seem to only get asset inflation (houses, stocks) and not consumer goods inflation (flour). From a former econ prof who I spoke to recently: the central bankers aren't thinking about inequality or inflation right now, they're just trying to avoid the apocalypse.

The liquidity injected is supposed to be taken out later, thus removing the inflationary distortion. Whether it will or not is anyone's guess. 2008's injections have yet to be taken out.

Central banks are easier creditors because, while autonomous, they are the same country as the government! So it's technically like owing yourself money. A central bank that cooperates with the debtor country (itself) would never force a default, and is thus never an acute problem. Of course, infinite money printing should lead to dangerous inflation.


> the central bankers aren't thinking about inequality or inflation right now, they're just trying to avoid the apocalypse.

The bad news is that by ignoring inequality, they may be just causing it.


> Instead, we seem to only get asset inflation (houses, stocks) and not consumer goods inflation (flour).

That doesn't sound surprising when all that injected money goes directly to banks instead of individuals.


I have done plenty of serious reading on economics but as far as an approachable start, I don't think we can do better than this: https://economixcomix.com/

If you or a friend wants a crash course on econ, check it out.


One approach is reductio ad absurdum:

If Central Banks can create money without negative effects, then

- why tax people?

- why even work? Can't we just print enough money for everyone and live happily ever after?

I realize these questions are quite provocative, and answering them only explains whether it will work, not how or when it will fail.


> - why tax people?

Printing money is actually more or less equivalent to a tax, because it reduces the value of the existing money supply.

> - Can't we just print enough money for everyone and live happily ever after?

No, because printing money redistributes wealth, it doesn't create it.
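
A toy back-of-envelope example of why it behaves like a tax (all numbers invented):

    # One fixed basket of goods; in this toy model prices eventually rise in
    # proportion to the money supply.
    money_supply = 1_000_000
    my_savings = 10_000
    basket_price = 100

    new_supply = money_supply * 1.25          # central bank "prints" 25% more
    new_price = basket_price * (new_supply / money_supply)

    print(my_savings / basket_price)          # 100.0 baskets before
    print(my_savings / new_price)             # 80.0 baskets after: a ~20% "tax" on cash holders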


Not just any tax, but a wealth tax affecting anybody who has US dollars, regardless of their citizenship or location.


That's a really good point, and I never thought about it this way. It just makes me think how powerful it is for the US that the world relies on dollars as an international currency.


Yes! Exactly. This is what I tried to explain in another comment.


The gold standard ended in 1975. Since then, we have "virtual money", which is effectively made out of thin air at the time a bank lends it to someone. Once they started this system there is probably no way back. But what effects has that had, and are they all positive? I don't know. Some say it helps prevent the situation where the rich would buy up the whole gold supply -- banks can make more money for others. But the rich can use stock markets and funds too.

And look at one thing: the Apollo program started in 1961 and completed in 1975. I was quite shocked to see from the documentaries what technology they developed and had during the Apollo era, what the US built during the cold war. UNIX development started in 1970, right? So many awesome airplanes and technology around. It looks like what we have today is mostly (a great) improvement on inventions developed before or around the end of Apollo and the end of the gold standard.

Now that money is virtual, it looks like there won't be any major stock market crashes, as they can intervene relatively easily. But why did the space program basically die? NASA has just been talking about SLS for many years... Is it because politicians or bankers don't want it to continue? They could print money for it in minutes, couldn't they? Or would it be far more expensive than we think? Would printing so much money skyrocket inflation and bring the whole economy to its knees? Isn't the "real value of work" by chance now much higher than what inflation suggests?

I read here about a year ago that some smart people are not working in scientific R&D but instead making AI to improve marketing for online e-shops selling products people don't really need. Despite many startups claiming breakthroughs daily in the news, I don't feel it that way. Those are small pieces, one by one. It looks to me like big human progress has stagnated somehow. Maybe the smart people are not motivated enough to do the real big things, or maybe it is because there is no need to compete with Russia anymore.

But hey, there have been talks about helicopter money recently -- giving people money for no reason. Does that look like a great way to improve things? We live in peace -- thanks for that -- but it seems to me that somehow there is no motivation to do big things anymore. It seems better to just pour the money into the stock market, consume goods and services, watch Netflix and play games. Consume gas and limited Earth resources with virtually unlimited paper money. Taxation probably doesn't make sense anymore; it is there just for historical reasons, from the time bank notes were backed by gold.


I agree this is a good starting point for understanding the financial system! For anyone with a programming background I recommend thinking of the whole financial/economic/political system as an algorithm for distributed decision making: who does what, what physical resources get committed to what projects, etc. You can then start to figure out how each component (money, banks, central banks, government, stock market, derivatives market, limited liability, interest, inflation, etc) serves to guide decisions.

The main purpose of a central bank imo is to keep money creation at arms length from government, so a rubbish government can't fiddle with the financial system too much.


- why tax people?

So that common services (eg. healthcare in Canada, education everywhere, roads) can be collectively paid for.

- why even work? Can't we just print enough money for everyone and live happily ever after?

Because there wouldn't be enough resources. (Fiat) currency is just a medium of exchange for real stuff. Instead of growing wheat and trading it for your wooden furniture, the state provides a medium of exchange so we can both just transact in money. eg. when I don't need more furniture but you still need wheat.

Money =/= wealth


Another thing to realise, along these lines, is that economic theories try to explain the world's economy, not dictate it.

I would say that some mathematicians went into economics to win Nobel prizes (which they did win), and I guess they would probably be quick to point this out as well.


Economics is a branch of moral philosophy and very consciously uses persuasive rhetoric to dictate policy. One of the tools used is hand-waving hidden inside differential equations to produce models that would be useless in any other discipline.

I don't think I've ever seen a mainstream economic prediction that was actually correct.

It's not hard to understand why. Reducing all the sectors of a complex economy to crayon-drawn measures like "inflation" and "unemployment" - which aren't even measured with any consistency - is like trying to predict the weather in the Bay Area using a single weather station for the entire continental US, which is conveniently located on the wall of a cow shed in Kansas.


There are two branches of economics. One of them is a science, the other is (politically, economically, and culturally) successful.


It's possible that they can create money up to a point with positive effects and beyond that point the effects are negative.


"America Inc" (somewhat dated, by I believe Mary Meeker) has one view showing the USA on a "GAAP" basis. It's not pretty. Leverage is too high. Profit margins are negative. "Growth" (in this case government receipts) underperforms entitlement spend. Interest is a non-trivial portion of revenue, something like 15% to 20% (caveat: even the most highly leveraged private-equity-bought companies don't spend more than 25% of revenue on amortization and interest, generally speaking).

A better but harder resource is, I believe, by Piketty (the inequality guy): from balance-sheet recessions there are usually only a few ways out. He and his co-authors go through every single known recession in every single country (obviously biased towards recent and Western). What I took away from it is that without population growth the US needs hyperinflation, to default on its debt, or to increase tax revenues sharply and/or decrease entitlements sharply. It's up to you to guess which one of those three is most likely.

But the budgetary situation is not tenable.


- Consequences of creating a trillion? Economic policy is largely driven by political necessity. During the Great Depression, FDR's policy was based on Keynesian economics. Keynes aside, FDR tried everything to alleviate the suffering of the people, because the danger was not in implementing the wrong policy; the danger was in doing nothing. Later, the "Austrian" school of monetary theory was popular, which prescribed increasing the money supply to pump up a troubled economy. During the last financial crisis, Obama's administration prescribed "Quantitative Easing", where the government bought the junk to keep the big banks solvent. The big banks were "too big to fail" and had to be rescued. The consensus of the last ten years is that "Quantitative Easing" restored the financial health of the elite and the corporations, but the middle class is still left behind.

- The assets that they buy? The idea is to keep the banking system solvent and to prevent a domino effect where the liquidation of one big bank results in a run on other banks. The big banks got into trouble because they took depositors' money and invested in junk which went belly-up. The federal government insures everyone's bank deposits; if the banks went belly-up, the FDIC would have to pay out. Better that the banks stay solvent.

- There are cases like Greece defaulting on its international loans. The EU forced Greece to agree to an austerity plan lowering Greece's payments if Greece changed its national spending, which is deeply unpopular in both the EU and Greece. But there is no real alternative. Well, the alternative is what happened to Weimar Germany after WWI: hyperinflation, economic destruction, and the longing for a savior.


You cannot really "create" money, but they do it anyway because the natural pricing mechanisms (laws of supply and demand) take a non-zero amount of time to reach a new equilibrium.

So the aim is to take advantage of this transient fluctuation and the way the ripple propagates.

But even if, in a perfect (or future) world, everybody instantly repriced all goods relative to the new amount of available currency, there is still an effect, which is how the newly "created" currency is distributed: who gets the new shiny coins? So this is equivalent (if the repricing is done) to a kind of global, instantaneous tax-and-subsidy. They tax everybody by the percentage of currency created (relative to the total existing amount), and the lucky ones receiving the fresh money are thus getting a subsidy.


I've wondered the same thing lately whenever someone here posits that defaults cause the destruction of money.

I'd love to see this properly explained, because it definitely has a counter intuitive ring to me.


It works like this: imagine we create some new Bank. Customer A deposits his life savings of 1 million hackerbucks.

Now our Bank loans out 50,000 of those hackerbucks to Customer B. It does this by crediting her account with 50,000 hackerbucks, but notice that Customer A still has 1 million in his account - so now there's 1,050,000 hackerbucks in apparent existence - we've created 50,000 hackerbucks from thin air. If Customer B withdraws the loan money to go spend it, the Bank will have 950,000 in reserves and an asset worth 50,000 (the loan). Customer B will have 50,000 in cash.

What we've actually done is increase the "M2", one of the measures of how much money is in the economy.

If Customer B either repays the loan or defaults on it, that new money disappears. In the loan repayment case, the Bank goes back to having 1,000,000 in reserves, and in the loan default case the loan asset becomes worthless and it is left with only 950,000 in reserves (the other 50,000 is out there with wherever Customer B spent it).
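
The same bookkeeping in a few lines, using the toy numbers above (this is just the comment's model, not how real bank accounting is laid out):

    # Toy balance sheet for the Bank.
    reserves = 1_000_000          # Customer A's deposit, held by the Bank
    deposits = 1_000_000          # what the Bank owes Customer A
    loans = 0
    cash_in_circulation = 0

    # The Bank lends Customer B 50,000, and B withdraws it as cash.
    loans += 50_000
    reserves -= 50_000
    cash_in_circulation += 50_000

    broad_money = deposits + cash_in_circulation
    print(broad_money)            # 1,050,000 -- 50k of "money" appeared from thin air

    # If B repays: reserves return to 1,000,000, the loan and the cash vanish,
    # and broad money falls back to 1,000,000 -- the new money is destroyed.
    # If B defaults: the 50k cash stays out in the economy, but the loan asset
    # is written off and the Bank eats the loss.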


Thank you for clarifying.

So the bank's speculative asset loses value: I struggle to see this as money being destroyed as it wasn't actually money, it was an asset with a price attached to it which has now changed. In contrast to money sat in your bank account, the price was never redeemable (you couldn't go spend it on beer) unless you used that asset to get the debtor to pay you back (or convinced someone else it was worth buying from you as a speculative asset). You might as well say money is destroyed when share prices tumble. Maybe this is the point of such arguments, to make the case that money is no different to any other asset, but we don't tend to treat it like that in reality. Or do we?


It's money in the same sense as the money that was created by the loan in the first place is money - it's not physical currency, but the insight is that the money created by financial means like this bids up prices in the same way as literally minting extra physical currency and distributing it would.

When that loan asset is written down the bank has to make up the difference from its equity - this ends up reducing the amount of loans it can write, so you get a contraction in the monetary supply.

So in the end, I guess the shorter answer is that a default destroys money in the same way that writing a loan creates it - you might well complain that no actual currency has been created or destroyed, but the argument is that it has a similar overall effect.


If Customer B repays the loan, the new money disappears but the money made in interest stays. Eventually, does a bank get to a point where it has enough real money that it can lend it out instead of increasing the money supply?


Whoever defaulted spent the money on something else, so it's still in circulation somewhere, just not in a form the original creditor can get hold of.


That would be my intuition too. But if you go down almost any "this crisis won't cause inflation because..." rabbit hole on hacker news, you should see multiple unopposed claims that defaults lead to deflation through the destruction of money.

I'm pretty ignorant in this field, and usually I've been a day or so behind the posts (missing the window to press for more information), but I feel like there's definitely some contention there.


Here's how I would think of that:

Suppose I'm a bank, and I lend you $10 to buy apple tree seedlings. You spend all $10 on seedlings as promised.

The person who sold you the seedlings has $10. You have the seedlings. I have an expectation of getting $10 in the future, presumably from your sales of apples.

Because most people repay their loans, I'm confident I'll get the $10 back, and being a bank, my business is lending money. I might treat the $10 loan as $7 on my balance sheet when I decide how much money is safe to lend out.

Then the price of apples crashes. You come to me and say, 'look, there's no way I'll make $10 selling apples in the time I promised to repay you. Best I can do is deliver you the seedlings or sell them to my neighbor for $3 and give you that'. I grumble a little, but take your deal.

The person you bought the seedlings from still has $10. Your neighbor now has the seedlings and $3 less. I now have 3 real dollars instead of 7 hypothetical dollars. In other words, 4 hypothetical dollars disappeared. When I decide how much to lend out, I'll be basing that on $3 I know I have, instead of the $10 I thought I'd probably get back. I don't lend as much money to aspiring orchardists (orchardeers?), and the price of apples rises.

Edit: This fragility is probably a major factor why some people are so against fractional reserve banking (my counting hypothetical dollars as having value) but without that hack, there's no saying I could have lent you the original $10, so it's a bit of a double-edged sword.


So in this case the business did not create as much wealth as intended (it produced apples which turned out not to be needed as much they had originally planned).

The default is a side effect of that outcome, not its cause.


Could you not see the $4 disappearing in the GP scenario?


I can see that the bank ended up with $4 less than it planned to, but I don't see that as money being destroyed. It happened because the original estimate of hypothetical dollars was wrong. (Also if the hypothetical dollars are "money" then you're double counting it: creating $10 in circulation has required creating $17 overall which strikes me as poor notation to say the least). (Also if all had gone according to plan, the extra $4 would have come from the pockets of people buying apples. That $4 is still in circulation, either in the same pockets or it got spent elsewhere).

Suppose I buy a painting. I believe it to be an original Van Gogh so I pay $10 million for it. I then find out it is fake, and worthless. Was $10 million (of money) destroyed? Of course not, I just mis-valued an asset. Suppose it then turns out to be real after all. Owing to the fascinating history of this painting it is now valued at $20 million. Was $10 million of money created (relative to the moment when I originally thought it was a Van Gogh)? No. Was $10 million of wealth created? Yes as the world now has one more thing worth $10 million in it.

Money != wealth, even in the materialist sense where wealth consists purely of goods and services. Money is a metric we use to keep track of wealth, and in general it's considered helpful if that relationship holds, so if we're trying to maintain that relation rigorously the central bank should print another $10 million (or create it by making loans) to reflect our knowledge and appreciation of the Van Gogh - if it doesn't then the existing fixed quantity of money in the system will now be representing a greater quantity of wealth, causing deflation.

As I said in my other post I am not an economist by training. If the economists want to call this thing that got created/destroyed here "money" then I guess I should let them, but I would like to hear a good reason why it makes sense to do so, and I haven't heard one. Absent of a good reason I might as well call it haddock. Or, considering the OP was asking which things that could be explained better, we could acknowledge what any good programmer knows: part of a good explanation is choosing the right names for things.


As I think of it, credit (what got destroyed) and currency (what was used to create the credit) are interchangeable - $10 of credit buys as many seedlings as $10 of currency, hence we think of them as one 'thing' - I might call 'money' the category including both, though I'm no expert in economics either and these might be nothing like the official jargon. But what does seem reliable is that destroying $4 of credit has the same effect on the price of apples as would destroying $4 of currency, though the latter is a more rare event.


The difference is that it's the liquidation process from defaults that causes deflation, not the defaulting itself. Without the liquidation process, if you assume defaulting had no consequences, that's indeed inflation - as everyone is allowed to create money without consequence.


Thanks. I think this point has made it a lot clearer.

So sure, the loaned money might still be in the system in some naive sense, but value has been destroyed in the asset price? Suddenly a lot less money buys a lot more asset and that's where we find the deflation.

If I borrow 1M for an asset in good times and can't pay it back, the creditor gets the asset and probably gets a good portion of that 1M back. If that same scenario plays out in bad times and my whole street defaults on the same asset at once, there's a resulting fire sale and far more value is destroyed (including being wiped off neighbouring, non-creditor-owned assets of the same type) than money added by leaving the loan sloshing around somewhere else in the economy.


Sure, and that's just like a net effect of a gift from the creditor to the debtor, which doesn't have any effect on the money supply.


PhD student here. Not expert on all those questions but:

> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. What is the thought process that central bankers have gone through to make these decisions?

The general consensus is that central banks should stay passive and keep prices stable. However, in periods of crisis, like the one we live in, the central bank can support the economy. In ordinary times, creating trillions would lead to inflation. But here the idea is more to save the economy in the short term, because that is always cheaper than repairing it. Central bankers agreed to create trillions so that banks do not go bankrupt like they did in 1929. By creating trillions, they also keep interest rates low for governments so that they can still borrow.

> But what are the consequences?

Some inflation. Another consequence is that investors will invest in riskier assets afterward to keep their profitability target. (Again, because lending trillions will lower the interest rates)

> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..)?

They usually buy low-risk, highly liquid assets. Putting trillions into startups is infinitely complicated for a central bank because it implies high monitoring costs, and it also takes a lot of time to create those kinds of contracts. Remember that the goal is to provide a lot of liquidity to the economy as fast as possible. There is also an academic debate about giving money directly to the general public (known as "helicopter money"), but it gets little attention from central bankers.

> And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again ?

Yep, pretty much... Apart from fiat money, money is constantly created and destroyed. It is mostly created by private banks when they grant loans. And it is destroyed when you repay it. Of course they cannot do whatever they like and create at will, but remember that "deposits DO NOT make the credits" (though in some way they define how much you could create).

> What percentage of the debt is expected to default and how much is expected to be paid back? What would happen if a country defaults on a loan given by a central bank?

Central banks buy bonds, and bonds are pretty much always paid back. And if not, the central bank will not suffer much. Cases of countries not repaying are very scarce and exceptional (I can only think of Argentina). Anyway, a country CANNOT go bankrupt like a person. And in general, comparing countries with individuals or companies is not a good idea. Countries are here pretty much forever (in a financial sense); you are not. Countries can raise taxes, individuals cannot.


>> And then, what happens if a debtor pays back its debt? Is that money consequently getting "erased" again ?

>Yep, pretty much...

I would add that if the system is fractional reserve, then it increases the proportion of the bank's reserves, allowing more money to be created. So while it's technically true that it's destroyed, you could see the next loan as its reincarnation, no..?

I didn't go into this in my response above because my vague understanding is that we're not strictly a fractional-reserve system any more, though I don't understand how.


There's also a concept of "capital deepening". Basically, while vague, when your monetary supply growth outpaces GDP, which is not hard to do with low single-digit GDP growth, you have more capital available per "unit of GDP". Therefore asset prices go up.

At least for non-private companies: asset prices go up... auctions clear at values with maximum leverage... recession... monetary stimulus... repeat.


My thought on this Modern Monetary Theory is that central banks are just creating these trillions, but they are doing it at a rate relative to each other.

I think eventually it will have bigger consequences, but it will take some time for these trillions to filter on down.

I remember one outcome of the 2008 crisis was that consumer goods like cereal boxes stayed the same size, but the bag inside the box got smaller.


Not sure I'd class it as a science, but here's my take... though I'm no expert and certainly agree this stuff could be better explained!

> Macroeconomics. Central banks are "creating" a trillion here, a trillion there, like nobody's business. But what are the consequences?

Nobody quite knows; it's still hotly contested between left wing lovers of Keynes and right wing believers in austerity.

> What is the thought process that central bankers have gone through to make these decisions?

Probably largely a political one. Central banks may be trying to fulfill a remit set by law (e.g. bank of england: keep inflation below x%) and are trying to deliver on that. (why? too much or too little inflation both cause problems, I guess we somehow reached consensus on a "sane" amount that keeps pace with genuine growth of wealth within the economy)

> Also why, exactly, are they buying the exact assets that they are buying (govt. debt, high-yield bonds, etc..) and why not others (e.g. stocks or put money into startups)?

I think this is about distributed decision making. The central bank does not have the expertise to decide which stocks or startups represent the best investments. The examples here involve lending money to government, presumably the idea being the latter is better placed to decide what to do with the money. Another example is buying assets from other banks, which are again better placed to decide which businesses/homeowners/etc represent a more sound investment as they do it on a daily basis (from a profit/loss point of view ... of course we debate whether or not that's the case on a societal level).

> What would happen if a country defaults on a loan given by a central bank?

Internally it would depend on laws and the balance of political power within the country. Between countries, depending on the currency, the country could do crazy stuff like print excessive amounts of money to repay the loan (Germany did this in the early 1920s, leading to hyperinflation) or they could, just as you say, default. The country's credit rating would then be downgraded, making it harder for them to raise credit in future.

> Would the central bank then go ahead and seize and liquidate assets of the country under a bankruptcy procedure to pay off the debt (like it would be standard procedure for individuals and companies)?

Not the bank, but the country making the loan, may first negotiate some debt relief with strings attached e.g. preferential trade agreements. Beyond that, I have no idea what precedent exists.


>Nobody quite knows

A lot of these things are kind of unknowable because they depend on future human behaviour in ways you can't really predict. A lot of George Soros's theory of reflexivity is along those lines. People think they are calculating on the basis of fundamentals, but the things that look like fundamentals are actually functions of human behaviour, so the system is inherently unstable. He's made a few bob from that.


I recommend the Planet Money podcast by NPR.


I would like to understand how cellular biology processes actually work. Like, how do all the right modules and proteins line up in the right orientation every time? Every time I watch animations, it seems like the proteins and such just magically appear when needed and disappear when not needed [0]. Sometimes it's an ultra-complex looking protein and it just magically flies over to the DNA, attaches to the correct spot, does its thing, detaches, and flies away. Yeah right! As if the protein is being flown by a pilot. How does it really work?

[0] https://youtu.be/5VefaI0LrgE


They don't. This is a pet-peeve of mine, and it's reinforced by animation after animation.

Everything is being jostled around randomly. The molecules don't have brains or seeker warheads. They can't "decide" to home in on a target.

The only mechanisms for guidance are: diffusion due to concentration gradients, movement of charged molecules due to electric fields, and molecules actually grabbing other molecules.

It's all probabilities. This conformation makes it more likely that this thing will stick to this other thing. You may have heard that genes can be turned on or off. How? DNA is literally wound on molecular spools in your cell nuclei. When the DNA is loosely wound other molecules can bump into it and transcribe it -- the gene is ON. When the DNA is tightly spooled, other molecules can't get in there and the gene is OFF for transcription. There's no binary switch, just likelihoods.

Everything is probabilistic, but the probabilities have been tuned by evolution through natural selection to deliver a system that works well enough.
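
A toy Monte Carlo of that "no binary switch, just likelihoods" point (the probabilities here are completely made up for illustration):

    import random

    # Invented numbers: per time step, the chance that something bumps into the
    # gene and transcribes it, depending on how tightly the DNA is spooled.
    P_HIT = {"loose": 0.30, "tight": 0.01}

    def transcription_events(chromatin, steps=10_000):
        return sum(random.random() < P_HIT[chromatin] for _ in range(steps))

    print(transcription_events("loose"))   # ~3000 events: the gene looks "ON"
    print(transcription_events("tight"))   # ~100 events: looks "OFF", but never exactly zero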


Even diffusion isn't some magical force guiding chemicals through the medium. It's just random movement that statistically results in the chemical being spread out. This is the same principle that the 2nd law of thermodynamics is based upon. There's nothing magic to it, it's just the statistically likely end result over many particles.
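
You can see that with a dumb random walk -- no force, no goal, just jostling, and the cloud still spreads:

    import random

    positions = [0] * 1000                 # 1000 particles all start at the same spot
    for _ in range(500):                   # 500 time steps of pure random jostling
        positions = [x + random.choice([-1, 1]) for x in positions]

    rms = (sum(x * x for x in positions) / len(positions)) ** 0.5
    print(rms)                             # ~sqrt(500) ≈ 22: it spread out with zero guidance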


Yes. It's interesting how powerful and clarifying this model of its-all-just-atoms-bumping-into-atoms is. It's interesting how many people take science courses, but don't really get this.

In the context of Covid19, I see so many people wearing PPE but failing to act as though they understand that the actual goal is to prevent this tiny virion dust from entering your orifices. Like wearing gloves and a mask, but then picking up an unclean item in the store, then using the now-unclean gloves to adjust the mask and make it unclean.

People seem to think of things as having essences or talismanic effects. Like gloves give you +2 against covid and a mask gives you +5 when it's really all about preventing those virus things from bumping into your cell things.


People understand "germs"; we don't live in a magical culture. It's not that they don't understand contamination, they just haven't thought far enough ahead when they adjust their mask.


Masks are for keeping your own particles from spreading far, not the other way around.


> Masks are for keeping your own particles from spreading far, not the other way around.

Masks are for keeping your own particles from spreading far AND for lowering the probability of virions found in the environment from entering your respiratory system.

Masks lower the probability when all other variables are held constant. If someone thinks wearing a mask grants invincibility and in turn chooses to increase their exposure to high viral load individuals or environments, they're putting themselves at risk.


> Masks are for keeping your own particles from spreading far AND for lowering the probability of virions found in the environment from entering your respiratory system.

Both of you may be correct. I think the person you responded to may not have been precise in their framing.

I suspect that you had N95 masks in mind when you wrote masks, which doesn’t negate the point of the person you responded to, if they had surgical masks in mind when they wrote masks. Surgical masks are far more common than N95 masks since they are cheaper and do not provide protection against viral particles for the wearer.


Surgical masks do provide some level of protection against virus droplets and aerosols for the wearer; they just are not as effective as N95s. Even a tea cloth or a scarf wrapped around your face will provide some level of protection to the wearer against virus particles entering their mucous membranes.


As stated, this is not the whole truth. Please stop spreading this myth. This particular myth may actually cost lives.

https://smartairfilters.com/en/blog/n95-mask-surgical-preven... https://smartairfilters.com/en/blog/coronavirus-pollution-ma...


Sorry, my comment was not very clear and is prone to misinterpretation. I'm not saying masks don't keep infection out, but rather that the point of society-wide mask adoption is more to keep unwitting spreaders from spreading so widely. It does both, but as I understand it, its main value is to attenuate sources rather than vice versa.

I'm in Taiwan where masks are ubiquitous, and have been upset reading about the slow adoption of masks in the West because it was always from a selfish perspective ("do masks protect ME?") whereas here they're worn for a communal purpose ("how do I protect others?"). How effective they are at blocking incoming infection always seemed like a big distraction to me, since it's been clear from the start that it reduces spray from spreaders talking and coughing, which alone is enough of a reason to adopt it widely.


Man, you and the other what-are-fields post just started me thinking about whether diffusion and fields are just things bumping into things. I know that at the QFT level things like the classical E-field can be expressed as interchange of mediator particles. But then QFT says it's all fields. Hmm...


QFT says it's all fields because it is. Particles simply cannot explain the conjunction of quantum mechanics with special relativity.


I am not so sure about that. When you imagine a "particle", what do you see? Do you see a collection of balls?


How do you mean?

To clarify: a "point particle" is an object with no internal structure, that is, it can be fully described by its coordinates wrt time (ignoring relativity for now). This is a concept, a model which explains many phenomena, a model on top of which you can build many theories. It does not, however, explain the conjunction of QM with special relativity.


It would be great if they showed just one animation up front of the chaotic mess that actually represents reality. They could then show the simplified version so that we can actually see what is going on.


https://www.researchgate.net/profile/Nicolas_Bellora/publica... is one example of the chaotic mess. What that shows is many RNA polymerase molecules walking up a gene. The horizontal line across the middle is DNA. The vertical tails hanging off it are RNA being built as the DNA is transcribed.

What that image drove home for me is:

1) that DNA transcription isn't something that happens rarely, or once-at-a-time. DNA is constantly being transcribed; proteins are constantly being built. The scale and rate isn't something I'd ever been taught.

2) However RNA polymerase works, it must take a hell of a lot of congestion into account. Polymerase molecules must constantly be bumping into each other.

3) How the picture would make no sense whatsoever unless you already know what the mechanism is.

I think it does make sense to start with the idealised process, as long as you follow up with messy reality.


The best programmer analogy I can think of is: imagine a system where every instruction always runs concurrently and every output influences everything with varying probabilities.


I once saw a video that purported to show the jittering for some simple chemical reaction; it was indeed very enlightening.


It's not so much "magic" as it is the sheer rate of molecular collisions in the cytosol. There are so many collisions happening that at least one of them will do what you want. Here's a back-of-the-napkin example, admittedly with many simplifications:

A tRNA molecule at body temperature travels at roughly 10 m/s. Assuming a point-sized tRNA and a stationary ribosome of radius 125 * 10^-10 m, the ray cast by the moving tRNA will collide with the ribosome when their centers are within 125 * 10^-10 m of each other. The path of the tRNA sweeps a "collidable" circle of radius 125 * 10^-10 m, for a cross-sectional area of 5 * 10^-16 m^2. Multiplied by the tRNA velocity, the tRNA sweeps a volume of 5 * 10^-15 m^3 per second. Constrained inside an ordinary animal cell of volume 10^-15 m^3, the tRNA would have swept the entire volume of the cell five times over in a single second. Obviously the collision path would have significant self-overlap, but at this rate it's quite likely for the two to collide at least once in any given second.

Now, consider that this analysis was only for a single ribosome/tRNA pair. A single ribosome will experience this collision rate multiplied by the total number of tRNA in the cell, on the order of thousands to millions. If a ribosome is bombarded by tens of thousands of tRNA in a single second, it's very likely one of those tRNA will (1) be charged with an amino acid, (2) be the correct tRNA for the current 3-nucleotide sequence, and (3) collide specifically with the binding site on the ribosome in the correct orientation. In actuality, a ribosome synthesizes a protein at a rate of ~10 amino acid residues per second.

Any given molecule in the cell will experience millions to billions of collisions per second. The fact that molecules move so fast relative to their size is what allows these reactions to happen on reasonable timescales.
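(For anyone who wants to poke at the numbers, here's the same back-of-the-napkin estimate as a few lines of Python; the figures are the assumptions stated above, not measurements.)

    import math

    v_trna = 10.0            # assumed tRNA speed at body temperature, m/s
    r_ribosome = 125e-10     # assumed ribosome radius, m (tRNA treated as a point)
    v_cell = 1e-15           # assumed volume of an ordinary animal cell, m^3

    cross_section = math.pi * r_ribosome**2      # "collidable" disc swept by the tRNA path
    swept_per_second = cross_section * v_trna    # volume swept per second, ignoring self-overlap

    print(f"cross section: {cross_section:.1e} m^2")       # ~5e-16 m^2
    print(f"swept volume:  {swept_per_second:.1e} m^3/s")  # ~5e-15 m^3/s
    print(f"cell volumes swept per second: {swept_per_second / v_cell:.1f}")  # ~5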


I'd love to see a form of physical analysis like this extended to a statistical analysis of the likelihood of abiogenesis.

I know 4 billion years is a long time and the earth has a lot of matter rattling on it at any given time, but if every atom in the universe were a computer cranking out a trillion characters per second, you'd only have a 1 in a quarter quadrillion chance of making it to 'a new nation' in the first sentence of the Gettysburg address. Seeing the complexity in even the most trivial biological system just makes me scratch my head and wonder how it's possible at all.

I'm not invoking God here. I just see a huge gulf in complexity that is difficult for me to traverse mentally.
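(For what it's worth, the flavor of that estimate is easy to reproduce; here's a sketch with assumed round numbers for the atom count, typing rate, and a 27-letter alphabet. The exact odds swing by many orders of magnitude depending on those assumptions.)

    import math

    atoms = 1e80              # rough count of atoms in the observable universe (assumption)
    rate = 1e12               # characters per second per "atom computer" (assumption)
    seconds = 4e9 * 3.156e7   # ~4 billion years in seconds
    alphabet = 27             # 26 letters plus space, ignoring case and punctuation
    target_len = len("four score and seven years ago our fathers brought "
                     "forth on this continent a new nation")

    attempts = atoms * rate * seconds                      # total characters typed, treated as attempts
    log10_p_single = -target_len * math.log10(alphabet)    # chance a single attempt matches
    log10_expected_hits = math.log10(attempts) + log10_p_single

    print(f"target length: {target_len} characters")
    print(f"expected hits ~ 10^{log10_expected_hits:.0f}")  # hopelessly small for any reasonable inputs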


Fantastic answer. I don't know what I expected, but I find ~10 amino acid residues a second to be somewhat low.


The issue with these animations is that they're getting rid of all the thermal noise. In reality, single proteins are flying around the whole length of the cell many times a second, just from their thermal motion. And when processes like DNA transcription happen, they're not like a regular conveyor belt -- a fraction of the time the machine will even accidentally run steps in reverse! However, if any of this were shown, the animations would become impossible to understand.


Yes to getting rid of thermal noise. No-ish to single proteins flying around the cell that fast. The cytosol is incredibly jam-packed and things are constantly getting hung up on other things, so we'd expect the mean free path to actually be quite small for the larger biomolecules.


Just once I would like to see the realistic animation, though, even if it's impossible to understand.


https://www.youtube.com/watch?time_continue=42&v=uHeTQLNFTgU

This comes close -- It shows the jittery thermal motion of this tiny machinery, instead of nice smooth glides.


This segment is not the worst, but the full version of Inner Life of the Cell is terrible, because they cheated by reversing highly symmetrical processes, for example:

https://www.youtube.com/watch?v=B_zD3NxSsD8&t=3m17s

The artistic director has a TED talk where he talks about how beautiful biological processes are, and it's like no, man, you made it look that way.

If you want a really fantastic video that captures just how messy and random it is, I recommend the WEHI videos, like the one on apoptosis, where the proteins look way more derpy than in Inner Life of the Cell: https://www.youtube.com/watch?v=DR80Huxp4y8 There's a couple of places where they have a hexameric protein where things magically snap into place, but I give them a pass because the kinetics on that are atrociously slow. Let's just say that for the sake of a short video the cameraman happened to be at the right place at the right time.


Oh my, that is facepalm dreadful. Thank you! That gives me a new high-water mark for misleading biomolecular visualization computer graphics content. Snagged a copy.

When most everything is unmoving, it's "obvious"... well no, not to students, but... there's no pretense of doing anything other than stitching together an extremely selective set of "snapshots", to tell a completely bogus narrative of smooth motion.

Here it seems something like a Maya "jiggle all the things" option has been turned on. Making it sort of kind of look like you're being shown more realistic motion. But you're so not. It's the same bogus smooth narrative, now with a bit of utterly bogus jiggle. Those kinesin legs still aren't flailing around randomly. Nor only probabilistically making forward progress. And the thing it's towing still isn't randomly exploring the entire bloody space it can reach given the tether, between each and every "step". It still looks like a donkey towing a barge, rather than frog clinging to rope holding a balloon in a hurricane.

And given that the big vacuole or whatever should be flailing at the timescale defined by the kinesin feet, consider all those many much smaller proteins scattered about, just hanging out, in place, with a tiny bit of jiggle. Wow - you can't even rationalize that as being selective in "snapshots" - those proteins should just be blurs and gone.

And that's just the bogosity of motions, there's also... Oh well.

So compared with older renders, these new jiggles made it even harder to recognize that all the motion shown is bogus. And not satisfied with the old bogus motion, we've added even more. Which I suggest is dreadful from the standpoint of creating and reinforcing widespread student misconceptions. Sigh.


you might like this render better:

https://www.youtube.com/watch?v=DR80Huxp4y8

here's the artistic director for the inner life of the cell (the worse one) going on and on about how "beautiful" the science of biology is:

https://www.ted.com/talks/david_bolinsky_visualizing_the_won...


> artistic

Yeah. One might for example reduce reinforcement of the big-empty-cell misconception by briefly showing more realistically dense packing, eg [1], before fading out most of it to what can be easily rendered and seen. But that would be less "pretty". Prioritizing "pretty" over learning outcomes... is perhaps suboptimal for education content.

> better

But still painful. Consider those quiet molecules in proteins, compared with surrounding motion. A metal nanoparticle might be that rigid, but not a protein.

One widespread issue with educational graphics, is mixing aspects done with great care for correctness, with aspects that are artistic license and utter bogosity. Where the student or viewer has no idea which aspects are which. "Just take away the learning objectives, and forget the rest" doesn't happen. More like "you are now unsalvageably soaked in a stew of misconceptions, toxic to transferable understanding and intuition - too bad, so sad".

So in what ways can samplings of a protein's configuration space be shown? And how can the surround and dynamics be shown, to avoid misrepresenting that sampling by implication?

It can be fun to picture what better might look like. After an expertise-and-resource intensive iterative process of "ok, what misconceptions will this cause? What can we show to inoculate against them? Repeat...". Perhaps implausibly intensive. I don't know of any group with that focus.

[1] https://www.flickr.com/photos/argonne/8592248739


David Goodsell's pictures are fantastic. I used to work down the hall from him!


Agreed; cool, seems a neat guy. And much of his work is CC-BY, thus great for open education content. Hmm, the Wikimedia Commons capture of his work seems to be missing quite a bit. Oh nifty, there's now an interactive version of his 2014 "Molecular Machinery: A Tour of the PDB".[1]

[1] https://cdn.rcsb.org/pdb101/molecular-machinery/ [] http://pdb101.rcsb.org/sci-art/goodsell-gallery [] http://pdb101.rcsb.org/motm/motm-by-date


At least there is some water there. But what strange force is that, holding the proteins together when they are completely out of alignment, and keeping the water away from everything else?


Well, OP did say "even if it's impossible to understand" so if it is in fact in any way misleading, then my lawyers assure me that I may claim the full privileges of a contextual get-out-of-jail-free card for linking to it, and am hereby fully absolved of any intellectual harm caused to any and all individuals who may have viewed it.


Ha. I've wondered if increasing embarrassment might reduce long-term stable misconceptions in education content. Like astronomy texts getting the color of the Sun wrong. Or wing lift discussed elsewhere. But making textbooks liable for intellectual harm... wow. What might the internet, media, politics, thought and conversation look like, if we were all liable for negligent intellectual harm?


It would be pretty boring: proteins bouncing around randomly and occasionally hooking up, substrates flying around like rifle bullets and sometimes hitting the target, and everything smooshing around in random directions. If you've seen Brownian motion, you've seen what is happening to all the molecules, just at roughly 1/100 the length scale. Nothing stays put. Everything is moving fast and far on the scale of proteins and small molecules.


Fast, yes. Far, well, the mean free path in a cell is very short.



Any time something "magically lines up", it means that those molecules randomly float around until the right ones bump into each other.

Once they are in close enough proximity to bump into each other, intermolecular forces can come into play to get the "docking process" done.

For something like transcription, once they are "docked", think of it like a molecular machine - the process by which the polymerase moves down the strands is non-random.

There are also several ways to move things around in a more coordinated fashion. Often you have gradients of ion concentration, and molecules that want to move a certain direction within that gradient. You also have microtubules and molecular machinery that moves along them to ferry things to where they need to be. You can also just ensure a high concentration of some molecule in a specific place by building it there.


Float is the wrong word to use, I think. Float implies gravity and water. At the scale of a cell, gravity is not as important as intermolecular forces like van der Waals forces, and fluids do not behave like we think.


A friend of mine showed me this writeup when I asked a similar question, and it helps to clear up a lot of the "magic" movement:

http://www.righto.com/2011/07/cells-are-very-fast-and-crowde...

But in a nutshell, the animations are heavily idealized: showing the process only when it succeeds, slowing it way, way down, and totally ignoring 90% of the other nearby material so you can see what's going on. Then you remember that you have a bajillion cells within you, all containing this incredibly complex machinery and... it's really kind of humbling just how little we actually know about any of it. Not to discredit the biologists and scientists for whom this is their life's work; we've made incredible amounts of progress over the last century. It's just... we're peeking at molecular machinery that is so very small, and moves so quickly, that it's nigh impossible to observe in realtime.


A few different things help everything work:

1) Compartmentalizing of biological functions. It's why a cell is a fundamental unit of life, and why organelles enable more complex life. Things are physically in closer proximity and in higher concentrations where needed.

2) Multienzyme complexes. Multiple reactions in a pathway have their catalysts physically colocated to allow efficient passing of intermediate compounds from one step to the next.

https://www.tuscany-diet.net/2019/08/16/multienzyme-complexe...

3) Random chance. Stuff jiggles around and bumps into other stuff. Up until a point, higher temperature means more bumping around, meaning these reactions happen faster, and the more opportunities there are for these components to fly together in the right orientation, the more life stuff can happen more quickly. There's a reason the bread dough that apparently everyone is making now will rise faster after yeast is added if the dough is left at room temp versus allowed to do a cold rise in the fridge: there are just fewer opportunities for things to fly together the right way at a lower temperature (a rough temperature-dependence estimate is sketched after this list).

3a) For the ultra complex proteins binding to the DNA, how those often work in reality is that they bind sort of randomly and scan along the DNA for a bit until they find what they're looking for, or fall off. Other proteins sometimes interact with proteins that are already bound to the DNA, which act as recruiters telling the protein where to land.
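As a rough illustration of the temperature point in 3), here's a hedged Arrhenius-style estimate in Python; the activation energy is a made-up but typical order of magnitude, not a measured value for yeast.

    import math

    R = 8.314          # gas constant, J/(mol*K)
    Ea = 50_000.0      # assumed activation energy, J/mol (typical ballpark for enzyme-mediated steps)

    def rate_ratio(t_warm_c, t_cold_c):
        # Arrhenius ratio k(warm)/k(cold) for the same reaction.
        t_warm = t_warm_c + 273.15
        t_cold = t_cold_c + 273.15
        return math.exp(-Ea / R * (1.0 / t_warm - 1.0 / t_cold))

    # Dough on the counter (~22 C) vs. in the fridge (~4 C):
    print(f"room temp vs fridge: ~{rate_ratio(22, 4):.1f}x faster")  # roughly 3-4x with these assumptions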


The common theme there is constrained proximity. To give random chance more of a chance.

My favorite illustration was a video of simulated icosahedral viral capsid assembly. The triangular panels were tethered together to keep them slamming into each other. Even then, the randomness and struggle was visceral. Lots of hopeless slamming; tragic almost-but-failing-to-catch; being smashed apart again; misassembling. It was clear that without the tethers forcing proximity, there'd be no chance of successful assembly.

Nice video... it's on someone's disk somewhere, but seemingly not on the web. The usual. :/

> yeast

Nice example. For a temperature/jiggle story, I usually pair refrigerating food to slow the bacterial jiggle of life, with heating food to jiggle apart their protein origami string machines of life. With video like https://www.youtube.com/watch?v=k4qVs9cNF24 .

> Compartmentalizing

I've been told the upcoming new edition of "Physical Biology of the Cell" will have better coverage of compartmentalization. So there's at least some hope for near-term increasing emphasis in introductory content.


Coincidentally I'm previewing PBotC just now. It looks really promising. Do you know roughly when the new edition is expected? Or if you have any favorite books on how things work at that scale, I'd be grateful for the pointer. (I've read a popular book by David Goodsell and am halfway through a somewhat deeper one.)


> PBotC [...] when the new edition

No idea, sorry.

> favorite books on how things work at that scale

I've found the bionumbers database[1] very helpful. Google scholar and sci-hub for primary and secondary literature. But books... I'd welcome suggestions. I'm afraid I mostly look at related books to be inspired by things taught badly.

The bionumbers folks did a "Cell Biology by the Numbers" book... the draft is online[2].

Ha, they've done a Covid-19 by the numbers flyer[3].

If you ever encounter something nice -- paper, video, text, or whatever, or even discussion of what that might look like -- I'd love to hear of it. Sorry I can't be of more help.

[1] https://bionumbers.hms.harvard.edu/search.aspx [2] http://book.bionumbers.org/ [3] http://book.bionumbers.org/wp-content/uploads/2020/04/SARS-C...


Thanks! I guess I'll try the bionumbers book first.

I'll keep you in mind, too.


I studied bioinformatics and found the standard textbook, Alberts' "Molecular Biology of the Cell"[0], to be one of the most captivating books I've read. It's like those extremely detailed owners' manuals for early computers, except for cells.

The amount of complexity is just absolutely insane. My favourite example: DNA is read in triplets. So, for example, "CAG" adds one Glutamine to the protein it's building[1].

There are bacteria that have optimised their DNA in such a way that you can start at a one-letter offset, and it encodes a second, completely different, but still functional protein.
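(To make the triplet/offset idea concrete, here's a tiny Python sketch with a hand-picked subset of the standard codon table and a made-up sequence; real overlapping genes are longer and far cleverer, of course.)

    # Small subset of the standard DNA codon table (one-letter amino acid codes).
    CODONS = {
        "ATG": "M",  # Methionine (start)
        "CAG": "Q",  # Glutamine
        "GAA": "E",  # Glutamate
        "GCT": "A",  # Alanine
        "TGG": "W",  # Tryptophan
        "TAA": "*",  # stop
        "TGC": "C",  # Cysteine
        "AGG": "R",  # Arginine
        "AAG": "K",  # Lysine
        "CTT": "L",  # Leucine
        "GGT": "G",  # Glycine
    }

    def translate(dna, frame=0):
        # Read DNA in triplets starting at the given offset.
        protein = []
        for i in range(frame, len(dna) - 2, 3):
            protein.append(CODONS.get(dna[i:i+3], "?"))  # "?" = codon not in our toy table
        return "".join(protein)

    seq = "ATGCAGGAAGCTTGGTAA"      # made-up sequence
    print(translate(seq, frame=0))  # MQEAW* : one protein
    print(translate(seq, frame=1))  # CRKLG  : a completely different read of the same letters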

I found the single cell to be the most interesting subject. But of course it's a wild ride from top to bottom. The distance from brain to leg is too long, for example, to accurately control motion from "central command". That's why you have rhythm generators in your spine that are modulated from up high (and also by feedback).

Every human sensory organ responds logarithmically: your eye works with sunlight (half a billion photons/sec) but can detect a single photon. If you manage to build a light sensor with those specs, you'll get a Nobel Prize and probably half of Apple...

[0]: https://amzn.to/2zzDt8P

[1]: https://en.wikipedia.org/wiki/DNA_codon_table


"The distance from brain to leg is too long, for example, to accurately control motion from "central command"

As a dancer, I have been fascinated by that fact. It means that dancers do not dance to the beat as they hear it - it takes too much time for the sound to be transformed by the ear/brain into an electrical pulse that reaches your leg. Instead, all dancers have a mental model of the music they dance to that is learnt by practice/repetition.

Dancing is just synchronizing that mental model to the actual rhythm that is heard. When I explained that to a bellydancer friend, she finally understood the switch she had made from being a beginning dancer to an experienced dancer who 'dances in their head'.


You can clap your hands to a calibrated delay from the previous beat that you heard (predicting the next beat before you hear it). This is analogous to the principle of a phase-locked loop, which gradually adjusts an internal oscillator until it matches an external frequency. That internal oscillator can emit a beat just before the real one, offset just enough to cancel all the delays in the processing path.

This only works if the beat you're hearing is sufficiently stable.
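(A toy sketch of that idea in Python, with made-up numbers: keep a smoothed estimate of the beat period, then schedule each clap one period after the last heard beat, minus your own motor delay.)

    def predict_claps(beat_times, motor_delay=0.15, alpha=0.2):
        # Schedule claps by tracking the inter-beat interval, PLL-style.
        period = beat_times[1] - beat_times[0]   # initial guess from the first two beats
        claps = []
        for prev, curr in zip(beat_times, beat_times[1:]):
            observed = curr - prev
            period += alpha * (observed - period)        # gently correct the internal "oscillator"
            claps.append(curr + period - motor_delay)    # start moving early so the clap lands on the beat
        return claps

    # A slightly wobbly 120 bpm track (0.5 s nominal period, made-up jitter):
    beats = [0.00, 0.51, 1.00, 1.52, 2.01, 2.50]
    for t in predict_claps(beats):
        print(f"start clapping at {t:.2f} s")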


Yeah, you often send commands several beats in advance. And then there's some lag too, because muscles are fairly viscous and take a bit of time to start up. You're basically dancing in the future, because you are behind. I think we just run pre-baked programs (from a lot of practice) and adjust their timings on the fly every few beats or a bar.


I guess the same must apply to a soccer player, except instead the mental model is about the trajectory of the ball.


Alberts' MBoC is pretty much known as the reference textbook where I studied.

Note that the 4th edition is (sort of) freely available at the NIH website. The way to navigate the book is bizarre, though, as the only way to access its content is by searching.

https://www.ncbi.nlm.nih.gov/books/NBK21054/


Cells are tiny, and the speed of sound is roughly how fast air molecules move. Proteins are not bouncing around that fast, but it's still very quick relative to their size. Next, often there are multiple copies of each component. That's half the story; larger cells also have various means to clump things together to improve the odds. https://en.wikipedia.org/wiki/Endoplasmic_reticulum

PS: The speed of sound is 343 m/s; the diameter of a cell nucleus is ~0.000006 m, to give an idea.
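(Plugging those two numbers in, just as an illustration: something moving at roughly the speed of sound crosses a nucleus-sized distance in tens of nanoseconds, so on human timescales it has effectively visited everywhere.)

    nucleus_diameter = 6e-6   # m, from above
    speed = 343.0             # m/s, speed of sound used as a rough stand-in for molecular speed
    crossing_time = nucleus_diameter / speed
    print(f"{crossing_time * 1e9:.0f} ns per crossing")      # ~17 ns
    print(f"{1 / crossing_time:,.0f} crossings per second")  # tens of millions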


Speed of sound in water is faster.


Yep, and the speed of sound is lower than the average speed of individual molecules. But I was aiming for an intuitive understanding rather than accuracy involving Brownian motion etc.


From a physics perspective I bet you have two things happening:

1. These molecules are moving around a lot. The kinetic energy of molecules at room or body temperature gives them impressive velocity relative to their scale, and they're also rotating, both as a whole and internally.

2. Compatible molecules are like magnetic keys and locks. They attract each other, and the forces align them at their meeting points, the same way that proteins fold spontaneously.

So the remaining part is getting concentrations appropriate for what you want to happen - and that's a matter of signaling molecules and "automatic" cell responses to changes in equilibrium. It's a really chaotic system and it's a wonder it works at all.

I imagine that's also one reason life is imprecise, i.e. no two individuals are alike even with identical genes. There's a lot of extra "entropy" introduced by that mess of a soup.


There are some animations that show how fast molecules and proteins move around in a cell; it's basically a bunch of extremely fast collisions and interactions happening at random that end up falling into proper configurations. The way the science is taught in molecular biology (as in, visually, with proteins binding to receptors as if it were fate) is usually completely wrong.


I recently started taking insulin. Check out the molecular structure for that. It blows me away how complex it is.


By molecular structure alone, insulin is actually one of the simplest proteins (though it's complex in ways you don't see by looking at a static picture of it: lifecycle, oligomeric interactions).


Compared to something that isn't a protein, it's pretty complex.

It's like how the source code to `ls` is simple because it's one of the most basic Unix programs, or something like that.


I really like this video as it shows diffusing proteins at a realistic concentration: https://www.youtube.com/watch?v=VdmbpAo9JR4


I find most explanations of the Equivalence Principle that lies at the foundation of General Relativity to be very lax.

To wit, the idea is that you cannot distinguish whether you are in an accelerated frame or in a gravitational field; alternatively stated, if you're floating around in an elevator you don't know whether you're freefalling to your doom or in deep sidereal space far from any gravitational source (though of course, since you're in an elevator car and apparently freefalling... I think we'd all agree on what's most likely, but I digress).

Anyway, what irks me is that this is most definitely not true at the "thought experiment" level of theoretical thinking: if you had two baseballs with you in that freefalling lift, you could suspend them in front of you. If you were in deep space, they'd stay equidistant; if you were freefalling down a shaft, you'd see them move closer because of tidal effects, dictated by the fact that they're each falling towards the earth's centre of gravity, and therefore at (very slightly) different angles.

Of course, they'd be moving slightly toward each other in both cases (because they attract gravitationally), but the tidal effect is additional and present in only one scenario, allowing one to (theoretically) distinguish, apparently violating the bedrock Equivalence Principle.

I never see this point raised anywhere and I find it quite distressing, because I’m sure there’s a very simple explanation and that General Relativity is sound under such trivial constructions, but I haven’t been able to find a decent explanation.
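(To put a rough number on the effect, a quick estimate under the usual idealizations, uniform spherical Earth and small separation: the two balls converge with a relative acceleration of about g*d/R, which is tiny but nonzero, exactly the kind of local measurement that in principle distinguishes the two scenarios.)

    g = 9.81        # m/s^2, surface gravity
    R = 6.371e6     # m, Earth's radius
    d = 1.0         # m, horizontal separation of the two baseballs (assumed)
    t = 3.0         # s, duration of the free fall we get to observe (assumed)

    a_tidal = g * d / R                  # relative closing acceleration from the converging field lines
    converge = 0.5 * a_tidal * t**2      # how much closer they drift in t seconds

    print(f"tidal acceleration: {a_tidal:.2e} m/s^2")                  # ~1.5e-6 m/s^2
    print(f"convergence after {t} s: {converge * 1e6:.1f} micrometres")  # a few microns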


You're right that this is glossed over in popular explanations, but the point you make is exactly the starting point for all formal courses and textbooks.

The first part of the argument is that for single point particles falling, the effect of gravity is the same for all particles. This suggests that we should model gravity as something intrinsic to spacetime itself, rather than as a field living on top of spacetime, which could couple to different particles with different strengths.

The second part of the argument, which is what you point out, is that gravity can have nontrivial tidal effects. (This had better be true, because if all gravitational effects were just equivalent to a trivial uniform acceleration, then it would be so boring that we wouldn't need a theory of gravity at all!) This suggests that whatever property of spacetime we use to model gravity, it should reduce in the Newtonian limit to something that looks like a tidal effect, i.e. a gradient of the Newtonian gravitational field. That leads directly to the idea of describing gravity as the curvature of spacetime.

So both parts of the argument give important information (both historically and pedagogically). Both parts are typically presented in good courses, but only the first half makes it to the popular explanations, probably out of simplification.


> it should reduce in the Newtonian limit to something that looks like a tidal effect, i.e. a gradient of the Newtonian gravitational field.

Can you please explain to me how you went from "looks like a tidal effect in the Newtonian limit" to "a gradient of the Newtonian gravitational field"?


"Tidal effects" are defined in terms of having different gravitational fields in one place than another (i.e. the tidal bulge near to the moon occurs because the moon's field is stronger there).


That's not quite true, as illustrated by the tidal bulge opposite the moon.

Tidal forces occur much more due to the difference in the direction of gravity than due to the difference in magnitude.


The elevator car is a thought experiment that draws attention to the equivalence in sensation of acceleration on one hand, and being in a uniform gravitational field on the other hand. As you correctly point out, this particular thought experiment breaks down when you consider that all of the gravitational fields that we are accustomed to are non-uniform, and have apparent tidal forces.

The real principle of relativity is a bit more subtle (sometimes called the strong principle): that the effects of gravity can be explained entirely at the level of local geometry, without any need for non-local interaction from the distant body that is generating the gravitational field. To describe the geometry of non-uniform fields, we need more sophisticated mathematical machinery than what is implied by the elevator car thought experiment, but nonetheless, the elevator example is a useful launching point for that type of inquiry.


Yeah, the problem is that the equivalence principle is a _local_ property that cannot really be expressed precisely in standard English.

Clearly it will fail given a big enough lift to experiment in, since a big enough lift would essentially include whatever object is creating that gravitational pull (or enough of it to conclude its existence from other phenomena). However, these effects are nonlocal: you need two different points of reference for them to work (like your two baseballs). In fact, tidal forces are almost by definition nonlocal.

The precise definition involves describing curved spacetime and geodesics, but that one is really hard to visualize as a thought experiment. The thought experiment does offer insight, though, as it is possible to imagine that, absent significant local variations in gravity, you cannot distinguish between free fall and a (classical) inertial frame of reference without gravity. This insight provides the missing link that allows you to combine gravity with the laws of special relativity and therefore electromagnetism, including the way light bends around heavy objects, which provided one of the first confirmations of the theory.


> you’d see them move closer because of tidal effects dictated by the fact that they’re each falling towards the earth’s centre of gravity, and therefore at (very slightly) different angles.

This point isn't raised anywhere because it's mostly a pedantic point that has nothing to do with the thought experiment. You shouldn't try and decompose thought experiments literally, otherwise you'll get caught up in unimportant details like this. Just assume the elevator is close enough to the earth such that the field lines are effectively parallel, or better yet, just pretend the elevator is in an infinite plate field.


But then again, realizing this problem with the thought experiment is a mark of a sophisticated student. This was the last question on my physics exam in 1991, and I still regret that I went with the simple explanation. I wonder whether the prof was looking for the students who really got it.


The assumption is the acceleration and the gravitation are in the same direction and the same magnitude. The point is that given these two, it's impossible to distinguish the two.

If you think it's sneaky to "implicitly" assume they're in the same direction, I would point out that this is no different from assuming they have the same magnitude. It would be kinda dumb to say "well this 1m/s^2 acceleration can't possibly be equivalent to gravity because gravity is 9.8m/s^2, so the statement is obviously wrong and they're trying to trick me!!"... same thing for direction.


I'm gonna assume that for purposes of the thought experiment you're supposed to envision a point-shaped elevator, not one where you can place two baseballs next to each other.


This was covered on PBS Space Time in an early episode on GR and talked about later as well.



To me, the layperson, the idea that you cannot distinguish whether you are in an accelerated frame or in a gravitational field seems wrong due to a very simple fact.

The force that would be exerted by acceleration versus gravity is different. The force we think of as gravity comes from a center point that changes with your position, while acceleration comes from a uniform direction without regard to your position.


You're thinking of a specific gravity scenario versus a specific acceleration scenario. But the equivalence is true, it was one of the things shown by Einstein.


I wanted to thank everybody who took the time to explain this. Thank you.


I think the elevator scenario is imagining that the earth is a point source, and you are neglecting the (much smaller) gravitational forces for the sake of illustrating a more general phenomenon.


I have two quite bright nieces. When I was explaining the equivalence principle to them, right away they saw that in the gravitational field of the earth there would be tidal effects and in free space with just acceleration, none.

I had to apologize and say that the explanation was oversimplified and really it would work, say, only for some creatures living exactly on the floor of the elevator.

One of the two, at a challenging high school, made Valedictorian (surprise to her parents who didn't know she had long been first in her class) then in college PBK, got her law degree at Harvard, started at Cravath-Swain, went for an MD, and now is practicing medicine. Bright niece.


Sort of meta, but I always shudder when someone says that science has "proven" something.

What sets science apart from most other methods of seeking answers is its focus on disproof. Your goal as a scientist is to devise experiments that can disprove a claim about the natural world.

This misconception rears its head most prominently in discussions at the intersection between science and public policy. Climate change. How to handle a pandemic. Evolution. Abortion. But I've even talked to scientists themselves who from time to time get confused about what science can and can't do.

The problem with believing that science proves things is that it blinds its adherents to new evidence paving the way to better explanations. It also leads to the absurd conclusion that a scientific question can ever really be "settled."


Proof never proves; it only implies. People are just bad at weighing how much proof there is and how heavily it implies something. Nuance is inconvenient in policy discussion and public discourse.

Science also doesn't seek only disproof. It uses both example and counterexample to confirm or deny, or to strengthen how much one confirms or denies.


Not to be rude, but given current daily attacks on science and the scientific method, I can't let this stand - I think your meta intuition represents a fundamental misunderstanding of how science works.

It is simply wrong to think that scientific questions can never be definitively settled. Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution. There's ample correlative evidence in support of natural selection, but little of the causal data necessary for "proof" (until perhaps recently). In the case of evolution the experiments required to prove that natural selection could lead to systematic genetic change were technically challenging for a variety of reasons.

In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".

Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".

My point is that there are plenty of examples in science where things have been proven -- DNA carries genetic information, DNA (usually) has a double stranded helical structure, V=IR, F=ma, etc. And there are things that are highly likely, but not "proven", e.g., that human activity causes climate change.

While some of the issues you bring up remain unproven, what's really absurd is to think that no scientific questions can be settled.


I think this goes against the basis of the scientific method. There is a reason why they say everything is a hypothesis and nothing is ever proven. Anyone can propose an alternative model explaining something you call proven; you calling something proven does not inherently make that explanation correct.

This is not mutually exclusive with being against the attacks on science. Just because we shouldn't treat things as proven doesn't mean we can't come to a general consensus on a topic and act as if it were true. Climate change is real. Evolution is real. Don't inject yourself with bleach. Having a small number of quacks say 'it's just a hypothesis and actually god is responsible for climate change and evolution' without any evidence doesn't change the general consensus and doesn't mean we have to stop everything until we prove the negative.

Ultimately I think most of us agree in principle. Most of what we're discussing here is minor semantic differences in vocabulary.


Everything in science remains open to be disproved; it wouldn't be science otherwise. That's one way science is different from pure math. Or from religion for that matter.

That said, it is indeed annoying when people who don't understand science interpret "open for disproof" to mean "it's easy to disprove." Quantum mechanics and the second law of thermodynamics could in principle be disproven, but the evidentiary burden would be extremely high. (Insert obligatory Carl Sagan quote here.)


> Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done.

No. What is the basis for these claims?

They're both wrong.

It's not true that CO2 increase is necessary for global warming. If the sun got a lot hotter, global temperatures would rise. If non-CO2 GHGs increased, global temperatures would rise. If the overall albedo of the planet changes, global temperatures can rise. There are literally thousands of things that could cause the temperature to rise.

It's also not true that a CO2 increase, holding everything else constant, would lead to long term or even medium term warming. We have no idea what the ecosystem will do for any given change in CO2 levels, since there are countless species that are net producers and net consumers of atmospheric CO2, all of which have exponential growth and feedback loops.

Even still, even though both of those claims are wrong, a CO2 increase may still cause global warming.

Furthermore, the things you claim are proven are not proven; they are true by definition. All molecules carry information, and the fact that DNA carries genetic information is a direct consequence of the fact that it is DNA. V=IR by definition. F=ma by definition. There's no such thing as a "force" or "mass" or "acceleration" entity per se; these are metrics that are by definition equal in a given physical framework.

There is no way to 'technically' prove anything in science, and the reasons are simple:

(1) The past is gone - you can't access it

(2) You can't see the future

(3) Your knowledge of the present is extremely limited and inaccurate

These are the limitations of the real world, and science does its best to provide utility within that. It only focuses on making future predictions using the observed past as evidence, because you only can do that. You can't check your model in the present, because you can't instantaneously observe anywhere you aren't already observing. Checking your model on the past relies on what you think happened, i.e. what allegedly happened, but there is absolutely no way to truly know.

You can't even really prove anything 'novel' in mathematics, which is the only place where you can actually prove anything, but even there all proofs are effectively just framing something that was already implied axiomatically in a way that allows our limited human minds to see the relevant/useful patterns that aren't immediately obvious to us.

My point is, acting as though you can truly prove anything in science,

> what's really absurd is to think that no scientific questions can be settled

is not only wrong, but in my opinion is a distraction from what science is actually for. It's not about settling questions. Science is never settled, and that's part of what's beautiful about it. It's about reducing our own ignorance and proving our past selves wrong, discovering patterns and models that equip us with the knowledge to build a better world for ourselves and the rest of humanity.

Why lie about being a great soccer player when you're already great at basketball? Let's focus on the beauty of science as a great journey of growth and exploration that accelerates the progress of humanity, instead of trying to make it do something that isn't possible in the real world.


> No. What is the basis for these claims?

"Science", as it is represented in the media, and in turn repeated and enforced (not unlike religion, interestingly) on social media and in social circles.

As opposed, of course, to actual science.

"Perception is reality." - Lee Atwater, Republican political strategist.

https://www.cbs46.com/news/perception-is-reality/article_835...

https://en.wikipedia.org/wiki/Lee_Atwater

"Sauron, enemy of the free peoples of Middle-Earth, was defeated. The Ring passed to Isildur, who had this one chance to destroy evil forever, but the hearts of men are easily corrupted. And the ring of power has a will of its own. It betrayed Isildur, to his death."

"And some things that should not have been forgotten were lost. History became legend. Legend became myth. And for two and a half thousand years, the ring passed out of all knowledge."

https://www.edgestudio.com/node/86110

Threads like this one, and many others like it, demonstrate well the precarious situation we are in at this level. Imagine the state of affairs around the average dinner table. Although, it's not too infrequent to hear the common man admit (preceded by the realization) that they don't know something. As one moves up the modern-day general intelligence curve, this capability seems to diminish. The exact cause of this is a bit of a mystery (24-hour cable propaganda and the complex dynamics of social media is my best guess); hopefully someone has noticed it and is doing some research, although I've yet to hear it mentioned anywhere. Rather, it seems we are all content to attribute any misunderstanding that exists in modern society to Fox News, Russia, QAnon, or the alt-right. I'm a bit concerned that this approach may not be the wisest, but I imagine we will find out who's right soon enough.


> I think your meta intuition represents a fundamental misunderstanding of how science works.

It sounds to me like the grandparent is 100% correct.

> It is simply wrong to think that scientific questions can never be definitively settled.

They made no such claim, speaking of intuition.

> Clearly there are some hypotheses that have been difficult (and may be impossible) to prove, for example, Darwin's idea that natural selection is the basis of evolution

I've seen very little evidence in online discussions (Reddit for example) among armchair scientists that the theory of evolution is anything short of cold, hard, scientific fact.

> In the case of climate change, the problem again is that the evidence is correlative and not causal. Demonstrating a causal link between human behavior or CO2 levels and climate change (the gold standard for "proof") is technically challenging, so we are forced to rely on correlations, which is the next best thing. But, you are right, it is not "proof".

Is this (it is not proven) the message they're sending when they say things like "The science is in", just as one example?

> Establishing causality can be difficult but not impossible - the standard is "necessary and sufficient". You must show necessity: CO2 increase (for example) is necessary for global warming; if CO2 remains constant, no matter what else happens to the system global temperatures remain constant. And you must also demonstrate sufficiency: temperatures will increase if you increase CO2 while holding everything else constant. Those are experiments that can't be done. As a result, we are forced to rely on correlation - the historical correlation between CO2 and temperature change is compelling evidence that CO2 increases cause global warming, but it is not proof. It then becomes a statistical argument, giving room for some to argue the question remains "unsettled".

This is not the message I've heard, at all, from any mainstream news source, and it's certainly not the understanding of 95% of "right minded" people I've ever encountered.

> While some of the issues you bring are remain unproven, what's really absurd is to think that no scientific questions can be settled.

What's even more absurd, to me, is how you managed to find a way to interpret his text in that manner. And you're obviously (based on what you've written here), a genuinely intelligent person. Now, imagine how the average person consumes and processes the endless stream of almost pure propaganda, from both "sides" on this topic and many others.

The unnecessarily dishonest manner in which the government and media have chosen to represent (frame) reality to the general public has left an absolutely massive number of easily exploitable attack vectors for "conspiracy theorists" to exploit. And if you are of the opinion that all conspiracy theorists are idiots so you have nothing to worry about, consider the possibility that this too has been similarly misrepresented to you.

If a society chooses to largely abandon things like logic and epistemology in the education of its citizens, thinking propaganda is a suitable replacement, don't be surprised when things don't work out in your favor. If we can barely manage such things here, why should we expect Joe and Jane six-pack to somehow pull it off?


See no evil, hear no evil, speak no evil.

Amen.


It kind of floors me that we're taught science the way it is. Much simpler: Karl Popper's conjecture and refutation. So I tell people that science mandates "I believe something, so I should try to prove it wrong." I think understanding that is significantly more beneficial than repeating the arbitrary n- steps of a scientific method. It's two steps. Keep it simple.


One thing that rubs me the wrong way about this "no proof ever, only disproof" attitude is that it advantages the new hypotheses too much.

Any hypothesis that I invent at this very moment, is from this perspective in the best position a hypothesis can ever be. There is no disproof. There is even no coherent argument against it, because I literally just made it up this second, so no one had enough time to think about it and notice even the obvious flaws. This is the best moment for a hypothesis... and it can only get worse.

I understand that there is always a chance that the new hypothesis could be correct. Whether for good reasons, or even completely accidentally. (Thousand monkeys with typewriters could come up with the correct Theory of Everything.) Yes, it is possible. But...

Imagine that there are two competing hypotheses, let's call them H1 and H2.

Hypothesis H1 was, a hundred years ago, just one of many competing options. But when experiment after experiment was done, the competing hypotheses were disproved, and only this one remained. For the following few decades, new experiments were designed specifically with the goal of finding a flaw in H1, but the experimental results were always as H1 had predicted.

Hypothesis H2 is something I just made up at this very moment. There was not enough time for anyone to even consider it.

A strawman zealot of simplified Popperism could argue that a true scientist should see H1 and H2 as perfectly equal. Neither was disproved yet; and that is all that a true scientist is allowed to say. Maybe later, if one of them is disproved in a proper scientific experiment, the scientist is allowed to praise the remaining one as the only one that wasn't disproved yet. To express any other opinion would be a mockery of science.

Of course, there always is a microscopic chance that H1 might get disproved tomorrow, and that H2 might resist the attempts at falsification. But until that happens, treating both hypotheses as equal is definitely NOT how actual science works. And it is good that it does not.

In actual science, there is something positive you are allowed to say about H1. Something that would make the strawman zealot of simplified Popperism (e.g. an average teenager debating philosophy of science online) scream about "no proof ever, only disproof". H1 is definitely not an absolute certainty. But there is something admirable about having faced many attempts at falsification, and surviving them.


I've not read Popper directly, so I'd be interested in his actual argument on this.

But, I wonder if you can describe H1 as being a stronger hypothesis than H2 by virtue of withstanding more and higher quality attempts to disprove it?


"Withstanding more arguments" can be gamed by throwing thousand of silly arguments at your favorite hypothesis. And "higher quality" is the part people would disagree about.

I think that when people are essentially honest and trying to find out truth, they can agree on reasonable rules. But there is no way to make the rules simultaneously philosophically satisfactory and bulletproof against people who are willing to lie and twist the rules in their favor.

For example, in real life you usually cannot convince crackpots about being wrong, but that is okay because at some moment everyone just ignores them. If you try to translate this into a philosophical principle, you end up with something like "argument by majority" or "argument by authority". And then you can have Soviet Union where scientific progress is often suppressed using these principles. But what is the alternative? No one can ever be ignored unless you disprove their hypotheses according to some high standard? Then the scientific institutions would run out of money as they would examine, using the high standard, the 1000th hypothesis of the 1000th crackpot.


This is an age-old domain of thought known as philosophy of science [0]. Although, by prepending your post as "meta", perhaps you are already aware of it.

I should add: As a human being, it is probably impossible to separate the scientist from the philosophy in which they explore, proceed with, and promote their work. In some cases, it might not be something they are even aware of. Instead, the scientific system (as a sort of world institution) should itself be designed to always seek out and protect truth, regardless of prevailing contemporary knowledge.

[0] https://en.wikipedia.org/wiki/Philosophy_of_science


> Your goal as a scientist is to devise experiments that can disprove a claim about the natural world.

If this claim were true, it would disallow science from making true claims, because no experiments can disprove such claims. Truth is a delicate matter and can't be handled by simple methods. Questions may not be settled, but they can be difficult to challenge.


> it would disallow science to make true claims

Isn't that exactly how science works? It does not make true claims. It produces statements with disclaimers: if this and this, then Y, as long as we don't observe otherwise.

You cannot use the scientific method to definitely say: "X is true".


Yes, that's exactly how it works. Every scientist I've ever asked has stood by that. Science dispels untruth.


I don't know if this would be my "one question" if I could ask the most brilliant minds in science, but something that always bothered me:

When I took physics they basically said "at first scientists were disturbed by the fact that magnets imply that two objects are interacting without any physical contact, but then Faraday came along and said 'the magnets are actually connected by invisible magnetic field lines' and that resolved everything."

How does saying "but what if there's invisible lines connecting them" resolve anything? To be clear, I'm not objecting to any of the actual electromagnetic laws or to using field lines to visualize magnetic fields. It's just that I don't get how invoking invisible lines actually explains anything about how objects are able to interact without physical contact.

(Also, it is not lost on me that this question boils down to "fraking magnets, how do they work?")


I'm a physicist specifically working with magnetic systems, but I have very little pre-graduate teaching experience, so take this attempt to answer the question with a grain of salt.

The reason some people regard Faraday's original explanation of the eponymous law (it is worth noting that at the time it was widely regarded as inadequate and handwavy) as illuminating is because Faraday visualized his "lines of force" as literal chains of polarized particles in a dielectric medium, thereby providing a seemingly mechanistic local explanation of the observed phenomena. Not much of this mindset survived Maxwell's theoretical program and it has very little to do with how we regard magnetism today. Instead, the unification of electricity and magnetism naturally arises from special relativity, whereas the microscopic basis of magnetism requires quantum mechanics. There isn't really any place for naive contact mechanics in the modern picture of physics, so in that sense I would regard Faraday's view as misleading.

Finally, I can't end any "explanation" of magnetism without linking the famous Feynman interview snippet [1] where he's specifically asked about magnetism. It doesn't answer your question directly, but it's worth watching all the more because of it.

[1] https://www.youtube.com/watch?v=MO0r930Sn_8


That interview is so good! What a dick, but what a teacher!


What I see at the beginning of that video is somebody who doesn’t want to spend the energy answering a complex question. Then, in the process of dismissing the question he gets drawn in and can’t help himself from really getting into it.

I don’t know anything about Feynman beyond vaguely associating his name with science, but watching this makes me want to seek out more from him.


You're in for quite a treat then. It sounds like you might have more of an interest in his technical work and scientific contributions and teaching materials (of which there is plenty, and of high quality), but personally I quite enjoyed this book of his as well: https://en.wikipedia.org/wiki/Surely_You%27re_Joking,_Mr._Fe...!


That Feynman snippet was so awesome. Flawless. Thanks for sharing it.


It changes the problem from one of action at a distance to local interaction. Instead of "field lines", I would say "field". Field lines are just one way of visualizing a field.

So, if we don't have the notion of fields, then we are left with the puzzle of how object A knows about remote object B. How does one object know about the motions of literally every other object in the Universe? Perplexing.

Once you come up with the idea of a field, okay you have to at some level accept that there are fields that permeate all of space. But what this intellectual cost buys you is that now an object only has to sense the field local to it to respond to all objects in the universe.

Think of objects bobbing on the ocean. One way to conceptualize that is that any object anywhere could cause this object here to bob in some way. How does this object know about all the other objects? Instead we could say that there is ocean everywhere. Locally, bobbing objects put ripples into the ocean. Locally, ripples cause objects to bob. Each object no longer needs to "know about" every other object; it just needs to react to the ripples at its location, and the ripples get sent out from its location.

Does this help?
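If it helps to see the locality in code, here's a toy sketch (plain Python, not real electromagnetism, just a discretized wave on a line of cells): every update reads only a cell's immediate neighbours, yet a disturbance made at one end still shows up at the far end after enough steps.

    # Toy 1D "field": 20 cells, each updated using only its immediate neighbours.
    field = [0.0] * 20
    velocity = [0.0] * 20
    field[0] = 1.0                  # "object A" disturbs the field at its own location

    for step in range(100):
        for i in range(1, 19):      # every update is strictly local
            velocity[i] += 0.1 * (field[i - 1] - 2 * field[i] + field[i + 1])
        for i in range(1, 19):
            field[i] += velocity[i]

    print(field[18])                # non-zero: the ripple has reached "object B"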


This, gravity, and quantum mechanics are, I think, things that people just accept as is. I don't think anyone really knows how or why it works; it just works. It could be that our brains are not wired to understand how two things can be pulling each other without anything physically connecting them. My knee-jerk explanation is that we live in a simulation, and that the simulation is not in anything like a physical world, and we are just not wired to grasp it, just like an ant won't be able to learn calculus.


Gravity and magnetism are two phenomena that I always imagined could actually be explained in higher dimensions (except we can't see those higher dimensions so it'd be just speculation).

Imagine two circles in 2D that repel each other the closer you get them together, like magnets do. In 2D it would look like they're interacting at a distance, but maybe in 3D they're two slightly flexible cylinders that are actually touching at the ends, just not in the 2D plane you're observing. The interaction is "properly physical" in 3D but in the 2D plane it seems magical.

That's a way that I imagine it in 2D vs 3D, so this might be similar in 3D vs ND, where N > 3. Of course this is all baseless speculation, but it seems kinda plausible in my head.

Edit: bad drawing of what I meant: https://imgur.com/362tcHg


There are models using higher dimensions (and lower dimensions, and fractional dimensions) to model physical phenomena, but the problem with a lot of them is that you end up with the "fitting n points with a degree-(n-1) polynomial" problem. It's trivial to create a model that agrees with all observed data; it's a lot harder to make a model that accurately predicts unseen data.
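To make that concrete, here's a toy sketch (Python with numpy, made-up numbers): a polynomial with as many parameters as data points matches the observations exactly, but a held-out point exposes it.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(6.0)                        # 6 observed points
    y = 2.0 * x + rng.normal(0, 0.5, 6)       # roughly linear data
    x_new, y_new = 7.0, 14.0                  # unseen point, roughly on the same trend

    overfit = np.polyfit(x, y, deg=5)         # degree 5 through 6 points: fits them exactly
    simple = np.polyfit(x, y, deg=1)          # degree 1: only approximates them

    print(np.polyval(overfit, x) - y)         # residuals ~ 0 on the observed data
    print(np.polyval(overfit, x_new), y_new)  # but typically far off on the unseen point
    print(np.polyval(simple, x_new), y_new)   # the simpler model predicts it much better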


Fascinating thought! Makes sense though. Given n inputs and outputs, you could literally make an if-statement that solves nothing but gives you the 'right answer'. Actually understanding the fundamentals (often unseen) of a problem to then create an elegant, holistic solution that almost prophetically seems to predict the future is the current peak of human thought achievement, IMO.


Gravity is explained that way, it is a curvature in spacetime (4D): https://www.youtube.com/watch?v=CvN13ZE544M


I'm no expert in the matter, but I guess the problem is somewhat solved because the magnet then interacts with the field lines, and these interact with the other magnet. It depends on how real you believe these invisible magnetic lines are. The magnetic field turned out to be a very real entity: when it is perturbed, the perturbations travel at the speed of light, and the second magnet feels them with a slight delay. The perturbations also manifest themselves as EM radiation/light.


It's local (it propagates continuously through space, no faster than the speed of light), and if you want you can view it as mediated by particles (photons), although just viewing it as the EM field is fine too (and certainly no less local). The same goes for gravity, which remained spooky action at a distance for even longer, until GR came along.


The answer is related to Relativity. Read this: https://ocw.mit.edu/courses/materials-science-and-engineerin...


I don't think the invisible 'lines of force' really resolved anything in the minds of the 19th century scientists, but what eventually did was acceptance of Faraday's speculation that the lines of force were physically real and existed as some change of state in some medium that existed throughout space.

Maxwell picked up this idea and ran with it, developing a mathematical theory for the dynamics of the electromagnetic field. Instead of one object somehow magically interacting at a distance, interactions between objects resulted from changes in the electromagnetic field that propagated through space.

The final paragraphs of Maxwell's "Treatise on Electricity and Magnetism" are somewhat relevant.

This is 30-40 years after Faraday first wrote about lines of force, and there still wasn't really consensus about how to explain electromagnetic phenomena.

[emphasis added by me]

> Chapter XXII: Theories of Action at a Distance

> ...

> There appears to be in the minds of these eminent men, some prejudice, or a priori objection, against the hypothesis of a medium in which the phenomena of radiation of light and heat, and the electric actions at a distance take place. It is true that at one time those who speculated as to the causes of physical phenomena, were in the habit of accounting for each kind of action at a distance by means of a special aethereal fluid, whose function and property it was to produce these actions. They filled all space three and four times over with aethers of different kinds, the properties of which were invented merely to 'save appearances,' so that more rational enquirers were willing rather to accept not only Newton's definite law of attraction at a distance, but even the dogma of Cotes, that action at a distance is one of the primary properties of matter, and that no explanation can be more intelligible than this fact. Hence the undulatory theory of light has met with much opposition, directed not against its failure to explain the phenomena, but against its assumption of the existence of a medium in which light is propagated.

> We have seen that the mathematical expressions for electrodynamic action led, in the mind of Gauss, to the conviction that a theory of the propagation of electric action would be found to be the very key-stone of electrodynamics. Now we are unable to conceive of propagation in time, except either as the flight of a material substance through space, or as the propagation of a condition of motion or stress in a medium already existing in space.

> Hence all these theories lead to the conception of a medium in which the propagation takes place, and if we admit this medium as a hypothesis, I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its actions, and this has been my constant aim in this treatise.


Crypto and practical security. I get tired of the circular “don’t roll your own crypto unless you’re qualified”. How does one become qualified? I don’t feel like I know how to evaluate many of the arguments people make for or against technologies people argue about on HN, such as Signal or different password managers. I feel like “security through obscurity” is a bad thing, and “layers of security” are a good thing, but isn’t all security obscuring something, and how does one evaluate whether a layer is adequate? “Just use bcrypt” - okay, help me understand!


The reason people say not to roll your own crypto is that there is no secret answer to making things secure; we just have smart and creative people bash their heads against a crypto protocol/implementation for a long time and hope we've found all the problems.

So unless you have a good reason to do something else, and the budget to pay experienced people to bash their heads against it, you should stick to an implementation that has had this effort expended on it.

If you want an intro about common problems in custom cryptosystems, go look at cryptopals or something, but don't get too cocky that you know everything.


It's also easy to dramatically underestimate the order of magnitude of effort involved in "the budget to pay experienced people to bash their heads against it".


Also, what makes me irritated about this blurt is that there are many "layers" of what people could reasonably call "crypto". There are the cryptographic primitives. There are higher-level crypto algorithms and functions that use those primitives. There are even higher-level cryptographic protocols, file formats etc. Then there's actually the application, applying crypto to a real-world problem.

Even in each of those, there are two "levels" of implementation: specifying an exact algorithm that implements a solution to problem x, and actually producing the code that implements the algorithm.

At some level, there is no ready-made solution to every problem. Even if the foundations are implemented by "somebody else", the line's blurry. At which level of (lack of) expertise and which level of "lowness" of the implementation should I start to worry?


> I get tired of the circular “don’t roll your own crypto unless you’re qualified”. How does one become qualified?

Oh, by all means, roll your own crypto, break it, and roll it again. Just do not use it.

Also, break other people's crypto and study theory.

By the way, the advice is not "unless you are qualified". Nobody is qualified to just roll their own. Good crypto is a community project and can not happen without reviewers.


From what I understand, the original context of "security through obscurity = bad" is that it's really hard to keep secrets and hard to design secure systems, so peer review is really helpful. Thus, if the security of your system relies on the design itself being secret, you are probably in a bad place: it's hard to keep something so big secret, it's hard to redesign the system if it leaks, and you probably had fewer people look at it in order to keep it secret. This is in contrast to just keeping a password or key secret. You can easily change a password if it gets leaked, and you can keep a small password secret much more easily than the design of the whole system, etc.

More generally, security is like any other field. You have to evaluate arguments based on the logic and evidence given. The main difference is that with crypto, it is much easier to shoot yourself in the foot and have catastrophic failure, since you have to be perfect and the attackers just have to be right once to totally own you. Thus the industry has standardized on a few solutions that have been checked really really well.

More generally, if you are interested, I would say read the actual papers. The papers on bcrypt, argon2 etc. explain what problems they are trying to solve, usually by contrasting with previous solutions that have failed in some fashion. That doesn't mean reading the paper will explain everything, make you an expert, or qualify you to roll your own crypto. Nor should you believe that something is a good idea just because a paper author says it is. It will, however, explain why slow hash functions like bcrypt/argon2/scrypt were created and why they are better choices than previous solutions in the domain like md5.
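As a concrete illustration of the "slow by design" point, here's a minimal sketch using the Python bcrypt package (assuming it's installed; this is not a complete password-storage recipe):

    import bcrypt

    # The cost factor of 12 means 2^12 rounds of the expensive key setup,
    # so every hash (and every attacker guess) is deliberately slow.
    password = b"correct horse battery staple"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # The salt and cost factor are embedded in the hash itself,
    # so verification needs no extra bookkeeping.
    print(bcrypt.checkpw(password, hashed))        # True
    print(bcrypt.checkpw(b"wrong guess", hashed))  # False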


> I get tired of the circular “don’t roll your own crypto unless you’re qualified”.

It's true, but you need to realize that you're qualified enough only when you understand that you shouldn't roll out your own crypto.

In my opinion, the only person who has credibly demonstrated being able to roll his own crypto is djb (http://cr.yp.to/)

> but isn’t all security obscuring something,

Keeping a secret isn't "obscuring" something, it's hiding it entirely. Security through obscurity is bad because it relies on attackers being dumb. The smartest person in the world cannot be expected to guess a well chosen and kept secret.


You should study cryptanalysis. This is why rolling your own crypto is dangerous: not just because the result is going to be insecure, but because it isn't particularly educational even though it feels like it is. It is easy to convince yourself you know more than you do if you spend a lot of time playing with bad crypto systems.

Edit: I should add that even if you are an expert in cryptanalysis, you still shouldn’t just roll your own crypto. It’s the analysis of the entire community, not the credentials of the author, that makes modern cryptography so strong.


The proper way of interpreting the sentence about "don't roll your own crypto" is that it actually means "don't roll out your own crypto until it has been peer reviewed by many experts". At which point it kind of stops being "your own", in a way.


I don't see it mentioned, but I thought I'd chime in. Even if your crypto algorithm is perfect and works infinitely fast, there's still the problem of implementation. And that's usually not perfect and often leads to practical gaps that can be exploited. During WWII, the German Enigma machines were broken in part due to design flaws (for example, a letter could never be encoded to itself) and user error (like sending messages that started or ended the same way). Even if crypto is some day perfect in a sense, it may still be used in imperfect ways that allow one to break it or circumvent it entirely.


I recommend Serious Cryptography by Jean-Philippe Aumasson. After reading it, you will gain enough understanding to compose cryptographic primitives and build your own secure system based on well-known best practices, as long as you don't deviate too much from the golden paths. Although with that, you still won't know how to design or implement these primitives yourself. It's like having a nice toolkit of screwdrivers, hammers, spanners etc to build your thing, but you can't build those tools themselves.


> I get tired of the circular “don’t roll your own crypto unless you’re qualified”

It's not circular, it's a simple flowchart.

Are you writing an app or are you trying to invent more advanced crypto?

"writing an app" -> dont roll your own crypto

"invent more advanced crypto" -> go learn and research crypto history, math, etc..


If you have a good CS background, I highly recommend the lecture notes for the security class I took in undergrad: https://inst.eecs.berkeley.edu/~cs161/sp10/

That's from 10 years ago, so you might be able to find video of a more recent version; try to find a year when Wagner taught, he's great.


Springer has made "Understanding Cryptography" available for free: https://link.springer.com/book/10.1007/978-3-642-04101-3


> How does one become qualified?

By attacking crypto--a lot. And submitting your crypto to be attacked by others--a lot. It's the only way to develop the requisite level of humility to design good crypto.


Has someone that thought they were taking LSD ever turned into a permanent schizophrenic zombie or ended up in a mental institution, or is it all urban legend? If someone didn't know they were predisposed to mental illness, is it reasonable to dismiss their experience in order to maintain that LSD is safe?

If any of this is true, are there any sources aside from "my friend's friend's brother took too much and now he is...."? And what is the scientific explanation? Do we know enough about the mind at all?

I feel like LSD has a lot of contradictory information out there, and the proponents feel the need to hand-wave concerns away because it is 'completely harmless and leaves your system in 10 hours'. But when nobody knows what they're actually getting, because it doesn't exist in a legal framework, it muddies the whole experience.

People say higher doses can't have more effect than lower doses beyond a certain threshold. Yet it seems like the same people say "omg man 1000ug you are going to fry your brain!"

What is the truth? If it "just" had an FDA warning like "people with a family history of schizophrenia should not take it", that would be wildly better than what we have today.

Please, no explanations about shrooms. Just LSD, and the 'research chems' distributed as LSD.


"Has someone that thought they were taking LSD ever turned into a permanent schizophrenic zombie or in a mental institution, or is it all urban legend."

Tangential, and not an answer to your question, but if you're like me, you will be fascinated to learn that there is a drug (MPPP, a synthetic opioid) that if cooked incorrectly yields "MPTP"[1], which will give you Parkinson's. As in, forever. You take this drug (at any age) and then you have Parkinson's for the rest of your life.

[1] https://en.wikipedia.org/wiki/MPTP


so it stands to reason that other substances could exist that rewrite your mind to make you ineffective at other behaviors


I mean, not really.

1. There is one substance that rewrites your mind.

2. There is more than one substance that rewrites your mind.

Are very different postulates.


1 at least opens up 2 as a possibility


When I looked into illegal drugs, I found it very difficult to find reliable data.

On one hand you have anti-drug people, usually backed by the authorities. Listen to them and all drugs will make your body rot, give you hallucinations like datura, and for some reason cause complete addiction after a single dose.

Drug users, on the other hand, will tell you that it is not as bad as alcohol/tobacco/coffee/..., that concerns are unfounded, that the police are the only risk, etc...

The truth is almost impossible to find. Even peer reviewed research is lacking. I guess there are several reasons for that. Availability of controlled substances. Ethical concerns regarding experimentation. Issues with neutrality.

Now from what I gathered about LSD (and psychedelics in general): these are very random. If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more. But it can also fuck you up for years, or maybe bring significant improvement in your life. High doses increase the chance of extreme effects and nasty bad trips, but it shouldn't kill you unless you are dealing with industrial quantities. The substance itself is not addictive, but the social context may be. The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.

As for fake LSD, there are cheap reagent tests for that. They are not 100% reliable but that's better than nothing. You can also send your sample anonymously to a lab that will do a much more accurate GC/MS analysis for you.


> Drug users on the other hand will tell you that it not as bad as alcohol/tobacco/coffee/... that concerns are unfounded, that police is the only risk, etc...

Sure, some ("plenty", in absolute numbers) will tell you this, but I don't recall being in many forums where that attitude doesn't get significant pushback (as opposed to the anti-drug community). The modern "pro drug" community has a fairly significant culture of safety within it, unlike back in the sixties.

> The truth is almost impossible to find.

There is plentiful anecdotal evidence online. Any clinical evidence, if they ever get around to producing it in any significant volume, will be utterly minuscule compared to the massive volume of trip reports and Q&A available online (and I highly doubt it will be more trustworthy, considering what you're working with and the size of the trials that will be done), much of which comes from people who know very well what they're talking about, not unlike enthusiasts in any domain.

> Now from what I gathered about LSD (and psychedelics in general): these are very random.

Depends on one's definition of random.

> If you take a reasonable dose, you are most likely going to have a nice, fun trip and nothing more.

Effects vary by dose of course, but I've seen little anecdotal evidence that suggests high doses have a different outcome, and plenty that suggests the opposite.

> But it can also fuck you up for years, or maybe bring significant improvement in your life.

See: https://rationalwiki.org/wiki/Balance_fallacy

> The big problem is that there is no way to tell how it will go for you. There are ways to improve your chances, but it will always be random.

I believe this to be true, but don't forget the fallacy noted above.

That said, these things are not toys - extreme caution is warranted.


Permanent schizophrenic zombie, maybe a bit extreme, but severe, traumatic, long-lasting psychological damage is a not-uncommon phenomenon.

I had a fling with psychedelics in my teens, and everything was great until the one time it wasn't. I was taking psychedelics pretty much every weekend, and by my count have tried over a dozen of them.

Had an experience with LSD which completely shook me to my core and gave me such severe PTSD and trauma that every night I started to have massive panic attacks and needed medical help. My entire worldview and perception of reality was shattered, I wasn't able to "anchor" myself anymore and it all felt like a sham. I was completely dissociated. I also got HPPD: to this day, everything has a sharpened oil-painting type texture to it that increases based on my anxiety level, and I'm sensitive to visual + aural stimuli (loud, brightly-colored places are unpleasant). If I get too anxious, I start to dissociate.

It took ~2 years for the PTSD to subside for the most part, but still if I am under a lot of stress I am liable to have a panic attack and get flashbacks and need to go find somewhere quiet to sit somewhere alone to try to work through it.

LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.

But I want to add that, while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understood how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.

It's been a long time since I've touched any of that stuff and I'm not sure I ever will again, but I don't think it's inherently bad or good. Psychedelics are like knives, they're neutral - can be used as a tool or cut the hell out of you if you're reckless.

---

Footnote: For context, this was probably due to life circumstances/psyche at the time. I was in a relationship with a pretty toxic partner, and my mental state wasn't the greatest. In hindsight, it seems like I was almost begging for a "slap in the face" if you will.


If you don't mind me asking (and this is clearly a sensitive topic so feel free not to reply): What do you mean by PTSD and flashbacks? As in, was the trip so bad that remembering it creates anxiety, or are you reliving unrelated traumatic memories that weren't an issue to live with before using the drug?


It literally feels as if I'm being transported back to that same night, starting to relive it all over again. It's entirely illogical but if you knew what happened it might make more sense (happy to elaborate here and give a brief description of what happened/why it messed me up so bad, I'm perfectly okay to talk about it now).


I'd be interested to hear your story. I've never used psychedelic drugs, but I find their effects fascinating.


I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.

Well, that night went bad. Really, really, life-alteringly bad. For the first time, I had a bad trip. And not like, some mildly uncomfortable thoughts. I got a bad feeling in my stomach from the moment I dosed, and I knew something was going to be different this time.

As I started to come up, the bad feeling and a dark presence grew, and I pulled out my phone. I started a timer, and I watched as the time slowed to a point where it completely stopped. I started looping, I would get up off the couch, walk a few feet, and be teleported back. Over and over.

I realized that I had gotten so high, that time was no longer moving. And if time was not moving, I could maybe never come down. I was stuck here forever. And then the hellish nightmare started.

I felt like I was losing control of myself, like something else was trying to take over, and whoever won the battle, that is the consciousness that would exist. The more I fought, the more painful things got. Pain the likes of which no one can physically imagine.

Went upstairs and laid down in my bed, began going out of body. I started dying over and over in unimaginable ways in my head, trapped in loops. Pain beyond anything I've ever felt in reality, there was no limit. It was tied to my breath, I realized that it had been so long since I had breathed, I kept forgetting who I was and what was going on, and then I would catch a slight glimpse and remember and fight so hard to take another breath. And there was so much pain in fighting to "survive" and hold on to who I was.

Eventually, the pain/struggle became too much, and I "gave in" and said "okay, I give up, you win, I can't take it anymore, I'd rather die." And that's when it stopped. There appeared this giant shape of light/energy that was every color at once, and colors we don't have words for, and it "touched me" (could have been me moving towards it, or it towards me, there wasn't really a concept of this).

When it "touched" me, what it "showed" me was something I later learned is called an "Ouroboros", the snake eating it's tail. It showed me what "infinity" really meant, and that was too much to handle and shattered my psyche.

In that moment my body/mind/soul felt like it was obliterated to pieces by some energy beam in the most excruciating, searing pain, and I woke up in my bed having just pissed myself.

It took a long time to piece myself back together after that one.

---

There are a lot of details I've omitted for brevity's sake, but this captures the gist of it.

The majority of my trauma has to do with anything related to loops: think Nietzsche's Eternal Return, general time-loops, fear of time-stopping, etc.

When I have panic attacks I have to stop myself from starting a stopwatch on my phone to make sure time is still moving because it'll cause a feedback loop and ratchet-up the panic, causing the time-dilation to increase in a vicious cycle.


Holy. That sounds intense, to say the least. A part of me would like to experience this, even though you made it abundantly clear that it did not have a positive impact on your life.


"ego death" is a common aspect of acid trips and the experience seems to come down to your willingness to relinquish control. this reads like what was described. if you were to look up that term you'll see others that will feature similar features - with or without pain, with or without worry.

not having a reliable way to know exactly what you took can amplify the anxiety, when your brain starts filling up with serotonin and whites everything out just like people on their deathbeds report, are you supposed to let go? when your sense of self has been obliterated and the next moment you are in the body of another mammal lost and confused in the forest for an entire lifetime before being transported back into your body and only a minute has gone by - but your trip is to last another 9 hours, should you fight it? Distinct neural networks in your mind that never communicate are now connected, vestigial components of the mind are now being expressed, are you being replaced in a firmware dump and flash?

a lot of people have a friend with them to guide them through an acid trip because trips can be steered with sounds and words, simple chimes, melodies.

would it have helped? very hard to say. but as the author wrote, the bad day and uncomfortable setting did not help. It is similar to a dream state (just radically more intense), where the things on your mind and also happening around you can affect the direction of your dreams.


Yeah, I think it entirely had to do with my inability to relinquish control and "just let go". Although in this context, that was literally what felt like the fight to survive, instead of "being chill". Ego death commonly is either the most horrendous or most nirvanic thing depending on how readily someone gives in.

> when your sense of self has been obliterated and the next moment you are in the body of another mammal lost and confused in the forest for an entire lifetime before being transported back into your body and only a minute has gone by - but your trip is to last another 9 hours, should you fight it?

There was a lot of this, during that out-of-body-period. I existed in multiple places/points in time at once as different people of various ages/genders/nationalities and then occasionally as animals, and lived entire simultaneous lifetimes. At one "time", in places + times A, B, C, D as different living things. Really does a number on your sense of self for a bit, heh.


and that had never happened to you in your other trips?


No, was really strange, I was pretty experienced by then too. Was never the same after.


Did you take 300ug before?

Sorry for the questions, we can talk about it somewhere else; just add an email or protonmail address to your hackernews account and I'll mail you there.


Did you try LSD after that time in your teens?

I think it is a bit too reductive to say they're neutral just yet, but I am willing to say they could be used responsibly if the right information actually existed. Like with any science, I am open to changing that view if the conclusions were found to be different. Again, let's just stick with acid instead of all psychedelics.


I tried once or twice after that, both times it turned immediately into flashbacks and led to horrid experiences so I called it quits for good.

I had done it probably ~20-25 times by that point, along with a bunch of other stuff.

    LSD
    Mushrooms
    2C-B
    2C-C
    2C-I
    DMT
    4-AcO-DMT
    5-MeO-DMT
    5-MeO-MiPT
    DOM
There might be some others I've forgotten, it's been a long time.

> Again let's just stick with acid instead of all psychedelics.

What you won't find in academia or textbooks is that, at a high enough dose, all psychedelics feel the same. You reach a point where it's indistinguishable and the unique properties vanish. It's hard to describe if you don't have experience with a bunch of them, but there's this "peak psychedelic state" where they all sort of converge, which is what I only assume is the result of your serotonin receptors getting completely bombed/saturated.

Personally, I was much more of a fan of phenethylamine psychedelics (particularly the 2C series), they're more clear-headed and "light"/enjoyable. The time dilation from psychedelics makes the 12-16 hours from LSD feel like days, and by the end of it, generally the last 4-6 hours you just want to be finished with it already.

It's really difficult to make a blanket statement like "can be used responsibly" about psychedelics, because it's a dice roll. No matter how cautious you are, there's always the possibility that this time, things go sideways. Though most people (when I was in that scene as a teen) couldn't really empathize after my bad trip because they'd never had one, so it's a rare occurrence. Maybe I was psychologically predisposed, who knows.

But I do think that people stand to gain a lot from having a psychedelic experience in their life, and from having an experience taking MDMA and talking with someone they love.


> Though most people (when I was in that scene as a teen) couldn't really empathize after my bad trip because they'd never had one, so it's a rare occurrence.

Yeah this is another thing I've seen.

Online there are lots of stories of "bad trips", like this one.

In person it's "what happened? I've never had a bad trip [so what's wrong with you]". It is very unscientific, and for the people who do empathize, everything gets reduced to "bad trip". No discussion about PTSD. And then you can't talk to anybody else about it because these are illicit substances.


Just some relevant info I looked up after reading your post....

> Permanent schizophrenic zombie, maybe a bit extreme, but severe and traumatic long-lasting psychological damage is a not-uncommon phenomena.

https://english.stackexchange.com/questions/6124/does-not-un...

https://towardsdatascience.com/an-introduction-to-multivaria...

HOW PSYCHEDELICS REVEALS HOW LITTLE WE KNOW ABOUT ANYTHING - Jordan Peterson | London Real --> https://www.youtube.com/watch?v=UaY0H9DBokA

Jordan Peterson - The Mystery of DMT and Psilocybin --> https://www.youtube.com/watch?v=Gol5sPM073k

> LSD being the particular substance has nothing to do with it, in my opinion. I was young, dumb, reckless, and played with fire then got burned. It could have happened with any of the other dozen psychedelics I took, but it just so happened to be LSD the one time that it did.

https://en.wikipedia.org/wiki/Hallucinogen_persisting_percep...

I have a close friend who had the same experience with excessive use of marijuana, but my money would be on psychedelics being far more likely to produce the outcome you unfortunately experienced. He's much better today, but not entirely "ok".

> But I want to add, that while giving me the most nightmarish, traumatizing experience of my life, the best/most positively-profound experience has also been on the same substance. I grew up in a pretty abusive household and didn't do well forming relationships growing up, and had a lot of anger and resentment in my worldview. After taking psychedelics (LSD, 2C-B, Shrooms) and MDMA with the right group of people a few times, my entire perspective shifted. For the first time in my life, it felt like I understand how it felt to be loved, and what "love" was, and how we're "all in this together" so we may as well be good to each other while we're here.

This sounds rather similar to my friend's story.

Can Taking Ecstasy (MDMA) Once Damage Your Memory?

https://www.sciencedaily.com/releases/2008/10/081009072714.h...

According to Professor Laws from the University’s School of Psychology, taking the drug just once can damage memory. In a talk entitled "Can taking ecstasy once damage your memory?", he will reveal that ecstasy users show significantly impaired memory when compared to non-ecstasy users and that the amount of ecstasy consumed is largely irrelevant. Indeed, taking the drug even just once may cause significant short and long-term memory loss. Professor Laws findings are based on the largest analysis of memory data derived from 26 studies of 600 ecstasy users.

> (from your comment below) I took 300ug of LSD recklessly on a particularly bad day for me, in a particularly uncomfortable setting.

https://www.trippingly.net/lsd/2018/5/3/phases-of-an-lsd-tri...

Lots of details, plus a dosage guide (25 ug and up) & typical experiences

https://www.reddit.com/r/LSD/comments/34acza/do_you_guys_bel...

imo 300ug is the point where you need to have some serious experience with tripping to be able to handle yourself. because if you're coming up, the acid is already circulating your bloodstream, and you get that horrible sinking sensation of thinking you've taken too much... you're in for a really bad time if you don't know how to control the trip.

I think it's difficult to say how big a dose really is until you've had a bad trip on it. only then can you see how insidious everything can get and as such just how intense 300ug can be. the reason people say not to start on doses like that is so they will AVOID those horrible experiences. so yeah, 300ug is a large dose, just because if shit goes wrong on it then you're fucked.


> Has someone that thought they were taking LSD ever turned into a permanent schizophrenic zombie or in a mental institution, or is it all urban legend. If someone that didn't know they were predisposed to mental illness, is it applicable to dismiss their experience in order to maintain how safe LSD is?

My good friend took "something" once (hard to tell what the dealer is selling you) and ended up in a mental institution, and is now in fact officially mentally disabled and on drugs for life. The drugs keep him stable enough that he's able to work, although he's still just a shadow of his former self.


What was he expecting to have taken?


Head over to the phantasytour forums, place is full of acid casualties.


> But when nobody knows what they're actually getting because it doesn't exist in a legal framework, then it muddies the whole experience.

Trust (knowing the chemist directly, indirectly, ...) in specific individuals > a largely unknown (but known to be imperfect) system, for many people anyways. Obviously this isn't practical for the not well connected, but it's all we got for now.

But as for your question, I've seen little to suggest it's anything more than war on drugs propaganda and hearsay.

https://en.m.wikipedia.org/wiki/Reefer_Madness

https://en.m.wikipedia.org/wiki/Chinese_whispers


yes this is my current predilection, but I can also concede that there are limitations in getting the truth of the matter.

even in this very thread there is someone that has been in the mental hospitals and seen problems "with their own two eyes", but is unwilling to name names as part of a code to remove any social/legal/professional consequence for themselves or the "crazy" people there


Dipping one's toes in the water is always an option, but I heard a rumour that psilocybin is the safer route, due to less likelihood of illegitimate product.

If you live in a big enough city, there should be meetup groups where you could meet some people and have long discussions.

Is your interest only curiosity, or is it medical related? Sorry, on mobile and pressed for time so can't scroll through the thread..


This entertaining article lists what happened to the early LSD researchers: https://slatestarcodex.com/2016/04/28/why-were-early-psyched...


entertaining, the comments are a good addendum but are just as speculative. maybe there just isn't good information, it's just more of the contradictory information.

"lasting permanent changes, obviously"

"I’ve personally seen several people experience total amnesia after tripping on high doses." No further information.

"Not lasting permanent changes"

what.

Names, sources, medical records, news reports, court cases, there has got to be something out there!


The answer to this is really easy. Go to any mental institution and get to know the patients. An institution where they are allowed out, but are still in an enclosed area. A majority of the patients will have had some sort of heavy drug use in their past. Sure, what's causation and what's correlation, but it's pretty clear that drugs cause mental breaks.

How do I know this?

I have mental illness in my family and have spent considerable amounts of time at those facilities.


> Sure, whats causation and whats correlation, but its pretty clear

What? How is it clear? As you wrote yourself, correlation is not causation.


I was just protecting against the inevitable counter argument. The proof is what I have seen with my own two eyes


That's unfortunate. Yes, I think it is clear too, but to so many people it isn't, so I have to figure out: where is the truth? What circumstances cause what? Is it worthwhile to really avoid all psychonauts given the path they are on? For the ones with positive experiences on the total other side of the spectrum, the experiences that take them into the spiritual mumbo jumbo, are those worth listening to? Is there a functional difference between the path the microdosers are taking and the ones that use 1000ug occasionally (i.e. after smart people's 2.5-year stint at Google, are they all on the path to a mental hospital?)

So many questions.

Where is the medical journal that supports your conclusion that the "vast majority of mental health patients have a self-reported history of drug use of these specific drugs"? I guess it can't exist, because it's crazy people self-reporting a variety of substances where even the user would have no idea what was actually given to them.


If I buy a stock, does the price at which I agreed to buy it become the new share price on the stock exchange?

Every article on "Where do stock prices come from?" seems to just talk at a high level about supply and demand.

But where does the price come from at a nitty-gritty level? Is it an average of all existing offers or something?

Do different exchanges and stock-ticker websites have different formulas for calculating share price?

If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?


If you see a single share price listed, that is the price of the last sale that occurred.

Now that can either mean that someone bought a share that someone else was selling or that someone was selling a share to someone who was offering to buy.

The shares are listed as a series of buy and sell orders in what's called an order book.

If the price a share was sold at was 100$ and you think it will go a bit lower, you could place a buy order at 90$. Should enough people sell shares to reach your price and order, your order will be filled and you will own the share at 90$.

If someone wants a million shares at 91$, you may not get your single share at 90$.

To go back to your example, if you were to place a buy order at 100$ for a 4$ priced share, how much the price moves depends on how many sell orders are in place from 4$ to 100$ and how much you are buying.

If it's only for one share, your order will probably get filled at something like 4.01$ if the spread is low (the spread being the difference between the highest buy order and the lowest sell order).

If you're buying 1000 shares and it's a low volume stock with a "thin" order book, maybe it could go up a few dollars instead but for it to go up to 100$ you have to buy every single share between 4$ and 100$.
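To put the same idea in code, here's a toy sketch (plain Python, made-up prices and sizes, and a made-up fill_buy helper; real matching engines are much more involved):

    # Resting sell orders (price, shares), cheapest first -- made-up numbers.
    asks = [(4.01, 50), (4.05, 200), (4.50, 100), (100.00, 10)]

    def fill_buy(asks, shares_wanted, limit_price):
        """Walk the book from the cheapest ask upward until filled or priced out."""
        fills = []
        for price, size in asks:
            if shares_wanted == 0 or price > limit_price:
                break
            taken = min(size, shares_wanted)
            fills.append((price, taken))
            shares_wanted -= taken
        return fills

    print(fill_buy(asks, 1, 100.00))    # [(4.01, 1)] -- one share fills near $4
    print(fill_buy(asks, 500, 100.00))  # sweeps several price levels, pushing the price up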


How does it work with futures?

My understanding is that I buy a contract for 100 shares at a future date of my choice.

So let's say stock XYZ is currently trading at $20, and I buy a futures contract for $100 for Jan 1st 2021.

I don't have enough money sitting around to buy those 100 shares, so let's say XYZ is trading at $200 by Jan 1st 2021, That means I have a contract where I can buy 100 of them at $100 and immediately sell them for $200, so in theory it's a $10k profit.... but because I don't have enough money to actually do that, do I just sell my future for something close to 100* $200 (because someone with enough money will buy it and do the actual trade?)

What happens if it's already well over $100 long before Jan 1? Can I just set a price and sell it whenever I want, like I can with a regular share?


What you're describing is more like options. Futures and options both work in a similar way to what you described, but futures have an underlying commodity that you have an obligation to take possession of (such as barrels of oil); options, on the other hand, give you the right to buy the underlying stock, but you have no obligation to do so.

So if I have a futures contract for 1000 barrels of oil for $10 a piece, when that expires, no matter what the price of oil, $10000 will be taken out of my account and someone will contact me to come pick up those barrels (there's some nuance here, but let's ignore that)(funny story about this at the bottom). If I have 10 options contracts for 1000 shares of stock XYZ at $10 a share and XYZ is at $9 a share, I can just let the contracts expire worthless.

From here on out, I'm going to talk about options, because it's closer to what you are asking.

> I don't have enough money sitting around to buy those 100 shares, so let's say XYZ is trading at $200 by Jan 1st 2021, That means I have a contract where I can buy 100 of them at $100 and immediately sell them for $200, so in theory it's a $10k profit.... but because I don't have enough money to actually do that, do I just sell my future for something close to 100* $200 (because someone with enough money will buy it and do the actual trade?)

Contracts are exercised after hours. If you don't have the money you will usually collect the shares and be put in a margin call (you owe your broker money) and then the shares will be sold first thing the next day to cover the margin call. Some brokers will try and sell your contract for you before the end of the day the contracts expire if you don't have the money. So you may only get $9950 or something for your contract and it will be sold to someone else a few hours before the market closes.

> What happens if it's already well over $100 long before Jan 1? Can I just set a price and sell it whenever I want like a can with a regular share?

Yes, the contract itself cost money and can be bought or sold.

This isn't a perfect analogy, but think of the contract like a coupon. If I have a coupon to buy a TV for $100 and the cheapest that TV is selling for anywhere is $200, that coupon has an intrinsic value of $100.

The analogy breaks down, because in the real world, you probably couldn't get $100 for that coupon. You'd probably get slightly less than the price difference. With options on the other hand, you usually pay slightly more than the price difference, because of the volatility of the underlying. Basically, someone might pay you $105 for that coupon because they think the TV price will go up and they can sell it to someone next week for $110.
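For the numbers in the question above, a back-of-the-envelope sketch (Python; call_intrinsic_value is just a made-up helper, and this ignores the premium paid for the contract, fees, and any remaining time value):

    def call_intrinsic_value(spot, strike, shares=100):
        """What exercising a call contract is worth right now (never negative)."""
        return max(spot - strike, 0) * shares

    # Strike $100, stock at $200 on expiry: roughly the $10k profit from the question.
    print(call_intrinsic_value(spot=200, strike=100))  # 10000

    # If the stock never gets above the strike, the option simply expires worthless.
    print(call_intrinsic_value(spot=90, strike=100))   # 0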

========

* As a side note, when people were saying oil prices went negative last week, it's because the May futures contracts were about to expire. Basically, it was the second to last day people could trade the contracts before they had to take delivery of the oil and, since so few people are using oil right now, many of the places where people would normally store oil are full. Since you can't just dump oil down the drain, people holding oil contracts were willing to pay other people to take over the contracts so they didn't have to take delivery of the oil.


Thank you very much for this detailed reply.

and yes, I absolutely was meaning to ask about options, not futures (I know so little, I used the wrong word!)

How do options get priced?

i.e. if I look in my online brokerage thing I can buy XYZ for $100 on Jan 1 2021, but what if I want to buy it for $125 on Jan 1 2025?

Who decides what dates and prices are set?

Can I just go insane and get options on AAPL for $3000 (or $2) in 2030?


Anyone selling an option determines the price (same as a stock). You can sell an option yourself for any price if you so desire. If there is an influx of buyers, they will purchase the cheapest first, but as they do, those that remain available are at higher prices. This is why people say that demand increases price.


What happens if the stock sees almost no volume of trades, and we have a situation where the lowest sell is at $90 and highest buy at $100 (i.e. both parties are overgenerous)? Does the stock exchange make them split the difference, i.e. make the trade at $95?


If someone wants to buy at $100 and someone else wants to sell at $90 it depends on who came first. To really answer this question, you need to understand the difference between market maker and taker and you need to understand how the limit order book [1] works.

Assume there are no other orders in the order book.

Scenario 1: Seller submits a limit sell order for $90. Since there are no buyers, this order goes into the book. Then a buyer submits a limit buy order for $100. The order would be filled at $90 (the best ask) and the buyer only pays $90. Here, the seller is the maker and the buyer is the taker.

Scenario 2: Buyer submits a limit buy order for $100. Since there are no sellers, this order goes into the book. Then a seller submits a limit sell order for $90. The order will be filled at $100 (the best bid) and the seller gets $100. Here, the buyer is the maker and the seller is the taker.

Market makers are responsible for setting prices and providing liquidity. If you want to understand this in more detail, check out this post [1] I wrote up a while ago.

[1] https://www.tradientblog.com/2020/03/understanding-the-limit...
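A toy sketch of those two scenarios (plain Python, made-up helper; the point is just that the trade happens at the price of the order that was already resting):

    def trade_price(resting_price, resting_is_sell, incoming_limit):
        """The incoming (taker) order trades at the resting (maker) order's price."""
        if resting_is_sell:
            crosses = incoming_limit >= resting_price  # buyer willing to pay the ask
        else:
            crosses = incoming_limit <= resting_price  # seller willing to hit the bid
        return resting_price if crosses else None      # None: no trade, the new order rests too

    print(trade_price(90, True, 100))   # Scenario 1: resting $90 ask, $100 buy arrives -> trades at 90
    print(trade_price(100, False, 90))  # Scenario 2: resting $100 bid, $90 sell arrives -> trades at 100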


I assume you mean highest buy (bid) is $90 and lowest sell (ask) is $100. In this case there would be no trade. This is also called the spread.

If you actually meant your original wording, theoretically whichever order came second would fill at the best price.


To understand this, you should start by understanding the limit order book, which will give you a concrete model with which you can visualize the supply and demand.

In simple terms, when you want to buy 100 shares of a stock at no more than $4, you place a limit order into the exchange’s order book for that stock. Other buyers do the same, as do sellers. The order book is a sorted structure with the orders and their sizes on each side. It may look like this:

Sell 100 @ $7

Sell 300 @ $6

Sell 300 @ $5

Buy 100 @ $4 (your order)

Buy 200 @ $3

Buy 100 @ $2

Notice the gap between the highest buy (“bid”) and the lowest sell (“offer” or “ask”). This is called the “bid/ask spread.” Whether we’re talking stocks or eBay or a local outdoor market, buyers always want to pay less, sellers always want to earn more, and there is always a bid/ask spread.

If instead of sticking to your $4 limit, you said “forget it, I just want the stock” you would enter a market order instead of a limit order. In doing so you’d “cross the spread” and pay $5 per share. For a trade to happen, someone has to cross the spread.

If you entered a buy order with a limit of $100 in this example, you’d still buy at $5. If you ordered 400 shares at $100, you’d buy 300 at $5 and 100 at $6. The $5 offer would come out of the order book and the $6 offer would be reduced in size.

When you think of the market as all of this upward and downward price pressure focusing around a spread, you can see that the price the market values the stock at is conceptually the midpoint between the highest bid and the lowest offer, also known as the “mid.”

As prices change, the spread’s price level moves up and down, it narrows and widens, but the price you see always at least indirectly reflects that midpoint of price interest between all buyers and all sellers. There will always be intricacies in price reporting (based on the price feed, the price you see is the last trade made, the mid, or something more complex), but if you understand the order book, you’ll have the basic idea and can build from there.

If you’re really interested, you can google how and when the various exchanges calculate and report their prices, who they make them available to directly, what vendors provide raw and aggregate views of those prices, and more. There are many flavors varying from real-time tick-by-tick reporting to end of day feeds and more.

All of them ultimately begin with what you can now visualize as an order book.
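If it helps, here's the example book above in a few lines of Python (nothing exchange-specific, just the arithmetic of the spread, the mid, and crossing the spread):

    # The example book: (price, size) on each side, best prices first.
    bids = [(4, 100), (3, 200), (2, 100)]
    asks = [(5, 300), (6, 300), (7, 100)]

    best_bid, best_ask = bids[0][0], asks[0][0]
    spread = best_ask - best_bid     # 1
    mid = (best_bid + best_ask) / 2  # 4.5 -- one common notion of "the price"

    # A market order to buy 400 shares crosses the spread and walks up the asks:
    remaining, cost = 400, 0
    for price, size in asks:
        taken = min(size, remaining)
        cost += taken * price
        remaining -= taken
        if remaining == 0:
            break
    print(spread, mid, cost)         # 1 4.5 2100 (300 @ $5 plus 100 @ $6)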


The price is generally the last price at which a transaction happened, modulo people intending to cheat the system.

There are people who put limit orders on the exchanges. Say that the price of TSLA is $500. I think it's overpriced, and its likely to go down, but then grow in the future. I can say, "I'm willing to buy 100 shares of TSLA at $420." Someone else holds TSLA and thinks its likely to go up, but not hold it's value, so they say, "I'm willing to sell 100 shares of TSLA at $690." The sum of all of these limit orders forms the market depth chart.

The more common way to interact with the market is to say, "I want to buy a share of TSLA at the current market price." In the above example, the only option is to buy TSLA for $690, even though the last transaction was $500! This is a example with very little market depth. In the normal case, you'd buy your share for $500.02 or something like that. (Same, but reversed, for selling at market price.)

For more information, but with a crypto focus, see https://hackernoon.com/depth-chart-and-its-significance-in-t...

For your example, you would put in a market order, and buy the stock at the lowest price that someone was willing to sell it at. If the last price was $4, but the lowest limit order that currently existed was for $100, and you bought it for $100, then yes, the price would go up to $100. (In real life, those sharp upticks don't happen much. It's more likely that a sharp downtick happens, where suddenly everyone wants to sell oil futures at the same time, but almost no one is willing to buy them, so the price ends up negative.)

Note that whenever people defend high-frequency trading for "providing liquidity to the market," this action of setting buy and sell limit orders that are close to each other is what they are talking about. There are algorithms that will see TSLA at $500, and offer to sell TSLA at $500.02 and buy TSLA at 499.98. If both orders go through, they make $0.04. If you operate fast enough to get out ahead of any big market moves, you can make a lot of money. But if you ever accidentally buy a bunch of TSLA for $499.98 right before the price plummets to $420, then you just lost a lot of money. This is why HFT and other trades with similar risk profiles are sometimes referred to as "picking up nickels in front of a steamroller."


The short answer: What exactly the "price" shown on the exchange website is depends on the exchange. Typically it's the last trade price or the mid price (average of best bid and offer). There really is no such thing as a single "price" because the price will depend on the quantity and direction you are transacting in.

Long answer: You need to understand how the Limit Order Book works. I wrote up something about this here [1]. It also goes into different definitions of price.

> If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?

If your trade actually absorbs the order book and pushes the asks to $100 then yes, that could be the case depending on the exchange, but I'm not sure about NYSE specifically. Most likely that could never happen due to various hidden order types and HFT market makers though.

[1] https://www.tradientblog.com/2020/03/understanding-the-limit...


Thank you! So each exchange has its own formula for determining the price that gets shown on the ticker?

Let's say someone owns shares in a very low-volume stock — one that gets a couple trades a day, at most. Could they artificially increase the share price by offering their shares at a high price, then using a second account under their control to immediately buy them at the inflated price?


Yes, this is called cross-trading [0] and AFAIK is forbidden by the SEC.

[0] https://www.risk.net/definition/cross-trade


I think the other answers have been good at explaining HOW stock prices are determined at the exchange, but if you're wondering WHY someone would say "That stock is worth $xx.xx", then that's a more complicated question, with equally complicated answers.

At the very lowest level, it could be gut feelings from a potential buyer. They see electric cars more frequently, combustion engines going out of fashion, and simply wonder "Hey, why does that [electric car company] trade so low, when they'll probably be market leaders in 5/10/15 years?", or conversely, "Hey, why does that [petroleum] company trade so high, oil prices are shot, and the industry will lose relevance in 10/20/30 years"

On a higher level, some potential buyer will look at the company's financial statements, and figure out if the share price is too high / low for how the company is performing, from a financial standpoint. This is called "fundamental analysis", and you can easily find step-by-step analysis reports of such on various companies.

But the market is one big hodgepodge of beliefs, with probably thousands of different rationales behind their prices, and motives for sales / purchases.


> On a higher level, some potential buyer will look at the company's financial statements, and figure out if the share price is too high / low for how the company is performing, from a financial standpoint. This is called "fundamental analysis", and you can easily find step-by-step analysis reports of such on various companies.

How does this work if a company doesn't pay out dividends? There's no investment to return unless someone buys from you at the same or higher price... right?


There are multiple ways to do this analysis, and not all of them come from evaluating them as investments. Stocks represent an ownership stake in the firm, and you can use any method you want to give a number to what the firm's "value" is. This could be by reading financial statements, comparing it to other existing companies, seeing how much their operating cash flow is and so on.


Disclaimer: I used to work in HFT

Each exchange is basically its own world, with the exception of Reg NMS, which I'll get to in a sec.

Let's talk about order books. Each stock has its own order book. This might be an example of the book for AAPL:

    SELLING 100 shares @ $10.02
    SELLING 200 shares @ $10.01
    SELLING 100 shares @ $10.00
    BUYING 100 shares @ $9.99
    BUYING 200 shares @ $9.98
    BUYING 100 shares @ $9.97

So if you want to buy some AAPL, you will want to go grab the cheapest shares you can see, which here is the fellow selling 100 shares at $10.00. You submit a limit order to buy at 10.00 and are matched with that guy. The book now looks like this:

    SELLING 100 shares @ $10.02
    SELLING 200 shares @ $10.01
    BUYING 100 shares @ $9.99
    BUYING 200 shares @ $9.98
    BUYING 100 shares @ $9.97

Now let's suppose a market maker thinks the price is going to hold, so they go and fill in the hole by submitting an order to BUY 100 shares at $10.00. There are no more shares to buy at $10.00, so their order rests on the book.

Now we have:

    SELLING 100 shares @ $10.02
    SELLING 200 shares @ $10.01
    SELLING 100 shares @ $10.00
    BUYING 100 shares @ $10.00
    BUYING 100 shares @ $9.99
    BUYING 200 shares @ $9.98
    BUYING 100 shares @ $9.97

Now that we've played out this scenario, let's go back to your original question. What is the price of AAPL at any point in here? Well, it depends. At the start, if you wanted to buy, you could say the price is $10.00. But if you wanted to sell, the best you'd get is 9.99. So, hard to say.

It's worth noting that the prices you see in the book are only there because people aren't agreeing on the prices. If they did agree, a trade would happen, and the prices wouldn't be on the book. So, with that in mind, you could say that really, the price of a stock is the last price people agreed at: the last trade price. That's better, we're at least down to just one price to think about.

That could be quite different from what the best bid/offer are right now, though (some stocks don't trade very often) so even if (let's say) you last saw AAPL trade at 9.50 before our example, obviously that price is long gone. So even the last trade price is potentially not "the price of the stock".

So, in short, there's really no such thing as "the price of a stock". It'll all depend on how sophisticated you want to be about the price at which you buy your shares.

When people talk generally about the price of a stock, it's usually just up to whatever site people are looking at, and usually markets are liquid enough and trade enough that all the kinds of prices we just talked about are usually only a penny different, so when people are just at the watercooler saying "Did you see the price of AAPL?" they don't care about the pennies, and by the time they've managed to say that question, the price has moved anyway, probably lots of times. So it all gets a little hand-wave-y.

I want to mention two other things that might interest you. Reg NMS is what ties all the exchanges together, so to speak. Let's say you want to buy AAPL and NYSE has shares selling at $10.00 each, but NASDAQ has them for $9.99 each. It's actually illegal (against Reg NMS) to trade with that guy at $10.00 at NYSE because NASDAQ has the "NBBO" (national best bid/offer) right now. Extra caveat: if you sent a special order to NYSE that says "I promise you, I've also sent an order to NASDAQ to buy the shares for $9.99 and I've determined you're the next best price at $10.00, let me buy them", it'll let you. It's called an ISO (Intermarket Sweep Order) and if you lie about them or mistakenly lie about them, you get fined. A lot.

The other interesting thing: Your last question was "If a very low-volume stock is listed at $4, and then I offer to buy a share for $100, does the NYSE suddenly start listing its price at $100?" There's actually a lot to unpack here. Let's go through it.

If you're a registered broker-dealer and are connected directly to NYSE, and you send a limit order for XYZ @ $100/share, what's going to happen is you're going to get "price improvement" and you'll end up getting the shares at $4. If you send an order for LOTS of shares at $100, you'll clear out a bunch of price levels in one go. Ex:

Let's say this is the book for XYZ:

    [...]
    SELLING 200 shares @ $110.00
    SELLING 1000 shares @ $5.00
    SELLING 200 shares @ $4.02
    SELLING 100 shares @ $4.01
    SELLING 500 shares @ $4.00
    BUYING 100 shares @ $3.99
    [...]

Usually when you get away from the middle of the book, liquidity dries up fast and the prices get further apart. So let's say you send an order for 10000 shares at $100. You're going to get 500 at $4, 100 at $4.01, 200 at $4.02, 1000 at $5.00. Now the next price is 110, but your limit price is 100. So your order will now actually rest partially-filled on the book. So now this is the book:

    [...]
    SELLING 200 shares @ $110.00
    BUYING 8200 shares @ $100.00
    BUYING 100 shares @ $3.99
    [...]

Neat, huh? That was a lot of price movement. So yes, if you can send for enough shares and are willing to pay through a lot of price levels, you can move the price of the stock. Remember Reg NMS though - if more stock exchanges existed in our example, you'd also likely need to go get shares at them if they have a better price than the exchange you just moved the price at.
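
If it helps to see that bookkeeping spelled out, here's a rough Python sketch of the "walking the book" step above (a toy, nothing like a real matching engine; the prices and sizes are just this XYZ example):

    # Toy version of a buy limit order sweeping the ask side of the book.
    # asks are (price, shares), best (cheapest) price first -- the XYZ book above.
    asks = [(4.00, 500), (4.01, 100), (4.02, 200), (5.00, 1000), (110.00, 200)]

    def sweep_buy(limit_price, qty, asks):
        fills, remaining = [], qty
        while remaining > 0 and asks and asks[0][0] <= limit_price:
            price, available = asks[0]
            take = min(remaining, available)
            fills.append((price, take))
            remaining -= take
            if take == available:
                asks.pop(0)                       # this price level is cleared
            else:
                asks[0] = (price, available - take)
        return fills, remaining                   # leftover rests on the bid side

    fills, rest = sweep_buy(100.00, 10000, asks)
    print(fills)  # [(4.0, 500), (4.01, 100), (4.02, 200), (5.0, 1000)]
    print(rest)   # 8200 -> rests on the book as BUYING 8200 shares @ $100.00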

But let's now suppose you're NOT a registered broker-dealer, but are instead Joe A. Schmoe, a client of Charles Schwab Brokerage. You enter your order in your web browser and hit trade. Schwab has a legal obligation to fill your order, if possible, only at the NBBO. They could route your order right to an exchange, but instead, they will send your order to their friend, Citadel, who will have the opportunity to trade against your flow before it gets routed to the stock exchanges. Generally, this is good for you: they might decide your order represents good information and they want your shares. They could decide to fill your order themselves and sell you all 10000 shares you want. They're constrained by the NBBO though, so you get all 10000 shares at $4. For being the source of this order, Citadel pays Schwab some money. Usually practically a pittance, pennies, if that. Order flow is dirt cheap nowadays.

This is called "selling order flow" and lots of people find it scary, because it's not really super intuitive why someone would want to buy or sell the actual flow of orders. But it's actually pretty boring and more about high-level statistics than anything actually interesting to Joe Schmoe, who would get bored when he realized he's not really getting ripped off.

Sorry, I got a bit off-topic. But I love finance, so please forgive me.


> Sorry, I got a bit off-topic. But I love finance, so please forgive me.

You really didn't have to apologize! I thoroughly enjoyed your explanation. I can't recall how many times I've searched Google for "How are stock prices determined?" and come back with nothing. Your answer was better than 100% of everything else out there.

I'd love to learn more about this. Are there any books, blogs, etc. that you could recommend? Also, YOU should really consider blogging about this!


In your example, after the market maker posts the BUY 100 @ $10.00, your book is:

    SELLING 100 shares @ $10.02 
    SELLING 200 shares @ $10.01
    SELLING 100 shares @ $10.00 <--
    BUYING 100 shares @ $10.00  <--
    BUYING 100 shares @ $9.99 
    BUYING 200 shares @ $9.98 
    BUYING 100 shares @ $9.97
Wouldn't the SELL and the BUY @ 10.00 get matched immediately in this case?


Love the detailed explanation. Thanks. Once I saw Boeing at $120. I thought to myself: that's dirt cheap, I should buy. So I hit buy at market and lo and behold I bought for $130. Wait! Whaaa! Did the exchange lie to me? That day I learned a very hard lesson that there are in fact two prices: the ask price and the bid price. I wasn't paying attention.

Now I try to always put in limit orders. I put a sell for Boeing at $150 with the "good till cancelled" option. One morning I woke up to see it had been filled. Woohoo! But the price had dropped down to $140. So I cashed in on the spike.

The market is crazy. I still don't understand it. P/E ratios for some companies are through the roof (100+), so why are people still investing in them like crazy? We don't have a cov2 vaccine, millions of people don't have jobs, so why did the market recover half its losses already? Shopify, Amzn, Zoom. WTF! Their charts seem hyped. Or maybe I'm just plain wrong and don't understand the fundamentals.


> Shopify, Amzn, Zoom. WTF! Their charts seem hyped. Or maybe I'm just plain wrong and don't understand the fundamentals.

Since the outbreak of COVID-19, demand for the kind of services offered by those 3 Internet businesses has in fact skyrocketed. Increasing demand implies those businesses still have room to grow revenue. Shopify [1] for instance is now seeing huge Black Friday-like traffic during shelter-in-place, and a lot of these small businesses are first-timers on their platform who will likely stick around after the pandemic.

1: https://mobile.twitter.com/jmwind/status/1250816681024331777


Also note that Robinhood (and possibly other apps) sell your trading data to HFT firms (in real-time), so market trades are not going to be in your favor


It’s not dissimilar to a commodity like Gold with some extra parameters like number of stocks. Comes down to an open market and buy/sell demand. There are other parts to your example such as depth of order book (orders on the buy sell side) which provide liquidity. So if a low volume stock is at $4 and you offer to buy at $100 you would probably end up with 100/4=25 shares. If was more illiquid you could possibly move the price up depending on order quantity.


I think a lot of questions that people have about economics/finance/money are about the nitty-gritty mechanics, and a lot of answers are instead about the big picture / general laws. That's what leads people to say they can't understand economics.

(Also it's strange that sometimes there is disagreement about the mechanics between actual practitioners, see the recent confusion about whether fractional reserve banking is true.)


This book by Larry Harris helped me understand much better the mechanics and terminology of financial markets.

https://www.amazon.com/Trading-Exchanges-Market-Microstructu...


That's not science.


Quantum spin. Electrons aren't really spinning, right? But why do we call it spin? I know it has something to do with angular momentum. What are the possible values? Is it a magnitude or a vector? Is there a reason we call it "spin" instead of "taste" or some other arbitrary name? How do you change it? What happens to it when particles interact?


If you take some ball of charge and actually spin it, and then place it inside a magnetic field, it moves in a certain way. If you take a single electron and place it in a magnetic field, it moves in the same way as the ball of charge. Ergo, it's natural to call the relevant intrinsic property of electrons "spin".


> Electrons aren't really spinning, right?

Correct.

> But why do we call it spin?

Because it is a physical quantity whose units are those of angular momentum, and we have to call it something.

> What are the possible values?

+/- h/4pi where h is Planck's constant. (It is usually written as h-bar/2, where h-bar is h/2pi.)

> Is it a magnitude or a vector?

It's a vector that always points in a direction corresponding to the orientation of the apparatus you use to measure it.

> Is there a reason we call it "spin" instead of "taste" or some other arbitrary name?

Yes. See above.

> How do you change it?

You can change an electron spin by measuring it along a different axis than the last time you measured it. The result you get will be one of two possible values. You can't control which one you get.

> What happens to it when particles interact?

Their spins become entangled.


> It's a vector

It's not exactly a vector...


> It's not exactly a vector...

That's right. It's not a vector because it doesn't "transform" like a vector.

If you take a vector and rotate about an axis by 360 degrees, you get the same vector.

If you take a spinor and rotate it by 360 degrees you get a spinor which is "flipped". You have to rotate the spinor by 720 degrees to get back to the same spinor.

This is intrinsically weird, but that's QM.
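
If you want to see that sign flip with actual numbers, here's a minimal Python sketch, assuming the standard rotation operator for a spin-1/2 particle about the z axis, R(theta) = diag(e^(-i theta/2), e^(+i theta/2)) (the choice of axis is arbitrary):

    import numpy as np

    def rotate_spinor(spinor, theta):
        # Rotation by angle theta about z, acting on a two-component spinor.
        R = np.diag([np.exp(-1j * theta / 2), np.exp(+1j * theta / 2)])
        return R @ spinor

    up = np.array([1, 0], dtype=complex)        # "spin up" along z
    print(rotate_spinor(up, 2 * np.pi))         # approx [-1, 0]: sign flipped after 360 degrees
    print(rotate_spinor(up, 4 * np.pi))         # approx [+1, 0]: back to the start after 720 degrees
    # An ordinary 3D vector rotated by 360 degrees is already back where it started.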


It only has direction + magnitude right? (±h/4π)e_i for some unit vector e_i.

So it can be written as a vector? No?


[flagged]


> madhadron doens't know what he's talking about.

Not sure why you're saying this when you end up saying pretty much what madhadron said, i.e.: "To be excruciatingly precise, it's a ray in Hilbert space"

I get that madhadron doesn't explain themselves but still... There's some irony here.


Because the difference between a ray and a vector is so minor that the words are effectively interchangeable, and in actual practice the word "vector" is invariably used. No one ever talks about the "quantum state ray." It is always the "quantum state vector."

Also, simply saying "no it isn't" with no further explanation in response to any comment is just obnoxious. It's essentially saying, "You're wrong, and I know why you're wrong, but I'm not going to tell you." It's not constructive, and anyone who does it deserves to be smacked down hard, even if there might be a tiny nugget of truth hiding underneath their oblique self-aggrandizement.


> anyone who does it deserves to be smacked down hard

I was trying to hint that there was something you were overlooking in your explanation. Probably not an effective tactic in this medium.

But thanks to your fine example, I will in future make sure to lay out things in exquisite detail while belittling the person I am replying to.


No, I hadn't overlooked it. I was providing an answer appropriate to the audience and the context. And that answer was actually correct. Not only was your answer not constructive, it was also flat-out wrong.


I actually do. It's a spinor. It doesn't transform the way a vector does.


A spinor is a vector. It's not a vector in 3-D space, but that's true of all quantum state vectors, not just spinors.


The way the OP was using vector implied a 3D Gibbs-Heaviside vector, not an element of an arbitrary vector space.


Hogwash. Here is the full context:

> Is it a magnitude or a vector?

Of those choices, vector is clearly the more correct one.

Here is what a constructive response would have looked like:

"Lisper is correct when he says it's a vector and not a "magnitude" (the more common term is "scalar"), but there is an important subtlety: it is not a vector in 3-D space. Instead it is something called a spinor (https://en.wikipedia.org/wiki/Spinor) which is a kind of vector, but behaves differently than vectors in 3-D space in some important ways. For example, if you rotate a spinor 360 degrees you do not get back the same spinor you started with. You have to rotate a spinor 720 degrees to get back to where you started.

And in fact if you really want to get into the weeds, in the mathematical formalism of QM these are actually things called "rays" in something called a "Hilbert space" but no one cares about that, not even physicists, which is why all physicists refer to these things as "state vectors" rather than "state rays" even though there are some formal differences between vectors and rays. But only people who want to publicly exhibit their superior knowledge (as opposed to engaging in effective pedagogy) would ever bring up such trivial details."


How? Take people's word. If they say vector, assume they just mean some element of a vector space. No need to be rude to somebody based on an assumption.


Any textbook exposing this mathematical formalization with rays? I'm interested by the mathematical aspect as well.


Sure, any intro QM text will cover this. Griffiths is the canonical one.


No, they are not really spinning. However the spin quantum property does make the particle deflect as if it were spinning when it moves through a magnetic field, thus the name.

It is technically a two-component spinor, which is why the direction of the spin 'moves' if you measure it along different x, y, z axes. It is also quantized, unlike a normal vector: all fermions have quantized half-integer spin magnitudes and all bosons have integer magnitudes.

Magnetic fields can be used to change the spin.

When particles interact, opposing spins tend to pair up in each electron orbital which cancels the magnetic field. This is why permanent magnets must have unpaired electron orbitals.


Can you please formally define what a spinor is? Like, mathematically, what are they?


A quaternion is a (more-commonly known?) type of spinor which is often used in 3d graphics in matrix form to perform rotations.

Spinors are difficult to describe in an HN post since they require a good amount of linear algebra, but my favorite explanation is probably here: http://www.weylmann.com/spinor.pdf


Annoyingly I've always struggled with the precise definition of spin, but to my understanding angular momentum is only conserved if you take spin into account as well. Being quantum mechanical there is some ambiguity in the direction, but in principle it is a vector (except it's a superposition of several vectors, and this superposition is also equal to superpositions of different vectors, I think there is usually one 'pure' vector, but that might just be after measurement).


> angular momentum is only conserved if you take spin into account as well.

Do you know an example of a process that moves angular momentum from one kind of spin to the other?


There's an experiment that transfers that angular momentum all the way up to macroscopic levels. By magnetizing a cylinder of iron, all the spins start pointing in the same direction. By conservation of angular momentum, the cylinder itself has to start spinning in the opposite direction. I'm very fond of this experiment, because it magnifies a strange quantum phenomenon to the classical level.

Spin being an intrinsically quantum mechanical concept, I'm afraid the microscopic mechanism by which that transfer occurs will only be explainable in a quantum mechanical context. Here it will appear as a term in the Hamiltonian coupling the spin of an electron to its motion in a potential.

https://en.m.wikipedia.org/wiki/Einstein%E2%80%93de_Haas_eff...


Thinking of it as different 'kinds' of spins isn't quite right.

It's more akin to the direction or axis of the spin being changed, and simply measuring the spin along a certain axis will change it: https://en.wikipedia.org/wiki/Spin_(physics)#Measurement_of_...


The complex exponential term in the equation of the electron is called Zitterbewegung, and it may be interpreted as a spinning of the electron at the speed of light in complex space.


I'm an amateur, but I have been investigating this for myself for a long time and mean to write up a blog post when I have it all settled to my satisfaction. That's not ready yet, but I think I can give you an explanation that is more acceptable than the usual ones (including the other comments here: because it appears to have angular momentum, because it appears to have a two-component complex number that is affected by magnetic fields, etc).

Spin is not an object really 'spinning', but in fact, neither is angular momentum, and momentum isn't really an object 'moving'. Let's be clear about what momentum is in quantum mechanics: there is, say, an electron field, and along a particular path it oscillates as `e^i(Et-px)/hbar`. The momentum determines how frequently the wave oscillates as you move in space; the energy determines how frequently it oscillates in time (the E and p operators pull the E and p scalars out of this, and the relativistic energy-momentum relation says that E^2 - p^2 = m^2; the Schrodinger equation comes from taking the positive solution and expanding it at low momentum...).

Anyway, the point is, momentum means "the wave function oscillates as you change the position". Angular momentum means "the wave function oscillates as you change the angle". The base orbital angular momentum state looks like `e^i m φ`, and the operator that extracts `m` is `- i d_φ`. Etc.
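
A quick symbolic check of that last claim, as a tiny sympy sketch (hbar set to 1): applying -i d/dφ to e^(i m φ) hands back m times the same state, which is what "the operator extracts m" means.

    from sympy import symbols, exp, I, diff, simplify

    phi, m = symbols('phi m', real=True)
    psi = exp(I * m * phi)                        # base orbital angular momentum state
    print(simplify(-I * diff(psi, phi) / psi))    # prints: m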

The other thing about wave functions is that they are continuous everywhere. The presence of a particle is something like "having a knot" in the field -- there is a region where there must be some net object, because if you integrate around the region you see non-zero net flux. That kind of thing. So to have "intrinsic angular momentum of 1/2" means that, if you integrate around a region where a fermion is, you'll see a net rotation of the wave function by half a phase.

Now, that seems nonsensical, because if you integrate around a point, you should get back to the value you started at. And in fact, you do, but the two are distinguishable: the reconciliation for this is related to the fact that SO(3) is not simply-connected; if you produce a path of rotations that takes every vector back to where it started (such as XYZXYZ, where X rotates around the X axis), any path that performs one loop cannot be deformed to the identity path, but one that performs two loops can -- which makes these states physically different. So if you gave me a wave function, I could bucketize all of the points in, say, the 'z' direction into ones that are in the 'identity' element (relative to a point of my choosing) vs the 'anti-identity' element. These have opposite spins.

I am still working on tightening up the model, and I haven't quite figured out how this causes the magnetic field to send such a particle in a different direction. But the rest feels close, to me, and doesn't rely on any hand-waving statements like "because it seems to work this way".


Interesting you mention "knot". What do you think of an utterly unfounded intuition that elementary particles are "knots", actually topological ones, on fields?


I think that's widely understood to be 'sort of true'. Not necessarily a literal knot, but a geometric property - like what I mentioned, where integrating around sees a net divergence/curl/etc in any reference frame. That's the reason that QM doesn't predict exact locations for particles: the discontinuity doesn't have a location, it just exists in a region.

(I don't exactly understand if this 'is' a knot, in a sense. I guess it is.)


Electrons have both angular momentum and a magnetic moment; these have fixed magnitudes and they are always parallel.

The situation is somewhat similar to a classical spinning charged sphere, although this similarity easily breaks down.


Law. How much of it rests on technicalities, and how much do judges care about the essence of the facts of a case? You only hear about the weird outcomes on the news. For example, I was once working with an attorney because my landlord didn’t supply heat in the apartment. I started keeping a temperature log, but how would a judge know that the log was accurate? Do I need to prove that my thermometer is accurate... etc. In practice, no, but how can I better reason about what is “likely” vs not?


I've begun to grok so much about how law _practically_ works by listening to Popehat's All the President's Lawyers podcast.

Basically you get an educated layperson asking a veteran criminal lawyer questions, usually around the First Amendment, always related to current events. The lawyer (Ken White) explains in practical terms what is likely to happen and why.


IANAL, but if your log was introduced in evidence in your suit, you would be attesting under oath it was correct to the best of your knowledge. If it later turned out you faked the data, that would be perjury. So it's the threat of punishment for perjury that motivates honesty in court.


I am a lawyer. In litigation there are two considerations: law and facts. The judge determines questions of law, the jury (or judge in bench trials) determines the facts (often termed the "trier of fact").

How does that work practically? A very simple example:

Let's say someone is sued for the tort of battery. It's the first tort you learn in law school. The mnemonic we learn is "IHOC" which stands for intentional harmful or offensive contact. So we now have what we call the "elements" of the tort, which must be proven individually.

1. Intentional

2. Harmful or offensive

3. Contact

This is where the question of law comes in. What does "intentional" mean? Did I intend to take the action, or did I intend to cause harm? That's a question of law that courts have determined to be that you intended harm. Did I actually have to know the harm that would result? Or maybe I just need to know that it's likely to cause some harm. Questions of law. Another potential question of law -- what does "harm" or "offensive" mean? What if the contact is with my backpack and not my body? These are issues that judges decide as questions of law.

Once we decide on the correct legal standards to apply (which is where lawyers do their most arguing and briefing), then the "trier of fact," generally a jury in the US, determines how the facts of the case fit the law. Did the defendant "intend" the harm? Did the defendant engage in "harmful or offensive" contact? Did the defendant "contact" the plaintiff? The jury decides those things based on a preponderance of the evidence in a civil case, meaning more likely than not.

In your case, you have an issue of evidence. When someone testifies, both sides get to question the witness so that the trier of fact can judge the issue and make determinations of credibility. You can testify that you kept a log of temperatures, and you'll explain how you did it and that you have personal knowledge of what you wrote. Then the other side gets to try and poke holes in your log. Did you actually do it contemporaneously with your observations? Did you write it down the next day? How do you know your thermometer is accurate? Did you open the window just before taking the temperature? Maybe you have a history of lying to people. Then the jury gets to take your testimony into consideration. They can do with it what they like.

I hope that helps!


You may be interested in the idea of legal realism [0]

[0] https://en.wikipedia.org/wiki/Legal_realism


I consider that there are 4 levels of scientific writing.

1) news articles/lay press - basically terrible and typically get things wrong

2) scientific lay press (scientific american, discover, science news) - get things right, but generally no data/citations or nuance

3) journal summaries - get things right, citations and data for everything. Good summary of the latest scientific thought on a topic. Tend to push a point of view, which generally will be right, but that educated people can debate. Don't always show the data, but at least refer to it. These help you to get up to speed with the primary experiments that were used to establish current thinking.

4) first source articles - typically make claims too broad for the actual results. But have all the data. Oftentimes the claims don't follow from the data at all. Generally have to work in the field to understand strengths and weaknesses of methods, and you can't just take the conclusions at face value.

As a PhD student, I used #3 a lot to get centered on a space. To understand 4, I typically had to learn directly from my research advisor or other grad students that specialized in an area.

My point here is that you can find these summary articles in journals (microbiology, immunology, virology, etc.). They are published infrequently so can be hard to find, but they exist and you should look for them.


I was shoulder deep in primary source material (#4) on real-time fuzzy search and wasn’t able to make heads or tails of where everything fit. Until I found a #3 overview that referenced most of the primary sources and put them in context. That paper was worth its weight in gold!

Do you have any tips for how to quickly find these #3 materials in other spaces?


The name for what you and the parent described is a review paper. I can't speak for Math or CS, but in life sciences the following techniques would work:

1. Search X and sort by citation count. High quality review papers get cited A LOT, typically in introduction sections of primary research papers. Alternatively, google "[X] review" or "best review paper on X".

2. Look for review journals. Many fields will have journals who only publish reviews. Nature has several such publications for example.

3. Look for the top journals in the space (start by sorting by impact factor) and see if they have review sections. If they do, try to search those sections. Most journals will reach out to top labs in a space and request that they write a review on a subject if the journal editors feel one is needed.

4. Ask someone in the field. Any researcher should be able to immediately point you to canonical reviews in their space.


What exactly do you mean by journal summaries?

For something between 2 and 4, the best I can come up with would be textbooks or seminars, both being extremely spotty in terms of quality and understandability.

In any case, a big problem you get is a cliff of information content going from 4 down to whatever the next step is. The incentive structure substantially motivates putting out new material, which must have some novel concepts. The focus on novelty and accomplishment leads to quite a mess. People put out half-baked work to be the first to write on a particular subject, which gets citations, which means the next round, also half-baked, is built on a half-baked foundation. When what's most needed in almost all cases is to parse the last generation of literature into something coherent, real, and replicable.


I suppose OP meant "review" or "survey" articles. If that was the case I totally agree.

For the other poster, one nice source is of course the Annual Review journals. arXiv of course too. The bibliographies in undergraduate/beginning graduate textbooks or syllabi are good sources too.


Subreddit /r/askscience does a good job at explaining science in plain words. I usually google "site:reddit.com/r/askscience/ __QUESTION__".

The StackExchange sites have less coverage and answers tend to be more technical.

University websites return reliable answers, but often neither short nor accessible.


Bell's theorem. It somehow proves that quantum physics is incompatible with local hidden variables, but I could never see an understandable explanation (for me at least) of just how it works.


Yudkowsky's explanation[1] is the first one that worked for me. I later found Quantum mysteries for anyone[2] helpful. The latter has less soap-boxing.

1: https://www.lesswrong.com/posts/AnHJX42C6r6deohTG/bell-s-the...

2: https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...


I trust Yudkowsky on many things, but not on that explanation. It's still quite complicated, and a couple of times I miserably failed to reconstruct it over a beer or two. A red flag.

Plus, I'd expect at least one professional (QED) physicist to exist who is able to explain it, and he isn't one. Mermin is, but the explanation is decidedly less clear.

BTW I came here to say Bell's inequality as well. For me it's as baffling as science could ever be.


I started to read the first one but his insistence that Many Worlds is true was too frustrating. Many Worlds Theorem seems specifically useful for saying "the variables aren't hidden because everything before wave function collapse actually plays out in different worlds."

But, we specifically have no way of proving that theory. So now we're back to the essence of the original question - if these things seem random why do we know that they're in fact deterministic without any hidden variables?


Well, I'd recommend reading the whole series. It's not as bad as it sounds. There are so many steps from where you are to appreciating the utter weirdness of Bell's experimental result. Not the weirdness of any theory (or an interpretation, which Many Worlds actually is) but of the basic experimental result.

If you are properly amazed by it, rejecting MWI or any crazy-ish borderline-conspiracy theory seems suddenly a lot harder.

I feel Yudkowsky's whole QM series in fact served to deliver that one post.


Why isn't the MWI another form of hidden variables (a supremely non-parsimonious one at that), where the hidden variable is which of the many worlds you happen to inhabit?


I think you can make an argument for viewing it that way, depending on exactly what you mean by "you".

But IIUC, one of the remarkable things about MWI is that it would be a local hidden variable theory!

This is a very important property to have because the principle of locality is deeply ingrained in the way the Universe behaves. Note that (almost?) no other quantum interpretation is both realist and local at the same time.

Maybe you wonder, how is it possible that MWI can be considered a local hidden variable theory if Bell's theorem precisely shows that local hidden variable theories are not possible?

I think that it was Bell himself who said that the theorem is only valid if you assume that there is only one outcome every time you run the experiment, which is not the case in MWI.

This means that MWI is one of the few (the only?) interpretation we have that can explain how we observe Bell's theorem while still being a local, deterministic, realist, hidden variable theory.


For it to be local (causality does not propagate faster than light), it must be superdeterministic (all the many worlds that ever will be, already are). For it not to be superdeterministic (many worlds decohere at the moment of experimentation), it is also not local (the decoherence happens faster than the speed of light, across the universe).


I'm sorry but I don't follow.

If you take the Bell test experiment where Alice and Bob perform their measurements at approximately the same time but very far apart, I think you and I both agree that when Alice does a measurement and observes an outcome, she will have locally decohered from the world where she observes the other outcome.

But I don't see why the decoherence necessarily has to happen faster than the speed of the light.

It makes sense that even if Alice decoheres from the world where she observes the other outcome, the outcomes of Bob's measurement are still in a superposition with respect to each Alice (and vice-versa).

And that only when Alices' and Bobs' light cones intersect each other will the Alices decohere from the Bobs in such a way that the resulting worlds will observe the expected correlations (due to how they were entangled or maybe even due to the worlds interfering with each other when their light cones intersect, like what happens in general with the wave function).

I admit I'm not an expert in this area, but is this not possible?


An awesome question. That is exactly what I have been wondering without being able to put it into words, and this is the core of why MWI seems completely useless to me as a scientific theory. (As a philosophical one, maybe? But science?)


To be clear, I don't reject Many Worlds at all and in fact consider it a promising candidate due to it sort of "falling out" of the Schrodinger equation taken literally, unless you add complexity.

But the fact remains that it is impossible to prove and it is conveniently well equipped to handle this situation. I'd prefer an argument that presupposes the Copenhagen interpretation as that is when my intuition fails.


>But the fact remains that it is impossible to prove and it is conveniently well equipped to handle this situation. I'd prefer an argument that presupposes the Copenhagen interpretation as that is when my intuition fails.

Is that not like trying to get a better intuition for planetary movement by using an epicycle-based model? The fact that the interpretation is conveniently shaped in a way that a paradox isn't an issue is not a coincidental thing that should be overlooked in the spirit of fairness to alternative interpretations. Regardless, I think my post below is useful for answering your want.

>So now we're back to the essence of the original question - if these things seem random why do we know that they're in fact deterministic without any hidden variables?

The world is only deterministic under Many-Worlds, and it's deterministic in the sense of "each outcome happens (mostly) separately". It doesn't make any sense to try to make sense of the "deterministic" part separately from MWI. MWI is the only deterministic QM theory (unless you're going to consider "superdeterminism", but there's nothing concrete to that interpretation besides "what if there existed a way that we had QM+determinism but not MWI". There's no basis to it, besides a yearning from people that like the abstract idea of determinism and don't like the abstract idea of MWI).

EPR doesn't tell us that the world is deterministic. It tells us that local hidden variable interpretations (where experiments have a single outcome) of QM can't work, because it shows that a measurement on a particle can appear to you to affect the measurement made by someone else on a distant particle. The Copenhagen interpretation response to this is that the wave function collapse must be faster than light. Therefore, the Copenhagen interpretation is not a "local" theory. (The Copenhagen interpretation doesn't give us any answer for who we should expect to trigger this wave function collapse first when two measurements are taken simultaneously at a distance though.)


If experimenters disprove Many Worlds, they've also disproved Copenhagen. These are exactly the same equations after all.

Theoreticians choose very different mindsets about the same equations, which (they say) somehow gives them grounds to form various new hypotheses. As far as I know neither approach has been very fruitful so far in terms of new science, so people try a multitude of others.

What I meant to say above is that I have much trouble using Copenhagen to understand Bell's experiment. MWI fits the bill here for me.


Perhaps you can try the following article, as it was the simplest and clearest explanation I've found anywhere:

https://www.physics.wisc.edu/undergrads/courses/spring2016/4...

AFAICS it was published in the American Journal of Physics in 1981 but it's addressed to the general reader. It requires no knowledge of quantum physics.


MinutePhysics and 3Blue1Brown did a two-part collab that simplifies some of it down to a counting problem that'd be suitable for grade school.

https://www.youtube.com/watch?v=zcqZHYo7ONs

https://www.youtube.com/watch?v=MzRCDLre1b4


I have seen the example with the polarized lenses in a few places, but they don't explain why (imo) the simplest explanation does not apply: namely, that the lens itself might disturb the phase of the light, which would then mean it can pass through the next lens.


But if you take away the third lens, there is no light of any polarization. How is it that by adding a filter, you create light where there was none?


Only if you take away the middle lens, not the third.

Here's what I would have thought happens: After the first lens, you get polarized light, 90deg offset from the last lens, so no light passes. Then you introduce a 3rd lens in the middle, 45deg offset. This could alter the polarization (maybe it widens the band, or introduces some greater variance, or shifts it, who knows), and this is why now some light will pass through number 3. No need to create any light.


If it is true that placing the 45 degree lens third or first does not show the same effect, it is much less astonishing.


The idea is that polarization is only one of many places where the effect is observed.


The short answer is this. Suppose thread OP gave you and me a box each. Each box has some knobs to choose some settings, and if one chooses a setting and presses the GO button, a number pops up on the screen. We go far away from each other so that, due to the finite speed of light, it takes quite a while for any information to travel between the boxes. We do this because we don't want the boxes to communicate live, even though thread OP might have recorded the same data into the memory drive of both boxes.

Then we choose some settings and press GO and record whatever number pops up. We do this many times so we each have a nice frequency chart. Now Bell proved that if you live in a local hidden variable universe, the correlations between these numbers are upper bounded, no matter how you choose settings on the boxes. Then, he also gave a prescription for choosing the settings, such that if you live in a quantum universe, the correlations between these numbers will be higher than the upper bound.

The rest is mathematics, which cannot really be simplified without leaving the reader unsatisfied.


So what I don't understand is, if both of us take an identical Gemalto token / Yubikey, we can be light years apart and get the same sequences, no? Is the explanation that these will have one type of distribution, whereas if they had real "spooky action at a distance" they would have a clearly different distribution? EDIT: what about this one: https://arxiv.org/abs/quant-ph/0301059


The distance is part of the proof, but not really part of the mystery.

I'm going to throw out an analogy that gets at what's observed and why it's surprising, but doesn't relate to the physics of spin, momentum, position or anything that's actually under observation in these experiments.

It's as if we have a pair of dice, and I throw my die and you throw your die many times. In a classical world, if I throw a three, it has no influence on what you throw; you're equally likely to throw 1-6. But in the quantum world it's as if when I throw a one, your die still has the expected uniform distribution, but when I happen to throw a three, you're a little bit more likely to throw a three. Your die is fair if I happen to roll a one, but it's weighted if I happen to throw a three.

Back in the real world, this is the strange behavior that is observed in experiment. Schroedinger's equation predicts the probabilities perfectly. But Bell shows that it's far from intuitive.


Here's the thought experiment that sold me on how weird this stuff is.

Imagine explorers on Mars find the ruins of an ancient alien civilization. In those ruins they find several small devices that have three buttons. Beside each button are two colored lights, red and blue. Above the buttons is a display. The linguistics team figured out enough alien writing to tell that the buttons are labeled with the aliens' equivalent of A, B, and C, and that the display is a numerical display that goes from 0 to 38413 displayed in base 14 (which fits with other evidence found that the aliens have two hands with 7 fingers).

There is also some kind of docking station, which can hold two of the devices, and has a single button.

If two of the devices are placed in the docking station and its button is pressed, all the lights briefly flash on the devices, and the counter resets to 0. The lights stay on until the device is removed from the dock. Nothing happens if only one device is placed in the dock.

To try to figure out what these devices do, pairs are placed in the dock, reset, and then given to a couple people who go off and press the device buttons are record what happens.

Here is what those people observe.

1. If they press one of the buttons (A, B, or C), exactly one of the two lights next to that button comes on. When the button is released, the light goes out, and the counter goes up by 1, until it reaches 38413. After the next press/release, the counter goes blank and the device is unresponsive until reset again in the dock.

2. As far as anyone can tell, there is no pattern to which light lights. It acts as if pressing a button consults a perfect true unbiased uniformly distributed random bit generator to decide between red and blue.

3. When they compare their results with those of the person who had the box that was their box's dock mate for reset, they find that if on each person's n'th press

-- if they both pressed A, or both pressed B, or both pressed C, they got the same color light.

-- if one of them pressed B, and the other pressed either A or C, they got the same color light 85.36% of the time.

-- if one of them pressed A and the other pressed C, they got the same color light 50% of the time.

4. These results do not depend on the timing between the two people's presses. Those correlations are the same if the people happen to make their n'th press at the same time, or at wildly different times. Even if one person goes through all their presses before the other even starts, their n'th presses exhibit the above correlations.

5. These results do not depend on the distance between the boxes. If a box pair is split up, with one person taking theirs back to Earth while the other remains on Mars, and the two then run through all their presses at nearly the same time, completing quickly enough that there can be no communication between the two boxes during the run due to speed of light limits, they still exhibit the correlations.

Challenge: try to figure out how such boxes could be built without using quantum entanglement. Assume the aliens have nearly unlimited storage technology, so you can include ridiculously large tables if you want, so you can even propose solutions that involve the dock preloading the responses for every possible sequence of presses (all 3^38414 of them). Anything goes as long as it produces the right correlations, and does not involve quantum entanglement.
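
In case it helps, here's the standard counting argument as a small Python sketch, under the usual simplifying assumption that (because equal buttons always agree) each box pair carries a shared, predetermined red/blue answer for A, B and C on each press. For any such answer sheet, A and C can only disagree if A/B or B/C also disagree, so no mixture of answer sheets can give 50% disagreement from two 14.64% disagreements. Reading the 85.36% as cos^2 of a 22.5 degree angle difference is my assumption, not something stated in the story.

    import itertools, math

    # Every shared, predetermined answer sheet for buttons (A, B, C):
    for a, b, c in itertools.product('RB', repeat=3):
        disagree_ab = int(a != b)
        disagree_bc = int(b != c)
        disagree_ac = int(a != c)
        # Triangle inequality: A and C can only disagree if A/B or B/C do.
        assert disagree_ac <= disagree_ab + disagree_bc

    # So any mixture of answer sheets obeys
    #   P(A,C disagree) <= P(A,B disagree) + P(B,C disagree),
    # but the boxes show 0.50 on the left and only 0.1464 + 0.1464 on the right.

    # The observed rates do match P(same) = cos^2(angle difference) with
    # A, B, C read as measurement angles 0, 22.5 and 45 degrees:
    for delta in (0.0, 22.5, 45.0):
        print(delta, round(math.cos(math.radians(delta)) ** 2, 4))   # 1.0, 0.8536, 0.5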


I recommend this book:

https://www.amazon.com/Quantum-Non-Locality-Relativity-Metap...

If you don't want to read a whole book then I recommend this article:

https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...

but the book will give you a much deeper understanding.


I can recommend Ball's _Beyond Weird_ for the best explanation targeting a lay audience that I've read.


Abiogenesis. I understand that this is not considered understood but I'd be interested in hearing a qualified scientist talk about how we get from "dumb" matter to self-replicating, goal-driven matter. The closest I've ever heard anyone get (in a personal conversation) is that chemistry is about transformation, so "dumb" matter isn't really dumb in the sense that it's static. Still lots of hand waving to get from baking soda and vinegar volcanoes to me typing this question, however.

What's the playing field look like for proto-life? How "smart" are the simplest molecular interactions? What does almost-replication look like? Could we use a computational model for this?

Not sure how much of this is known, but I'd love to hear an expert paint a picture of their mental model of the subject.


This question interests me as well, and I've done some thinking about it over the years. In particular, what I'm interested in is a hypothetically simplest object that can reproduce in a solution of simpler components, along with some differentiating characteristic. For example, maybe you have toroids that pick up particles, grow the torus until it's too big, and then split - and the ends of both halves click together, forming a total of two toroids. Another characteristic that very simple life must have (I believe) is some level of circularity in the sense that the element is an "accumulation of experience" - we might say a reduction of its environment. In the same way Schrödinger was interested in life thermodynamically[1] I am interested in speculating about the simplest possible mechanisms in the beginning. (NB I'd expect none of these very simple machines to survive to present day - in fact, I'd imagine there to be several generations of early life, each obliterating/consuming/sublimating the ones before.)

1 - https://en.wikipedia.org/wiki/What_Is_Life%3F


> NB I'd expect none of these very simple machines to survive to present day - in fact, I'd imagine there to be several generations of early life, each obliterating/consuming/sublimating the ones before.

That's another really interesting axis of this topic -- why was early Earth special?

There may be no environments on Earth today that are like early Earth, but we could probably recreate them. Wouldn't we then be able to witness abiogenesis?

Or, early Earth wasn't special and abiogenesis happens today. If so, where do we look?


>Wouldn't we then be able to witness abiogenesis?

It may happen quite infrequently, and only because it happened in a large, lifeless (but 'nutrient'-rich) bath did it have a chance to amplify. What's interesting to me is that it only has to happen once.

Yes, Earth's specialness is interesting, too, and counts for what I believe are the best reasons to believe in God. Earth has so many amazing qualities: it is a cozy distance from the Sun (temp), tilted quite a bit (seasons), with a molten core (cosmic ray protection) and a huge moon (tides, nocturnal light). All of these may be necessary conditions for life to arise, and they are all, as far as we know, quite rare individually, and astronomically unlikely in combination.


Did you read Nick Lane's "The Vital Question: Why Is Life The Way It Is?" It's all about this and it's great. Also take a look at https://www.quantamagazine.org/first-support-for-a-physics-t...


On Nick Lane's "The Vital Question":

I gifted myself The Vital Question in 2015 December. While Lane writes effectively without any mind-numbing jargon, the book still has quite a bit of technical chemistry (understandably). After the excellent first 80 pages, it took me a lot more will power to plough through. (I paused at page 112 to get back later.)

Once when I was reading the book on a plane, a seasoned biologist happened to be sitting next to me. When I told him that it was the first book of Nick Lane that I had picked up, he said: "I'd suggest you pick up Lane's other book, Life Ascending, and only then get back to The Vital Question."

PS: FWIW, I've previously mentioned the above in an older thread, where an ex-biochemist chimed in to confirm the above advice: https://news.ycombinator.com/item?id=18714115


Not an expert, but the Wikipedia article on it is quite good on the various schools of thought.

Edit - I just lost 20 mins reading the start of https://en.wikipedia.org/wiki/RNA_world which is interesting on that stuff


Fourier Transforms. I wish I had an intuitive understanding of how they work. Until then I'm stuck with just believing that the magic works out.


The best way to understand the Fourier transformations is to think of them as change-of-basis operations, like we do in linear algebra. Specifically a change from the "time basis" (normal functions) to the "frequency basis" (consisting of a family of orthonormal functions).

Here is the chapter on Fourier transforms from my linear algebra book that goes into more details: https://minireference.com/static/excerpts/fourier_transforma...

As for the math, there really is no other way to convince yourself that sin(x) and sin(2x) are orthogonal with respect to the product int(f,g,[0,2pi]) other than to try it out https://live.sympy.org/?evaluate=integrate(%20sin(x)*sin(2*x... Try also with sin(3x) etc. and cos(n*x) etc.
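
If you'd rather run those checks locally than through the live link, here's the same thing as a tiny sympy sketch (nothing beyond what the link computes):

    from sympy import sin, cos, symbols, integrate, pi

    x = symbols('x')
    print(integrate(sin(x) * sin(2*x), (x, 0, 2*pi)))    # 0   -> orthogonal
    print(integrate(sin(x) * cos(3*x), (x, 0, 2*pi)))    # 0   -> orthogonal
    print(integrate(sin(2*x) * sin(2*x), (x, 0, 2*pi)))  # pi  -> the norm squared, not zero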


> As for the math, there really is no other way to convince yourself that sin(x) and sin(2x) are orthogonal with respect to the product int(f,g,[0,2pi]) other than to try it out https://live.sympy.org/?evaluate=integrate(%20sin(x)*sin(2*x.... Try also with sin(3x) etc. and cos(n*x) etc.

I disagree with that. It's pretty easy to prove it in general by calculating \int_0^{2\pi} sin(mx)sin(nx) dx etc. for m ≠ n.


I would count an analytic solution as in the "trying out" category (actually the best kind of trying out!).

The "no other way..." was referring to me not having an intuitive explanation to offer about why an sin(x) and sin(2x) are orthogonal.


Sure but the hard part to understand and accept is that ANY squiggle can be represented by a weighted sum of sinusoids...I mean, that's really an amazing insight, and I don't think it's obvious even after-the-fact. Forget about the details of computing coefficients - just the fact that it works at all remains counter-intuitive to me. (A neat visualization would ask the user to wiggle their mouse, producing a sparkline of some finite length, and then, in real-time, update a frequency domain representation of the motion, perhaps represented as a bunch of connected circles that rotate steadily but at different rates to produce an equivalent graph)
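
You can at least watch it happen numerically. A rough numpy sketch (my own toy, not the mouse-wiggle visualization described above): make an arbitrary squiggle, take its FFT, then rebuild the squiggle by literally summing the cosines the FFT hands back.

    import numpy as np

    rng = np.random.default_rng(0)
    squiggle = np.cumsum(rng.standard_normal(256))     # an arbitrary wiggly line
    n = len(squiggle)
    coeffs = np.fft.rfft(squiggle)                     # one complex weight per frequency
    t = np.arange(n)

    rebuilt = np.zeros(n)
    for k, c in enumerate(coeffs):
        scale = 1.0 if k in (0, n // 2) else 2.0       # one-sided spectrum bookkeeping
        rebuilt += scale * np.abs(c) / n * np.cos(2 * np.pi * k * t / n + np.angle(c))

    print(np.allclose(rebuilt, squiggle))              # True: the sum of sinusoids IS the squiggle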


Some quite careful math for Fourier series is in Rudin, Principles of Mathematical Analysis. Fourier series applies to a function on the whole real line that is periodic, that is, repeats exactly once each some number of seconds. For the math, need only one period so in effect throw away the rest of the function and, indeed, really need the function defined only on the interval of some one period.

For some intuition, consider music, especially on a violin. Fourier series applies to a periodic function (wave), and represents the whole wave as sine waves that fit the one period exactly. So, get sine waves at 1, 2, 3, ... times the fundamental frequency of the wave. In music, these waves are called overtones.

Playing with a violin, the overtones are fully real and even important! E.g., get a tuning fork and tune the A string (second from the right as the violinist sees them) to 440 cycles per second (440 Hertz, 440 Hz). Then the D string, the next to the left, is supposed to have frequency 2/3rds that of the A string. So, bow the two strings together and listen for the pitch 880 Hz, that is, 3 times the desired frequency of the D string and twice that of the A string. So are listening to the second overtone of the D string and the first overtone of the A string; are hearing the third Fourier series term of the D string and the second Fourier series term of the A string. Adjust the tuning peg of the D string until don't hear beats. If the D string is at, say, 881 Hz, then will get 1 beat a second -- so this is an accurate method of tuning. Similarly for tuning the E string from the A string and the G string from the D string -- on a violin, the frequencies of adjacent strings are in the ratio of 3:2, that is, a perfect fifth. That's how violinists tune their violin -- which is needed often since violins are just wood and glue and less stable than, say, the cast iron frame of a piano.

For one more, hold a finger lightly against a string at 1/2 the length of the string and hear a note one octave, twice the frequency, higher. That's often done in the music, e.g., playing harmonics. And it's a good way to get the left hand where it belongs at the start of the famous Bach Preludio in E-major that starts on the E half way up the E string. Lightly touch one third of the way up the string and get three times the fundamental frequency, sometimes done in music to give a special tone color. Net, Fourier series, harmonics, and overtones are real everyday for violinists.

E.g., on a piano, hold down a key and then play and release the key one octave lower and notice that the strings of the key held down still vibrate. The key vibrating was stimulated by the first overtone of the key struck and released.

The Fourier integral applies to functions on the whole real line. Very careful math is in Rudin, Real and Complex Analysis.

Yes, Fourier series and integrals can be looked at as all about perpendicular projections of rank 1 as emphasized in Halmos, Finite Dimensional Vector Spaces, written in 1942 when Halmos was an assistant to John von Neumann at the Institute for Advanced Study. That Halmos book is a finite dimensional (linear algebra) introduction to Hilbert space apparently at least partly due to von Neumann. So, right, Fourier theory can be done in Hilbert space.

Fourier integrals and series are very close both intuitively and mathematically, one often an approximation to the other. E.g., if you multiply in one (time, frequency) domain, then you convolve in the other (frequency, time) domain. E.g., take a function on the whole real line, call it a box, that is 0 everywhere but 1 on, say, [-1,1]. Well, the Fourier transform of the box is a sinc function -- a central lobe at 0 with ripples that decay away from 0. A convolution is just a moving weighted average, usually a smoothing. Then, given a function on the whole real line, regard that line as the time domain and multiply by the box. Now you can regard the result as one period, under the box, of a periodic function to which you can apply Fourier series. And in the frequency domain, the Fourier transform of the product is the Fourier transform of the function smoothed by (convolved with) the Fourier transform of the box, and, sampled at the harmonic frequencies, that is an approximation of the Fourier series coefficients of a periodic function whose one period is the part under the box. Nice. That is partly why the fast Fourier transform algorithm is presented as applying both to Fourier series and the Fourier transform.

Mostly Fourier theory is done with an L^2, that is, finite square integral, assumption, but somewhere in my grad school notes I have some of the theory with just an L^1 assumption. Nice notes!

Essentially the Fourier transform of a Gaussian bell curve is a Gaussian bell curve -- if the curve is wide in one domain, then it is narrow in the other.

The uncertainty principle in quantum mechanics is essentially the uncertainty inequality from Fourier theory: a function and its Fourier transform cannot both be sharply concentrated (the usual proof leans on Plancherel's theorem).

Can do a lot with Fourier theory just with little pictures such as for that box -- can get a lot of intuition for what is actually correct.

I got all wound up with this Fourier stuff when working on US Navy sonar signal processing.

Then at one point I moved on to power spectral analysis of wave forms, signals, sample paths of stochastic processes, as in

Blackman and Tukey, The Measurement of Power Spectra: From the Point of View of Communications Engineering.

Can get more on the relevant wave forms, signals, stochastic processes from

Athanasios Papoulis, Probability, Random Variables, and Stochastic Processes, ISBN 07-048448-1.

with more on the math of the relevant stochastic processes in a chapter of

J. L. Doob, Stochastic Processes.

Doob was long a leader in stochastic processes in the US and the professor of Halmos.

At one time a hot area for applications of Fourier theory and the fast Fourier transform was looking for oil, that is, mapping underground layers, as in

Enders A. Robinson, Multichannel Time Series Analysis with Digital Computer Programs.

Briefly, antenna theory depends deeply on Fourier theory, which is what makes beam forming, etc., possible.

Can also see

Ron Bracewell, The Fourier Transform and its Applications.

Of course, one application is to holography. So, that's why you can cut a hologram in half and still get the whole image, except with less resolution: The cutting in half is like applying that box, and the resulting Fourier transform is just the same as before except smoothed some by the Fourier transform of the box.

As I recall, in

David R. Brillinger, Time Series Analysis: Data Analysis and Theory, Expanded Edition, ISBN 0-8162-1150-7,

every time-invariant linear system (maybe with some meager additional assumptions) has sine waves as eigenvectors. That is, feed in a sine wave and you will get out a sine wave with the same frequency but maybe with amplitude and phase adjusted.

So, in a concert hall, the orchestra plays, and up in the cheap seats what you hear is the wave filtered by a convolution, that is, with the amplitudes and phases of the Fourier transform of the signal adjusted by the characteristics of the concert hall.

In particular, the usual audio tone controls are essentially just such adjustments of Fourier transform amplitudes and phases.

Since there are a lot of systems that are time-invariant and linear or nearly so, there is no shortage of applications of Fourier theory.
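
For anyone who wants to see the "sine in, same-frequency sine out" claim with their own eyes, here is a rough numerical sketch (Python/numpy; the 5-tap filter is just an arbitrary stand-in for a time-invariant linear system):

    import numpy as np

    fs, f0, N = 1000.0, 50.0, 2000              # sample rate (Hz), test frequency, samples
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * f0 * t)              # input: a pure 50 Hz sine

    h = np.array([0.2, 0.5, 0.2, -0.1, 0.05])   # an arbitrary LTI (FIR) system
    y = np.convolve(x, h, mode="same")          # run the sine through the system

    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    k = int(round(f0 * N / fs))                 # DFT bin of the input frequency
    H = Y[k] / X[k]                             # the system's response at 50 Hz
    print(abs(H), np.angle(H))                  # output: the same sine, rescaled and phase-shifted
    # Apart from small edge effects of the finite signal, no new frequencies appear.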


Thank you, that's very helpful!


3blue1brown has a great video on the topic: https://youtu.be/spUNpyF58BY


I have to say that video completely changed the level of my understanding about it. Especially the part that gives a visual, intuitive understanding of why the imaginary terms come from integrating the signal as it is wrapped around at each frequency. Well worth watching.


This really depends on the level of math you're expecting for your intuition, but for me it really clicked when I understood it in terms of linear algebra.

A function is like a vector, but instead of having two or three dimensions you have a continuum of them. Adding functions component-wise works just like adding vectors.

Just like regular vectors, you can choose to represent functions in a different basis. So you choose a family of other functions (call it a basis) that's big enough to represent any function you want. For a lot of reasons [1, 2], a very good choice is the set of complex exponentials g_w(x) = exp(2πiwx), for every real w. It's an infinite family, but that's what you need to deal with the diversity of functions that exist.

So you try to find the linear combination of exponentials that sum to your original function. You need a coefficient for each w, so call it c(w) for simplicity. After fixing the basis, the coefficients really have all the information to describe your function. They're an important object, and we call c(w) the Fourier transform.

How do you find the coefficients? Just project your original function onto a particular exp(2πiwx), that is, take the inner product. Usually the inner product is the sum of the products of components. Since functions have a continuum of components, you use an integral instead of a sum. This is your formula for the Fourier transform.

I know there are technical conditions I am glossing over, but this is the intuition of it for me.

[1] There is an intuition for these exponentials. Complex exponentials are periodic functions, so you are decomposing a function into its constituent frequencies. You could also separate the exponential into a sin and cos, and you will obtain other common formulas for the Fourier transform.

[2] Exponentials are like "eigenvectors" to the derivative operation (taking the derivative is just multiplying by a constant), so they're really useful in differential equations as well.
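
A rough numerical version of the "coefficient = inner product" idea, with a signal deliberately built from two known basis functions so you can see the projections recover the right weights (Python/numpy; the grid size and test frequencies are arbitrary choices):

    import numpy as np

    # Work on [0, 1) with a fine grid so integrals become sums times dx.
    N = 4096
    x, dx = np.linspace(0.0, 1.0, N, endpoint=False), 1.0 / N

    def basis(w):                        # the basis function g_w(x) = exp(2*pi*i*w*x)
        return np.exp(2j * np.pi * w * x)

    # A signal deliberately built from two basis functions with known weights.
    f = 3.0 * basis(5) + (0.5 - 2.0j) * basis(12)

    def coefficient(f, w):               # c(w) = <f, g_w> = integral of f * conj(g_w)
        return np.sum(f * np.conj(basis(w))) * dx

    print(coefficient(f, 5))             # ~ 3+0j
    print(coefficient(f, 12))            # ~ 0.5-2j
    print(coefficient(f, 7))             # ~ 0: f has no component at w = 7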


What's the difference between the coefficients of the Fourier basis and the weights of a neural network? Both are ways to approximate functions, aren't they?


The difference is the basis that is chosen. Fourier uses sin and cos as a basis (or equivalently complex exponentials). You can choose other bases and get wavelets, or Hermite functions, or any other particular independent functions.

Weights on neural networks don't have to be independent functions.

Independence gives you a set of mathematical guarantees that ensure you fully cover the space you're representing. For example, in a 2-dimensional space, X and Y point in different directions. If they pointed in the same direction you could not fully decompose all vectors on the plane into two coefficients of X and Y.


Every analytical function, like f(x)=x^2-log(x+1), or signal, like a radio signal, can be rewritten as an infinite sum of sines and cosines. The Fourier transform breaks out these components for you.


It's magic. But here's a fun interactive explanation: http://www.jezzamon.com/fourier/


There are two types of Fourier magic.

1. The magical orthogonal basis functions: complex sinusoids. Shifting a time signal just multiplies each Fourier coefficient by a phase (proportional to the frequency it represents). Thus transforming to the Fourier basis enables an alternate method of implementing a lot of linear operations (like convolution, i.e. filtering).

2. The magic of the fast implementation of the Discrete Fourier Transform (DFT) as the Fast Fourier Transform (FFT) makes the above alternate method faster. It can be most easily understood by a programmer as a clever reuse of intermediate results from inner loops. The FFT is O(N log N), a direct DFT transform would be O(N^2)

A mathy demonstration of this at https://sourceforge.net/projects/kissfft/
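
A quick way to see both kinds of magic at once (a sketch in Python/numpy, unrelated to kissfft; the signal and filter are random placeholders): do a filtering operation directly in the time domain, then again by multiplying in the Fourier basis, and check they agree. The second route is the one that benefits from the O(N log N) FFT.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(4096)            # a signal (random placeholder)
    h = rng.standard_normal(128)             # a filter kernel (random placeholder)

    # Direct time-domain convolution: O(N*M) multiply-adds.
    direct = np.convolve(x, h)

    # Fourier route: transform, multiply pointwise, transform back: O(N log N).
    L = len(x) + len(h) - 1                  # length of the full linear convolution
    via_fft = np.fft.irfft(np.fft.rfft(x, L) * np.fft.rfft(h, L), L)

    print(np.max(np.abs(direct - via_fft)))  # tiny: the two routes agree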


I had a professor describe a FT as a "dot product of a function against the real signal space". Thus a FT is valued higher at frequencies where the input signal is more "similar" or "in line" with that frequency. Conversely, the FT is zero where there are none of those frequencies in the input signal.

If this helps, then it can also help with understanding other projections such as the Laplace transform (a dot product against the complex signal space).

While this analogy has helped me, I still have no clue why real valued signals result in an even FT.

edit: grammar


The intuition of Fourier transforms is that any continuous, repeating waveform can be recreated by adding harmonics of a sine waveform, e.g. a square wave, IIRC, is the sum of the odd harmonics.

The cool thing about this insight is that the converse is true. You can disaggregate any waveform into its additive harmonics. This means you can jam multiple signals into a single channel (e.g. a fibre optic cable) and then apply a Fourier transform at the other end to "untangle" them.


Ok, my time

Any signal can be represented as a value at each time, x(0) = 1, x(1) = 2 .. x(100) = 5 etc. We can visualize this as you shouting 1 at time 0, 2 at time 1 and 5 at time 100. Alternatively we can do the same with a larger number of persons.

Representation using dirac delta

--------------------------------------

Let's say that you have 100 persons at your disposal. You ask the first person to shout 1 at time 0, the second person to shout 2 at time 1, and the last person to shout 5 at time 100. At other times they will be silent. So with these 100 people you can represent the signal X. We call each of these persons a basis element. Mathematically they are delta functions of time, i.e. they get activated only at their specified time. At other times they are silent, i.e. 0. The advantage of this representation is that you have fine control over the signal. If you want to modify the value at time=5, you can just inform the 5th guy.

Introduction to bases

--------------------------

The Dirac deltas are not the only basis. You can ask multiple guys to shout at multiple times. They can even say negative numbers. All you have to ensure is that they add up to the value of X. The guys should be able to produce any value that can appear in X. We name this property "span".

Instead of 100 guys, we could have 200 guys too, i.e. 2 guys for each time, each telling half of the original value. However, this is wasteful since you have to pay for extra guys for no benefit. Hence we say that the basis should be orthogonal, i.e. no guy's pattern should be correlated with any other's in the group. So once we have uncorrelated and spanning guys, we can represent any signal using them.

Fourier transform

--------------------------

In the case of the Fourier transform, each guy shouts according to a sinusoidal wave, let's say a sine wave. I.e., guy 1 at time t shouts the value of sin(f0 t), the second guy shouts the value of sin(f1 t), and so on. The f0, f1 etc. are the frequencies for each guy. Now it comes out that these guys are orthogonal to each other, and they can span all the signals. Thus we have the Fourier transform. Hence, instead of representing a signal as a value at each time step, we can represent it as a value at each frequency.

Why Fourier transform

-------------------------

We have seen that as long as bases span and are orthogonal, they can define a transformation. But why is the Fourier transform so famous? This comes from the systems we use. The most common systems we use are LTI (linear time invariant) systems. A property of these systems is that they act simply on sinusoidal waves: if a sinusoidal wave of frequency f is passed through an LTI system, all the system can do is multiply it by a scalar. Any other wave will have a more complex effect. Hence, if we can represent signals as a sum of sinusoids, we can represent our system as just an amplifier at each frequency. This turns the whole of system analysis into a set of linear equations, which we are good at solving. So we love the Fourier transform.
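
A small numerical sketch of the two properties this story leans on - the sinusoid "guys" are mutually uncorrelated (orthogonal), and together they span, i.e. their weighted shouts can rebuild any signal exactly (Python/numpy, using the DFT frequencies as the guys):

    import numpy as np

    N = 64
    n = np.arange(N)
    # "Guy" k shouts the complex sinusoid exp(2*pi*i*k*n/N) at time n.
    guys = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)

    # Orthogonality: inner products between different guys are zero.
    gram = guys @ guys.conj().T
    print(np.allclose(gram, N * np.eye(N)))     # True: orthogonal, each with norm sqrt(N)

    # Span: any signal is an exact weighted sum of the guys.
    x = np.random.default_rng(2).standard_normal(N)
    weights = guys.conj() @ x / N               # project x onto each guy (this is just the DFT / N)
    print(np.allclose(weights @ guys, x))       # True: the weighted shouts rebuild x exactly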


Systems == electrical + electronic systems


• Magnetism. There are plenty of videos out there calling it the result of a relativistic charge imbalance. But I've never been able to apply this point of view to practical use cases, like understanding how permanent magnets work or how increasing the number of windings in inductors boosts the magnetic field strength. There were more situations where I tried to put this POV to use, but I can't remember them off the top of my head.

• Qualia. What is this subjective experience that I know as consciousness? I've gone through Wiki, SEP and a fair number of books on philosophy and a few on neuroscience but I still don't understand what it is that I experience as the color "red" when in reality it's just a bunch of electric fields (photons). Why can't I get the same experience — i.e., color — when I look at UV or IR photons? These too are the very same electric fields as the red, blue, green I see all the time.

• Photographic composition. I'm a designer. I know the rules. I use them. But only empirically. I just do not understand them at a neuroscientific level. Why does rule-of-thirds feel pleasing? Is the golden ratio bullshit? My gut says yes but I'm unable to come up with a watertight rebuttal. Why do anamorphic ultra-widescreen shots feel so dramatic/cinematic? Yet to see an online exposition on the fundamental reasons underlying the experience. Any questions to artists are deflected with the standard "It's art, not science" reply.

• Wave-Particle duality. "It's a probability wave that determines when a particle will pop into existence out of nothingness." okay, where exactly does this particle come from? If enough energy accumulates in a region of empty space, a particle pops into existence? What is this "energy"? What is it made of? What even is an electron, really? I've followed quite a few rabbit holes and come out none the wiser for it.

• Convolution. It's disappointing how little I understand it given how wide its applications are. Convolution of two gaussians is a gaussian? Convolution in time domain is multiplication in frequency domain and vice-versa? How do these come out of the definition which is "convolution is sliding a flipped kernel over a signal"?


RE: Convolution

I think this was a pretty neat explanation:

https://sites.google.com/site/butwhymath/m/convolution

The problem with convolutions, like many things in science, is that how you learn them depends on what you're studying. Same theory, but with N different explanations, which can cause confusion if some of them are very different and tough to connect (e.g. learning convolutions in a physics class vs learning them in a statistics class).
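
On the specific "convolution of two Gaussians is a Gaussian" point from the parent: you can check it numerically straight from the sliding-kernel definition, and the variances simply add (a rough sketch in Python/numpy, with the grid chosen wide enough that the tails don't matter):

    import numpy as np

    t, dt = np.linspace(-20.0, 20.0, 8001, retstep=True)

    def gaussian(t, sigma):                     # unit-area Gaussian
        return np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

    g1, g2 = gaussian(t, 1.0), gaussian(t, 2.0)

    # "Slide a flipped kernel over the signal": direct numerical convolution.
    conv = np.convolve(g1, g2, mode="same") * dt

    # The result is again a Gaussian, with the variances adding: 1^2 + 2^2 = 5.
    print(np.max(np.abs(conv - gaussian(t, np.sqrt(5.0)))))   # tiny: they match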


re: Wave-Particle duality : Imagine that the "particle" in its natural state is a wave of energy that pervades spacetime. Assume we are interested in forcing it to reveal itself as a localized "particle": the amplitude of its waveform as a function of space and time will tell you the probability of it revealing itself as such in that location at that time. Note the waveform is complex, so in order to get the actual probability you have to calculate the squared magnitude.

The idea is that it is in this waveform until you do something to observe it. Observation requires an exchange of energy, i.e., an interaction. This is why there is always uncertainty, because in order to observe something, that which observes has now intimately interacted with the waveform in order to cause its "collapse" into what we consider to be a particle. A particle very well may just be a highly localized energy that we perceive to be "solid"

You can't, for example, try to get a measurement of the location of a photon without putting a measuring device which absorbs the energy of the photon, modifying its wave function and thus the probability of where it will decide to reveal itself at that point in spacetime.

note: I consider energy, fundamentally, to simply be the consequence of fluctuating. The fluctuation of one thing can interact with the fluctuation of another and, minding conservation, transfer "fluctuation". The direction of that transfer of energy may be due to the fact that entropy always increases along the arrow of time, i.e., energy likes to spread itself out just as heat goes from high concentration to low concentration.


re: Qualia

I can't say this will necessarily assuage your curiosity about consciousness, but I mostly stopped being overly curious about this once I realized that it's likely only a manifestation of the aggregation of all of the individual sensory experiences our bodies have.

In other words, think about planet-scale phenomena such as how humans more or less all feel "connected" and non-hostile because civilization (in the most advanced countries) has reached a point where hostility is no longer essential for survival. That "experience", for each of us, is ours alone, but it seems to be so ubiquitous that we can't take credit for that experience or insight as individuals. It leads me to believe a large part of our conscious experience of the world is shared and independent of the brain's capacity. More precisely, humans are (universally) experiencing phenomena that are independent of our brain's capacity to process and understand them.


The issue with wave-particle duality is that most of us think about it backwards.

The universe is actually made of quantized fields. Both particles and waves are imprecise models/approximations. There's no such thing as a particle, instead there are just excitations of this field which we cannot measure with complete accuracy.


> Which we cannot measure with complete accuracy

I very much dislike this phrasing, because it suggests that it is just us that are not capable of building an apparatus to enable us to do so.

Imagine a gear transmission or a lever: You can transform distance into force and vice versa. It is up to you whether you want more speed or more force, by changing the point along the lever where your transmission happens. It is not possible to build a transmission which gives you the most distance and the most force simultaneously. In this system of transmission, one is the other, just seen from a different perspective.

And it is the same with the position and momentum of a quantum. You can choose to have more information in the shape of position or more in the shape of momentum by changing your measurement (like the point along the lever). But you can't have both, because there is only a constant amount of information, which is represented in a combination of position and momentum.

Actually, the uncertainty part of Heisenberg's uncertainty principle is a purely mathematical limitation (called the Gabor limit) and only the Planck constant makes it physical.

Gabor limit (time spread • frequency spread): Δt • Δf >= 1/(4•PI)

Heisenberg's uncertainty principle (position spread • momentum spread): Δx • Δp >= h/(4•PI)

So the Planck constant is kind of the maximal sampling resolution of the fields / signals in our universe.
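
If anyone wants to poke at the Gabor limit numerically, here is a rough sketch (Python/numpy; the discretization is coarse, so treat the numbers as approximate): measure the RMS time spread and frequency spread of a pulse and compare the product to 1/(4•PI). A Gaussian pulse sits right at the limit; a sharp-edged box comes out well above it.

    import numpy as np

    dt = 0.001
    t = np.arange(-50.0, 50.0, dt)

    def spread(weights, axis_values):
        # RMS width of the distribution given by |signal|^2 along some axis.
        p = weights / weights.sum()
        mean = (p * axis_values).sum()
        return np.sqrt((p * (axis_values - mean) ** 2).sum())

    def time_bandwidth_product(x):
        f = np.fft.fftfreq(len(x), d=dt)        # ordinary (cyclic) frequency axis
        X = np.fft.fft(x)
        return spread(np.abs(x) ** 2, t) * spread(np.abs(X) ** 2, f)

    gaussian = np.exp(-t**2 / 2)                # the equality case
    box = (np.abs(t) < 1.0).astype(float)       # a sharp-edged pulse does worse

    print(1 / (4 * np.pi))                      # the Gabor limit, ~0.0796
    print(time_bandwidth_product(gaussian))     # ~0.0796: sits right at the limit
    print(time_bandwidth_product(box))          # well above the limit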


Good point


Lots of quantum-related phenomena here, but what keeps bothering me is that while I get that light is both a wave and a particle, I have no clue what that means. I mean, a wave of sound is made from air particles; a wave of ripples in a pond is made of the movement of water molecules. In the double slit experiment, it's explained that the single photon has to be "interfering with itself", so I don't get whether being a "wave" means that the single photon is basically a bunch of "magic" photon ghosts that behave like a wave, but once it is measured (or "collapses" for any other reason) these ghosts "disappear".

I just don't get what the "wave" of light/radio is. Is it just an abstract concept of something that behaves like a wave but not the same as sound waves / ripples, since we simply don't know? Or is it just a wave of these "not yet collapsed" probabilities of the photon's locations that interfere with each other right up until we ask it to choose a location, at which point they collapse magically into a single "real" photon?

Another thing I don't get is that in the double slit experiment, a LOT of the measure-before, measure-after variations are described as thought experiments, but it's also claimed that someone managed to actually replicate them. Why isn't there a video showing it? I obviously believe they happened, and understand why more or less (e.g. in the one-photon-at-a-time experiment, it's spooky that over time you get the same pattern that indicates interference as if you shoot many). But the spookier result is the thought experiment that if you measure which slit the photon actually traveled through, you'll see 2 slits on the screen vs the famous pattern, i.e. you'll cause the wave to collapse back to particles. So any video reproducing that thought experiment, or an explanation of why it's so hard to reproduce, would be super helpful.


> while I get it that light is both a wave and a particle, but I have no clue what that means.

Because it's wrong. It's a quantum of the electromagnetic field. It's neither a wave nor a particle. It just happens to have some properties of both.


I've done plenty of layman reading into QFT, and I'm truly good with this. All particles are really waves. There are no particles. There's self-interaction, and all other kinds of weird stuff. (it's important to qualify that stable orbits and molecules and stuff exist)

But for the duality, there's something bigger that the responses always seem to blow past. Is wave-like nature for explaining behavior (wavy double-slit intensity pattern), or is it something to have a mathematical mapping to measured probabilities?

Quantum stories always seem so backwards. The root phenomenon is some sort of irreducible probability. But then the mechanical part (interference in the double slit) goes a totally different direction. Instead of just turning the situation into a probability of one-or-the-other slit, it STAYS as a wave.

Okay, now you have a new hole in the story. If the photon refuses to choose just 1 slit to go through, why does it choose 1 spot on the photo paper to land on?

Why do we not still have to consider interference in outcomes after the photon makes its mark on the paper? Why does there appear to be like a limit on entanglement, such that it goes away beyond a certain scale? Why are quantum computers hard?


> If the photon refuses to choose just 1 slit to go through, why does it choose 1 spot on the photo paper to land on?

The photon (as a field excitation) goes through both slits, but is quantized so only has enough energy to trigger a mark at 1 spot on the photo paper.

> Why do we not still have to consider interference in outcomes after the photon makes its mark on the paper?

If we want to be completely accurate, we should. However so many interactions happen so quickly that the law of large numbers quickly takes over and obfuscates the quantum reality. Technical term for this is decoherence.

> Why are quantum computers hard?

Exactly because of this decoherence. It is very difficult to keep the qubit state isolated from the environment throughout the computation.


Sorry, don't have time to unpack all this, as some are things that scientists understand, but some are so deep that ... not.

When talking about light or radio waves, you're probably talking about Classical Electrodynamics, which does not include the notion of a photon. In Classical EM, light is an EM wave in the sense that if you look at the electric field E or magnetic field B (of a plane wave) at a fixed point in space it is varying sinusoidally. So the waviness is in the amplitude of the E and B fields.

Once you talk about photons, you're in the realm of Quantum Mechanics (QM), and yes things are harder to understand.

It's actually all just fields according to the Standard Model (particle physics), a quantum field theory (QFT).

In QFT there's a field for each fundamental particle that permeates the whole universe. E.g. an electron field, a photon field, etc. Disturbances in these fields are what one would call particles in non-relativistic QM.

So Classical -> QM (quantum system, classical observer/apparatus) -> QFT (quantum everything)

In classical EM, light is a wave. In QM, light is particles. In QFT particles are just disturbances in the all-pervading fields.

Binney has said that QM is just measurement for grownups, or some such. What is a measurement? It's when the system you're observing becomes entangled with the measuring device. We don't know the exact state of every atom in our measuring device, but these could all perturb the system we're measuring. So QM is a hack where you treat the system as quantum but the observer/measuring device as classical which is why you need this confusing wave-function collapse. It was a conscious choice in the development of the theory. This last bit might give some insight into why trying to sense the photon at one of the double-slits ruins the interference pattern.


Wow, any chance that with a few MOOCs in physics I'll be able to understand this better? Or does it need years of study? I'm worried about the math mostly (MS in CS, but I got Bs in all the math classes :))


The math should be no problem if you go to the right sources. Some people like to play up the math, but it sounds like you're well prepared.


I found these two videos very helpful in understanding the quantum nature of light after being stuck in the same spot: https://www.youtube.com/watch?v=zcqZHYo7ONs https://www.youtube.com/watch?v=MzRCDLre1b4 (Watch those in order, because they're a collaboration between two YouTubers)


I think this is more about the terminology than the actual concepts. This is like in biology, where you classify things as living and non-living, but then you encounter things like viruses, which would depend on the criteria for a "living thing." Or, is a gel a solid or a liquid? Again, it has properties of both states of matter. Similarly, in order to classify phenomena as waves or particles, you look for common attributes between the things you observe. But light exhibits properties of both.


A sibling comment has mentioned this but the problem is with the terminology and how it manifests itself in your mind. You’ve mentioned it in your comment: trying to view light as a ripple or as air particles... light is neither of them, it’s a different thing entirely. Light exhibits certain properties that are wavish and particle-ish which is why it’s said to be both, technically. But trying to visualize light through analogies with phenomena in the macro world that you have intuition for is wrong. Mathematics is the only “eye” with which you can really understand light for what it is.


Flight. Apparently "air flows faster on the top side of the wing, lowering the pressure" is an incomplete explanation; I even heard we don't completely understand why it works (?!?).


The now classic Bernoulli vs Newton debate.

"Air flows faster on top" is the Bernoulli explanation. The Bernoulli principle tells us that fast air means low pressure, and low pressure sucks the plane up.

Newton explanation is the idea that the wing pushes the air down, and by reaction, pushes the plane up. Based on Newton's third law.

In reality, both are correct. The Bernoulli explanation is more specific and the Newton one is more generic. But if you want the whole picture, you need the Navier Stokes equations. Unfortunately, these are very hard to solve, so even engineers have to use simplified models.

I personally prefer the Newton explanation. It explains less, but the Bernoulli one is confusing and results in many misunderstandings. For example, that air takes the same time to follow the top side and bottom side of the wing, which is completely wrong.

The common depiction also tends to hide the fact that the trailing edge of the wing is at a downwards angle, even though it is the most important part. Nice profiles make wings more efficient, but the real thing that makes planes fly is that angle, called the angle of attack.

Focusing on the profile rather than on the angle of attack leads to questions like "How can planes fly upside down?" (the answer is "by pointing the nose up", and that should be obvious). If you are just trying to understand how planes fly, forget about wing profile, it is just optimization.


What is it about a wing that can take 100 pounds of "thrust" (and I may not know exactly what "thrust" is), and use it to keep a 1,000 pound aircraft in the air?

I want to go up. I want to use the thrust I have available to achieve that. Would not the most efficient use of the thrust available be the direct and naive approach, of pointing the engine straight up/down? Nope.

Instead, we point the engine horizontally; literally orthogonal to our desired goal. Then we use these "wing" things - they're not complicated, they're just rigid bodies with a shape, which honestly isn't even that unusual of a shape. Now we're not only able to go up (we finally achieve our goal), but we get to go fast in some horizontal direction as well.

I haven't found an explanation for this that feels satisfying to me.


First thing, propellers are wings; they work using exactly the same aerodynamic principles, except the "lift" goes sideways. And airliners use propellers (called fans here); the "jet" part of their engines only provides a fraction of the total thrust.

Now why not use a propeller pointing directly straight down? Well, you just made a helicopter. Helicopters are great, but they are not as fast as airplanes; the main reason is that as it goes forward, one part of the rotor is advancing and the other is retreating, which causes a whole lot of difficulties that don't appear when the propeller is mounted sideways.

Now propellers aren't the only way of producing thrust. There are jet engines, but these require significant airspeed in order to be efficient, and you usually have much more airspeed horizontally than vertically.

You can have rocket engines, which are great if you want to get really high, really fast, but they have to carry their own reaction mass, which is impractical in most situations.

Also you can use buoyancy as a form of "thrust", you now have an airship. Efficiency-wise, it is unbeatable. Unfortunately airships are big and slow and not very suited to modern requirements.

As you can see, there is absolutely nothing preventing us from thrusting downwards, it is just that airfoils are very efficient.

Back to your first question: how can 100 pounds of thrust keep a 1000 pound aircraft in the air. Without going into details, it is the same idea as a lever or gearbox (mechanical advantage). We rarely think of it this way for the wings of an airplane, but for propellers, it is a more apt comparison. A variable pitch on a propeller is like a gearbox for your car, and as seen earlier, propellers work exactly like wings.

As for what "thrust" is, it is really just a force, often shown together with with drag, lift and weight, it is provided by the engine. But in the end, there is nothing special about thrust, you can reorganize your forces anyway you want using simple vector math. For example gliders don't have thrust, and they still fly, taking advantage of updrafts.



this is a pretty good detailed explanation: https://fermatslibrary.com/s/how-airplanes-fly-a-physical-de...

the summary being:

- The vertical velocity of the diverted air is proportional to the speed of the wing and the angle of attack.

- The lift is proportional to the amount of air diverted times the vertical velocity of the air

it also debunks the myth of "air flows faster on the top side of the wing, causing lift"
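
Put together, those two bullet points say lift is roughly (mass of air diverted per second) x (downward velocity given to it). A toy calculation with completely made-up small-plane numbers (not taken from the linked paper) shows the orders of magnitude work out:

    # Toy numbers, invented purely for illustration (not from the linked article):
    # a small plane whose wings give a large mass of air a modest downward velocity.
    rho = 1.2                # air density, kg/m^3
    speed = 50.0             # airspeed, m/s
    affected_area = 20.0     # rough cross-section of air the wings act on, m^2 (made up)
    downwash = 8.0           # downward velocity given to that air, m/s (made up)

    mass_flow = rho * speed * affected_area   # kg of air handled per second
    lift = mass_flow * downwash               # rate of downward momentum given to the air = upward force, N

    print(mass_flow)         # 1200 kg of air per second
    print(lift)              # 9600 N
    print(lift / 9.81)       # ~980 kg supported, i.e. roughly a 1 tonne aircraft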


Aerospace engineer here:

What a lot of people don't know is that the wings are actually installed on a small upward incline, relative to the longitudinal axis of the body. Think of holding your hand out the window of a moving car, and then tilting your hand to catch air under your palm. In aerospace we call this the Angle of Incidence, and most aircraft have a small amount, usually in the 1-5 degree range. So while you might be walking on a perfectly horizontal path as you go to the bathroom over the Atlantic on your way to Paris, the wings keeping you aloft are actually angled such that the leading edge is higher than the trailing edge by a small amount.

Now google any picture of an airfoil and notice that many of them are slightly concave on the underside. This is called Camber, and in a nutshell it creates a "cupping" effect under the wing that intensifies the high-pressure area under the wing and correspondingly increases the amount of air deflected downward. Additionally, the teardrop shape reduces the tendency of air to billow off the trailing edge of the wing in favour of kinda sticking to the wing's surface and following its curvature. This also causes downwash off the trailing edge (i.e. more air going downward, which is a good thing).

That's really all there is to it, from a high level. The wings deflect air downward such that the total momentum change causes an upward force that is exactly equal to the aircraft's weight, and that equilibrium of forces keeps the aircraft aloft.

Obviously it gets more complex than that, because guys spend entire PhD careers researching edge cases, but there's no magic involved.

Note that wings don't have to be of the classic teardrop shape. There are plenty of research papers about lift forces on flat plates. In fact that's classic fodder for an undergraduate assignment. The airfoil shape is beneficial in several ways, some of them quite subtle, but you can think of the airfoil as being the most efficient cross-section for a wing known to science, whereas a flat plate is much less efficient (though it still works).

>I even heard we don't completely understand why it works (?!?).

I don't think that's true. For a while there was the meme about "science says bumblebees shouldn't be able to fly" but that was a clickbait headline because we didn't know enough about the structure and motion of bumblebee wings. That's about all I can think of.

There are certainly areas of ongoing research and exploration (I'm thinking hypersonic flight, novel means of propulsion, aeroelastic structures, etc.) but in general, the physics behind conventional aircraft are quite well-understood.


It directs air moving horizontally downwards; by conservation of momentum the wing must receive an upward push, called lift.


That can't be the explanation, otherwise wings wouldn't need to be curved - flat wings could fly, as long as they're tilted to redirect the air.


Flat wings can and do fly just fine, they are just a bit less efficient. The teardrop shape and camber/cupping underneath just make the wings more efficient at slicing through the air without creating as much turbulence and drag.


My understanding is that "less efficient" here means that flat wings have, specifically, less desirable stall behavior. Flat wings will stall more easily than an appropriately shaped "teardrop" wing.

A "stall" happens when the wing is no longer directing air downwards (and thus not providing lift), and is instead just chopping up in the air into turbulent chaos without any consistent direction.


I think this is actually a good explanation of the lift force, but lots of other factors come into play for wing shape. Two other big factors are drag forces, which depend on the surface area, air density and the velocity of the craft, and so there's a complicated optimisation problem there, and turbulence, which depends a lot on the wing tilt and the shape of the wing.


That is, in fact, true. Planes can fly with symmetrical wings, including flat wings, and even with upside-down wings, as is easily demonstrated by observing stunt planes actually flying upside-down--in all cases, as long as they are appropriately tilted to redirect the air. Most purpose-built stunt planes even actually have symmetrically-curved wings, because the reduction in right-way-up aerodynamic performance is made up for by the increase in inverted performance.


Paper airplanes fly, and they have flat wings!

It turns out that flat wings work just fine, but the airfoil shape we see on airplanes is more efficient:

http://warp.povusers.org/grrr/airfoilmyth.html


I'm not too sure it won't work, but I'm pretty sure it won't be efficient. Or maybe your plane will just rotate until the wings are horizontal again.


Flat wings can fly. E.g. kites.


I've been informed that I'm part of the problem. Comment removed, sorry for the trouble, folks!


>The top of a wing is curved, making it longer than the bottom of the wing. This means that air takes longer to go over it, meaning it has to spread out further to go the same distance as the air under the wing. As a result, the air going over the top of the wing is less dense, (aka lower pressure). The wing tries to equalize the pressure by moving in the direction of the low pressure, which is Up. We call this Lift.

100% completely false.

Imagine you have two particles of air, and they are immediately adjacent to each other. Suppose now that one goes above the wing, and one goes underneath. In your example, the particle going upward goes further in the same amount of time.

But ask yourself this: Why do the particles of air have to arrive at the same time? What mechanism from physics requires that they meet up again at the far end of the wing?

Then ask yourself this: If what you described is true, then how do aircraft fly upside down?


For years I thought I was crazy or stupid, whenever I heard this story of "air has to go faster" I was like "but how does air know?? It's not like it has a Google maps plan telling it it needs to reach the other end of the wing at a precise time!"

By chance, in the last few years I've started reading more and more comments debunking this absurd explanation. Not that I understand perfectly now, but at least I know I'm not crazy.


In fact that "air has to go faster" silliness only works if you completely neglect air friction entirely, because then the air can be said to part around the wing like butter around a hot knife.

But of course anyone who's seen snow billowing off the back of a car knows that air doesn't just close up behind the object like a ziplock bag: it's messy and turbulent and gets all over your windows while you're tailgating.


That's not how lift works, don't be part of the problem! http://cospilot.com/documents/Lift.pdf


It's a couple of different phenomena, all at the same time. Let's start with the most accessible. Your hand out the car window.

Flat hand, you feel a pressure at the front of your hand. At the back you should notice it is a bit dry.

The pressure at the front is dynamic pressure. The gas piles up as your hand plows into it at speed. The pressure you feel is the mass of air you're picking up and carrying with you. The dryness at the back (you don't feel it per se, but you can notice it) is the resulting area of low pressure created by you plowing through the air. This is drag.

Now. Tilt your hand in the stream, and up your hand will go! The ways you can break this down/visualize it are varied, but in reality are all manifestations of the same phenomena.

Newtonian/Conservation of Energy: Each particle of air impacting the bottom of your hand is +1, each impacting the top is -1. +1 & -1 don't neutralize, so up you go.

The vacuum visualization: imagine a density visualization overlaid on the situation. There's a vacuum bubble over the top of your hand. Nature hates a vacuum, so everything tries to fill it. The end result of that filling is that air particles that would otherwise be slamming into the top of your hand get "sucked" into the bubble instead. This is important, because without this understanding, you can't account for things like dumping energy into the flow stream via a spinning shaft, or the infamous UFO X-plane, where all the engine power was devoted to keeping air flowing faster over the top surface, allowing the darn thing to get lifted by the relatively unaccelerated air beneath even at 0 velocity of the machine relative to the environment. The key to all lift is making that asymmetry in airflow.

Symmetric airfoils can create lift at Angle of Attack, because while they are symmetric at 0 degrees, they aren't at angles offset from dead on.

There are also some weird degeneracies that you can take advantage of, like using a spinning cylinder and flat strip of material just barely offset from it to create lift with near zero relative forward speed of the apparatus to the surrounding space. (This is a function of viscosity, and the energy of the spinning rod is basically picking up the fluid and accelerating it, it separates from the cylinder and follows the strip of material creating a pressure differential, ergo lift).

Then there is the whole bit about vortex circulation etc; the main thing to remember though is that air that is trying to fill a void created by an object moving through the air is too busy doing that to neutralize the energy gained by air transferring energy to the bottom of the lifting surface. Ergo, lift. Further, the more useful "lift" you make, the more "drag" you'll create as well, because in order to maintain that vacuum you're coaxing all that air on the topside of the lifting surface to head into instead, you have to account for the energy expended in 'picking up and carrying' that air/fluid with you.

Fluid dynamics is weird, complicated, and seems like black magic, but at the end of the day it's all about what you convince the fluid to do instead of smacking into you.

There are gobs of seriously bloody weird equations all around it, but they are mostly useless in terms of being able to visualize what is going on.

Imagining a bubble sucking the lifting surface upward, and the airflow on the bottom pushing the lifting surface upward like a stone skipping on water on the other hand? Gets you good mileage on being able to imagine things.

The vacuum visualization is even more relevant at supersonic speeds, as at that point, your "flight regime" becomes "exotic chemistry occurring is a compressed flow" and an ever increasing column of air getting picked up and carried along with you as you rip a gigantic hole in the atmosphere and carry it along with you; turning aircraft operation into a balancing act between skipping off the atmosphere correctly, and not becoming part of the exotic chemistry you're causing.

Once you get the rudiments down though, everything becomes averageable vectors, which makes stuff like KSP with FAR a fun thing to mess with.


Flight is EXACTLY like swimming except in air and without any component of floating. You're welcome.


Why does time slow down/go faster with movement compared to another object.

The well-known example is that if you travel into space you'd age, let's say, 5 years while people on Earth age 25 in the same time, or so.

I just don't get it and I can't find any logical explanation.

For instance: Two twins who are born at exactly the same moment in the year 2000 and both die on their 75th birthday (by their own clocks) at the same time. One travels into space, the other stays on earth. Earth-brother dies in earth-year 2075, space-brother dies in earth-year 3050 or so...

I know it's Einstein's point but that just doesn't instantly make it correct to me.


All things in the universe have four measures of velocity. Three of these are easily observable to us in terms of x, y, and z movement. To understand time dilation we need to realize that we are also physically traveling through time.

The total velocity of our <x, y, z, t> vector will always be equal to the speed of light constant, c. You can think of something that has no physical movements as moving forward in time at the speed of light. As x, y, or z increases the magnitude of t will decrease so that the speed of light constant is always achieved.

Why this link has to hold is more complex and I cannot explain it well, but hopefully this gives some insight into time slowing as velocity increases.
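
Taking the picture above at face value, the bookkeeping is just Pythagoras: whatever speed you spend on space comes out of the time component. A small sketch (plain Python; the speeds are arbitrary examples):

    import math

    c = 299_792_458.0        # speed of light, m/s

    def clock_rate(v):
        # Split a fixed total "speed" c between space and time:
        # (speed through space)^2 + (speed through time)^2 = c^2
        speed_through_time = math.sqrt(c**2 - v**2)
        return speed_through_time / c      # your seconds per stay-at-home second

    for fraction in (0.0, 0.1, 0.5, 0.9, 0.99, 0.999):
        print(fraction, clock_rate(fraction * c))

On these numbers, the usual "5 years for the traveller, 25 on Earth" kind of example corresponds to cruising at roughly 0.98c.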


> The total velocity of our <x, y, z, t> vector will always be equal to the speed of light constant, c. You can think of something that has no physical movements as moving forward in time at the speed of light. As x, y, or z increases the magnitude of t will decrease so that the speed of light constant is always achieved.

I've gone decades without hearing it explained that clearly and simply. Thank you (sincerely).


Just to nitpick for a second, you shouldn't feel compelled to believe something because "it's Einstein's point." Francis Bacon established the foundation of the scientific method when he argued that knowledge means the ability to predict or create some effect reliably. Now you probably do not have the ability to test special relativity experimentally, but you can read about the many successful predictions and applications derived from it. You shouldn't feel like you just "have to believe it," though.


Because space and time are just two aspects of the same thing. Your velocity isn't a 3-dimensional vector--it's actually a 4-dimensional unit vector, and the direction in which it points is what you consider to be the future. The non-constant nature of 3-velocities that we actually measure is just a result of the non-constant nature of projections of that 4-dimensional velocity vector onto different 3D spaces--i.e., different inertial frames of reference.

To get an intuitive idea of why this necessarily results in symmetrical time dilation, imagine two people walking along non-parallel paths at a constant rate on a 2D surface. From either person's point of view, the other person has a one-dimensional relative velocity, either towards or away from the observer, and that relative velocity depends on the angle between their paths. One-dimensional acceleration is just rotation in the 2D space. Now, what happens if you project one person's path onto the other person's 2-velocity? The projection will be shorter! And remember, the direction of your velocity is the direction of forward time from your perspective. So, from your perspective, the other person has traveled less distance along the time direction than you have, because some of their constant-velocity path was used up traveling in space instead. I.e., from your perspective, time has slowed down for them. But, projecting your path onto their velocity vector also results in a shorter path--so the effect is 100% symmetrical!

Now, this analogy fails in two ways because the real universe doesn't have any meta-time that you can use to observe where the other guy is "right now", and because spacetime rotations are hyperbolic rather than Euclidean, but those two sources of error happen to cancel out nicely and you get the correct result that moving objects appear to move through time slower.


Imagine the hands on an analog clock. As the hands rotate, there is an invariant -- the length of the hand. We are completely unsurprised that as a hand on the clock rotates and becomes less horizontal, it must become more vertical. It can't both increase its "verticality" and its "horizontality".

Now imagine horizontal as space, and vertical as time. In this case a 2D spacetime, but we can't really visualize 4D.

The reason they always talk about space-time in relativity is that you can't separate the two. If you want to travel faster through time, you have to travel slower through space. If you want to travel faster through space, you have to travel slower through time. There's an invariant like the length of that rotating clock arm called the "spacetime interval" that remains constant under the transformations that you have to do to go from one observer's perspective to another observer's perspective.

Problem is it's in 4D so it's hard to visualize. There is a mathematical framework that can explain all of the transformations leading to length contraction and time dilation as simple rotations in a 4D spacetime (3 space + 1 time). It requires a bit more math, but then unifies things in a conceptually simple way.

But maybe just remember: "If you go faster through space, you go slower through time" "If you go faster through time, you go slower through space"

Your maximum speed in space is the speed of light, at which others will observe you as having no time passing.

Your maximum speed through time is one second per second, at which others will observe you as being stationary relative to them. Look up Alex Fluornoy's youtube video lectures. I'll edit this and link the specific one here later, if I can find it.


There are two effects here. One is special relativistic time dilation, and you can derive these effects with very little mathematics (high school is enough). The Wikipedia page has a simple proof, but it's important to realise that this is a result of the postulate that the speed of light is constant. If you travel at 0.99c and emit a photon, it still travels at c, not 1.99c. It's absolutely not intuitive why this should be the case.

The other effect is that time in a strong gravitational field runs slowly.


I am sorry but I don't get it. I read the formula and get the formula but that doesn't make it logical.

If you move away from a clock, time seems to slow down, because as your distance to the clock gets larger, each change on the clock takes longer to reach you. But if you carry a clock in your rocket it will just tick at the same pace as on earth (minus the gravitational impact, which is measured, but why does gravity have an impact on time...?)


A core point of relativity is reference frames. From your perspective light in a vacuum always travels at c. The problem is that by special relativity, an observer moving at some speed relative to you will also see light traveling at c. This is a simple idea, but it causes a lot of very unintuitive effects. If we're being pedantic, relativity is very logical mathematically, but it's conceptually difficult because it flies in the face of how you think the world works.

Have a look at the simple inference example here: https://en.m.wikipedia.org/wiki/Time_dilation

Time doesn't necessarily slow down the further away you get from a clock. If you and a clock are both stationary (ie you're in the same inertial frame), you will observe it ticking in "normal" time, albeit delayed due to the distance. If the clock is moving relative to you however, you will measure its ticks to be slightly slower.

You may be confusing general relativistic effects which are distance dependent (as gravity weakens the further away you get).

If you carry a clock in your rocket, you will (in the rocket) measure it to tick once a second. When you get back to Earth, you'll find that it's lagged behind a clock that was started at the same time but was left on Earth.

Maybe have a look at simple wiki too https://simple.m.wikipedia.org/wiki/Special_relativity though it doesn't actually derive the Lorentz transforms unfortunately.

Ignore the gravity bit for now, that's general relativity and it's more complicated to explain.


try reading Brian Cox's "Why Does E=mc2? (And Why Should We Care?)". To understand this you will need at least 10 or 20 pages to set the stage before you can start to grasp special relativity.


What happens when you actually fall inside a black hole and what is the singularity.

I never really understood what really happened when the guy fell inside it in Interstellar and how come he started seeing all those photos. I just accepted it as Hollywood bs.

I know my question is based on a movie but would still like to know what will someone witness (assuming of course they somehow live)


I have not read it myself but the answer can probably be found in the book "The Science of Interstellar" [1]

Kip Thorne, a Nobel prize-winning physicist, worked as the science advisor for Interstellar so the hollywood bs is pretty good!

[1]: https://www.amazon.com/Science-Interstellar-Kip-Thorne/dp/03...


>What happens when you actually fall inside a black hole and what is the singularity.

Black holes were sometimes called "frozen stars", because time slows down at the event horizon to a stop. If light could escape from black holes and somehow the material continued to emit/reflect light despite time being stopped for it, then as outside observers, what we would see is the star at the moment it collapsed to the size of its event horizon. Though if you entered the event horizon, from your point of view you would see time progressing again inside.

An object falling into a black hole would often go through the process of "spaghettification", being pulled apart because the gravity from the black hole on the close side of the object would be immensely stronger than the gravity on the far side of the object. Though it's possible for a black hole to have an event horizon large enough that the point where spaghettification happens is inside it; I think Interstellar had some line to imply that about their black hole.

>I never really understood what happened really when the guy fell inside it in Interstellar and how come he started seeing all those photos. I just accepted it as Hollywood bs.

In the movie, it was supposed to be that the same aliens (well, future humans) that had constructed the original wormhole had also connected another wormhole inside the blackhole. On the inside of the second wormhole, they brought him into a constructed environment to use him as a tool to encode a message to the past to himself that would be able to get him to arrive here in the first place. I think the vague idea of the future-humans was that they used the spacetime-warping nature of the black hole to somehow transcend space and time in our future, but as a nearly one-way trip: they could only interact with our universe / the past through warping gravity (vibrations or making wormholes), and inside the black hole, they had more freedom to do this, and they used that to make the interface for the main character. (Story-wise, I think it seems like good contrivance to allow the mysterious benefactors that ability to give some help, but without being able to do everything and still allow the relatable modern human characters to do all the interesting detail work.)


In another comment on this thread they explained that space and time are parts of the same vector, <x, y, z, t>, with velocity being the speed of light. Here you mentioned time is frozen at the black hole event horizon - does this have to do with the theory that we can use black holes as "portals" to distant parts of the galaxy? So if time is 0 and we have <x, y, z, 0>, does this mean we travel at the speed of light?


That was just Hollywood BS. But it made for a good story.

The singularity is a sign that General Relativity breaks down -- an artifact of the theory, not necessarily a physical thing.


> I never really understood what happened really when the guy fell inside it in Interstellar and how come he started seeing all those photos. I just accepted it as Hollywood bs.

Coupled with people saying "but they had scientists on staff! they talked with scientists that makes it so cool and accurate, lets ignore that other part"


Well I think there are two things here. Yes, they had black hole experts work on the film and even generate the film's blackhole images (iirc) from Kip Thorne plugging in relativistic formulas into Mathematica. That is probably what your friends are referring to. Of course, any Hollywood science fiction movie also has some make believe in it. In this case the inside of the blackhole allowing him to communicate with his daughter and therefore save himself didn't make a ton of sense (I would assume paradox, but I think the laws of physics break down in the singularity so they used some extremely heavy hand waving). That part isn't what your friends are likely referring to.

I thought the music, acting, and triumph of humanity were pretty inspiring much like Star Trek can be despite the fact that most of the technology violates the laws of physics. You may have thought it was a terrible movie which is fine. I thought Star Wars Rogue One was one of the most boring films I've seen in the last decade, but a LOT of people loved that film.


The movie's hype around "scientifically accurate" made the nonsensical low effort copout about offscreen wizards casting 6th dimensional time travel spells even more infuriating.


You should check out PBS Space Time (on youtube), there are several episodes explaining this in different ways.


Problem is, we have no physics yet that explains what happens "inside" the event horizon.


Singularity in AI?



No, they kept saying that the center of it is called the singularity, IIRC.


Avogadro constant - I can accept the number, if someone can show me how, over a hundred years ago, without having ever really seen a molecule, this number could be derived. Obviously no one ever sat there and counted them, but I find it hard to believe they could have determined this without being able to see it.


"The charge on a mole of electrons had been known for some time and is the constant called the Faraday. The best estimate of the value of a Faraday, according to the National Institute of Standards and Technology (NIST), is 96,485.3383 coulombs per mole of electrons. The best estimate of the charge on an electron based on modern experiments is 1.60217653 x 10-19 coulombs per electron. If you divide the charge on a mole of electrons by the charge on a single electron you obtain a value of Avogadro’s number of 6.02214154 x 1023 particles per mole."

https://www.scientificamerican.com/article/how-was-avogadros...
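That division is easy to sanity-check yourself; here's a quick sketch in Python using the values quoted above (not the latest CODATA values):

    # Avogadro's number = Faraday constant / elementary charge,
    # using the figures quoted in the Scientific American excerpt above.
    faraday = 96485.3383              # coulombs per mole of electrons
    electron_charge = 1.60217653e-19  # coulombs per electron

    avogadro = faraday / electron_charge
    print(f"{avogadro:.6e} particles per mole")   # ~6.022142e+23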


Here's how to estimate the size of an oil molecule just using water and an oil drop: https://spark.iop.org/estimating-size-molecule-using-oil-fil...

(They use some other stuff, but you get the idea)

You can back out the Avogadro constant starting with this experiment.
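A rough sketch of how that back-of-the-envelope goes, in Python. All the numbers here are made-up but plausible illustration values, not real measurements, and the cube-shaped-molecule assumption is crude, so the final figure only lands within an order of magnitude or so of 6.022x10^23:

    # Oil-film estimate: a drop of known volume spreads into a one-molecule-thick
    # film on water, so thickness ~= volume / area ~= the size of one molecule.
    drop_volume = 5e-10   # m^3 (a ~0.5 mm^3 drop of oleic acid) - illustrative
    film_area = 0.25      # m^2 (area the monolayer spreads over) - illustrative

    thickness = drop_volume / film_area          # ~2e-9 m, one molecule "tall"
    print(f"molecule size ~ {thickness:.1e} m")

    # Crude Avogadro estimate: molar volume of the oil divided by the volume of
    # one molecule, modelled (very roughly) as a cube of side `thickness`.
    molar_mass = 0.282    # kg/mol, oleic acid
    density = 895.0       # kg/m^3
    molar_volume = molar_mass / density
    print(f"Avogadro estimate ~ {molar_volume / thickness**3:.1e} per mole")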


Thank you. I do get the idea. And now I can go back and finish high school chemistry. I appreciate your help in understanding this.


Why tardigrades are so hardy, how their biology is so different?

How immune system and medications work.

Why some plastics are recyclable and others are not.


I've been mulling making a youtube channel with ten minute videos on immunology; what would be a good starter video that might interest someone like you? I thought I'd do something about antibodies as drugs!


Thank you and please please do it and post a link here or send me an email (in profile)! For me the most interesting is the recognition/pattern matching aspect: how antibodies find what to attack and what to leave alone.


Most definitely one of the hardest questions to answer :) I'll take it up as a challenge!


If you're taking requests... distinguishing Self vs nonself...what we know and what we don't.


All plastics are recyclable in the sense that they can be repurposed, but thermoset polymers cannot be remelted and used the same way as they were originally formed. Typically they are chopped up and used as filler material for asphalt or other composites. In some cases solvents can be used to break the chains down into smaller building blocks. However, colloquially thermoset polymers are considered non-recyclable, while thermoplastic polymers are considered recyclable.

The difference between thermoset and thermoplastic polymers has to do with irreversible chemical bonding during curing. With thermosets, you have chemical bonds between molecules preventing deformation, whereas with thermoplastics, you just have a viscous friction between molecules that varies with temperature. If you heat up a thermoplastic, that viscous friction goes away and the plastic can be remolded.


>Tardigrades: No comment.

>The immune system: The most awesome thing ever.

It's actually two systems: the innate immune system, which we have in common with most forms of complex life, and the adaptive immune system, which we've only seen manifested in jawed vertebrates.

The innate immune system is a set of cellular signals/behaviors that are triggered by cells being exposed to damage and or stress.

These responses are generalized. Just about anything odd can invoke them, so they are typically the first line of defense. These include things like alteration of the permeability of the local extracellular matrix (swelling), formation of impermeable tissue barriers to isolate damage (cysting, compartmentalization), setting up signaling-molecule gradients that attract phagocytic/cytotoxic cells to terminate anomalous cellular activity and clean up the place (macrophage attraction), and alteration of metabolic activity to generate thermal stress (fever).

The issue with the innate immune system, though, is that its non-specificity and versatility make it a bit like a sledgehammer in the context of a complex organism. It can do as much damage as good, or more, and it isn't very good at eradicating only the exact thing causing the issue without excessive collateral damage.

Enter the adaptive immune system. The adaptive immune system is composed of various cell lines, and organ systems all specialized into dealing with specific facets of an immune response, and mediated through a set of special cellular surface receptors.

The facets of the adaptive immune response are: antigen recognition, coordination, moderation, and memory.

The major cell lines are T and B cells. T cells are further broken down into cytotoxic T, and helper T cells.

The adaptive immune system starts with naive lymphocytes. These cells rapidly multiply, randomizing the ever-loving crap out of the region of the genome dedicated to the antigen receptor. By doing this, they cause the receptor to fold in ways that will allow it to bind with certain types of antigen (think of it as the antigen's key fitting the receptor's lock). This process of new receptor generation is mediated by the thymus. The thymus tests every new variant to see whether there is any sensitivity to proteins that may be expressed in other parts of the body. If it finds that to be the case, it induces that particular cell to suicide, to prevent the proliferation of immune cell lines with a high chance of being prone to autoimmunity. Those that survive are allowed to move out into the lymphatic and circulatory systems to patrol for their particular antigen.

Upon meeting it, a few things happen. First off, the immune cell can help kick off or amplify a general immune response. Secondly, signaling proteins are released to attract more leukocytes to the area. Third, an antigen-bearing cell will migrate toward lymphoid tissue to recruit more immune cells. Once an antigen-presenting helper T cell binds with a compatible B or cytotoxic T cell line, that cell line undergoes massive replication without further modifying its receptor, and the helper T cell does likewise.

B cells will create and secrete antibodies: small snippets of protein that will bind to and foul up the workings of the antigen to which they are sensitive.

Cytotoxic T cells will patrol for and engulf antigen they encounter, either breaking it down with a burst of oxidative substances, or, if the antigen is detected being presented on a cellular membrane protein and a helper T cell is nearby to enable the response, inducing cell death of the antigen-presenting cell, with much greater specificity and numbers than the mechanism used by macrophages. Once the cell death takes place, the cell will either clean up the remains, or attract macrophages to do so while it heads off for the next target.

The cytotoxic T cells are handicapped in their destructive potential by the need for a nearby helper T cell. B cells just shotgun antibodies into the bloodstream.

>Medications: oh dear God, ask a pharmacologist. The closest thing I have committed to memory is that most pharmaceuticals are very specifically formulated chemicals intended to be absorbed without difficulty, make their way to a target area in the body, and be modified to an active form by enzymes in the body to do their thing before eventually being degraded and excreted by more of the same.

Plastics: no comment.


Thanks for a detailed writeup, which seems for the most part in agreement with my understanding technically. But then you kinda tried to summarize Janeway's in a few paragraphs, and that means no one who doesn't already know immunology is going to find it helpful. In fact, Janeway's itself tries to summarize the book in the first chapter, and that chapter goes on for 50 pages IIRC.

Some fundamental concepts that might fly over someone without a biology degree include the absolutely fundamental requirement of protein binding for any biological process (akin to a transistor's function in a computing device).

I'm thinking that step one of communicating the entire immune system's complexity is probably omission of anything that's not absolutely required for comprehension of the basic concept. In that regard I'm not convinced there's any need for bringing up the innate system first (there's a reason we discovered it quite recently, perhaps?). Other details can also be similarly "omitted" for simplicity perhaps. What are your thoughts?


I haven't heard of Janeway's, and I apologize for the lack of proofreading. The immune system is one of the most fascinating things I've ever deep-dived into, so I can get a bit rambly when the chance to ramble occurs.

There are so many odd facets of it with interesting implications; like, did you know exposure to sex hormones actually contributes to the atrophy of the thymus over time? This is posited to have some relation to the increased likelihood of developing autoimmunity problems as we get older.

Also, not all T cell lines undergo negative selection in the thymus. There is a smaller population of more autoimmune-prone cells that develop and specialize in the extremities. It is theorized that this is evolutionarily selected for because there is a tradeoff between being able to respond to a wide variety of pathogens and being free of autoimmunity, so you keep a small group of possibly autoreactive immune cell lines just in case. This is theorized to explain why autoimmunity issues in the extremities are relatively common.


Non-interactive zero knowledge proofs.

ZK proofs have a number of good explainers, mostly using graph colorings. Non-interactive versions, however, require quite a bit more than that explanation allows - and despite asking experts, I still haven't found a good, basic explanation.


Maybe the blog of Prof. Matthew Green of JHU is of use. Specifically, the two-part series about zero-knowledge proofs. Part II discusses non-interactive ZK proofs. Part I is really required to grasp the extension to non-interactive ZKP's, so you may need to read that first. https://blog.cryptographyengineering.com/2017/01/21/zero-kno...


I liked this PDF, which starts with using modular arithmetic to prove knowledge of a polynomial, uses bilinear EC pairings to make it non-interactive, and then, finally, encodes computations as polynomials: https://arxiv.org/pdf/1906.07221.pdf


A non-handwavy explanation of how evolution results in such complicated and carefully orchestrated mechanisms, way beyond our engineering capability. The computer analogy of genetic algorithms certainly doesn't explain much; they are not very effective, and if they were, we should be able to generate marvels of engineering with GAs and current computational power.


> complicated and carefully orchestrated mechanisms

Yeah, I'd definitely second that. How could evolution result so quickly in something as "rudimentary" as Chlorella (i.e. the simplest plant). https://en.wikipedia.org/wiki/Chlorella


A whole LOT of time, a whole lot of space.

So estimate 1 minor good random mutation per 10,000 population. Assume 1 major mutation per 1000 years, per 10,000 population.


That's essentially the same kind of explanation as 'god did it'.


I don't know if you are being serious, but if you are, Dawkins's books are pretty good (The Blind Watchmaker, or The Ancestor's Tale).


I will have to check them out, but I remember being unimpressed with the table of contents. Looks like it will be more handwaving.


Ah OK, you are not being serious, fair enough. Magic god did it all.


I am being serious. I just purchased Dawkins's book "The Blind Watchmaker". My point is no explanation should amount to "magic X did it", whether X is god, evolution, the earth spirit, aliens, etc. All of the above are bad explanations.

It is not scientific to substitute one bad explanation for another. The scientific approach is to say we don't know, and then look for a good explanation.


I'll take a shot at trying to explain that to the best of my ability. I'm starting from the point where we have organisms and dna, since nobody knows how exactly abiogenesis happened yet.

First, two general points.

The most important thing to keep in mind is time. Life has had insane amounts of time. Billions of years. Beyond human comprehension amounts of time.

The second most important thing is that complicated != carefully orchestrated or optimal. Life is pretty cool, but it doesn't hold itself to a very high standard. It's the survival of the good enough, and is full of so many random hacks and poor design choices it's insane. Things get easier to accomplish when you lower the standard.

Now an attempt at an explanation.

Evolution by natural selection works on two principles. First, generation of diversity. Second, selective pressure.

DNA can and does mutate frequently. One important type of mutation is a duplication, since it lets you gain new functionality: you make two versions of the same gene, one keeps its original function, and the other takes on some new function. This theme of repurposing existing things comes up again and again. Take something you have, make another version of it, change it a bit. If you've worked out how to grow a vertebra as a lizard and want to become a snake, turn off legs and make more vertebrae. Use the same genes, and just modify how you control them. This video (https://youtu.be/ydqReeTV_vk) is actually pretty good at running through the science behind evolutionary development, and how evolution can quickly reuse and modify existing parts. Basically, once smaller features evolve, you start modifying them in a modular way and can start making really big changes really easily. Keep in mind again that you are helped by enormous, mind-boggling amounts of time randomly generating these features originally as well. Here's a summary of how our eyes evolved by repurposing neurons (https://youtu.be/ygdE93SdBCY). Our eye is a good example of how the standards are just "good enough": we have veins and nerves on top of our light-detecting cells instead of behind them, and just poke a hole to get them through to the other side. Doesn't that leave a blind spot? Yep, and we just hallucinate something to fill in the space.

There are a couple of other major ways we generate more diversity. You have things like viruses transferring DNA, but a really powerful one is sex. Sexual reproduction lets you combine and generate new combinations of genes to speed up how quickly diversification happens.

For selective pressure, think about it purely statistically. You have sets of arrangements of atoms, some of which are good at making new sets of arrangements of atoms that look like them, and others less so. Each tick of the clock, versions that are able to make more increase, and versions that aren't decrease. This basically provides a directionality for evolution - whatever is good at replicating is successful. This weeds out mutations that don't help replication over time while keeping mutations that do, which means the next round of mutations is building on ones that were good enough, and not on ones that weren't. This lets evolution be more cumulative than a random search.

Neither of those is a complete description by any stretch; I'm just trying to give you a taste of the mechanisms behind it, but it goes a lot deeper. The most important things do just boil back down to what I started with: survival of the good enough - lower your standards - and just so much time for these things to happen. An evolutionary step only has a 1-in-a-million chance of happening in a given year? Then it's happened about 65 times since the dinosaurs went extinct.
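That last bit of arithmetic is easy to check; a quick sketch in Python (the 1-in-a-million figure is just an illustrative number from above, not a measured rate):

    # Expected occurrences of a "1-in-a-million-per-year" event since the
    # dinosaurs went extinct (~65 million years ago).
    p_per_year = 1e-6
    years = 65_000_000

    expected = p_per_year * years                     # ~65 expected occurrences
    p_at_least_once = 1 - (1 - p_per_year) ** years   # essentially 1.0
    print(expected, p_at_least_once)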


The more I learn about the genetic code, how proteins get translated from DNA, the nature of DNA itself, bioinformatics, biochemistry, etc., the less like a random jumble it seems.

I also wonder why we cannot reproduce evolution's effectiveness computationally. Genetic algorithms and the like are not very good, nowhere near capable of matching biological systems, even though we can match evolutionary timescales and populations with our computing power today.

As far as I can tell, we have absolutely no idea why and how evolution works, let alone works so well.


Mach's principle. Why is there a "preferred" rotational frame of reference in the universe? Or as stated in this Wikipedia article,

"You are standing in a field looking at the stars. Your arms are resting freely at your side, and you see that the distant stars are not moving. Now start spinning. The stars are whirling around you and your arms are pulled away from your body. Why should your arms be pulled away when the stars are whirling? Why should they be dangling freely when the stars don't move?"

https://en.wikipedia.org/wiki/Mach%27s_principle


The two most obvious solutions to the thought experiment presented are either 1) space is absolute in some way (i.e. the classical Newtonian response) or 2) the behavior of space "here" is affected by the distribution of matter "over there". General relativity gives us a strong argument in favor of (2) by showing that a) many physical principles thought to be absolute are actually relative and b) mass "over there" affects the shape of space "here".

To say anything more concrete requires defining the question much more precisely. I believe there is still some disagreement on the interpretation of Mach's principle in light of general relativity. For example, see https://en.wikipedia.org/wiki/Mach's_principle#Variations_in... (and, a couple sections above, the 1993 poll of physicists asking: "Is general relativity with appropriate boundary conditions of closure of some kind very Machian?").

I hope that is helpful in some way.


The unsatisfying mathematical answer is that it is impossible to have a uniform distribution of rotational speeds, therefore there must be a preferred one.

It's the same reason the universe has an average speed (unlike what you might expect from special relativity), although it is unclear if this is true for the entire universe or just the portion we can see. We can measure how fast we're moving w.r.t the cosmic microwave background radiation though (it is red-/blue-shifted in a particular direction).


This is an interesting argument! Wouldn't it also work for positions, though? That is, either the universe is finite, or, since there can't be a uniform distribution over an infinite space of positions, there must be some preferred "center" of the universe?


You'd think, but of course we know that not to be the case. It's hard to pinpoint the exact reason, though. Sure, we know time and space are rather special, but it's hard to say exactly why.

In the end though I reckon the most obvious reason is that speed is a property that directly corresponds to energy, therefore for each region of space to have a well defined energy (which is required for e.g. general relativity) every region of space needs to have a well defined distribution of speeds.

I suppose this does leave open a small loophole, as you can easily correlate speed with position in order to get a distribution that is uniform in both (but correlated). But this goes against our assumption that the universe is uniform everywhere (which might turn out to be false, but so far it's holding up well).


Asynchronous programming

With the addition of async to Django core, I felt it's time to finally learn the concept. I first took interest in async early last year when I re-read a Medium post on Japronto, an async Python web framework that claims to be faster than Go and Node.

Since then, I've been on the lookout for introductory posts about async but all I see is snippets from the docs with little or no modifications and a lame (or maybe I'm too dumb) attempt at explaining it.

I picked up multi-threaded programming a few weeks ago and I understand (correct me if I'm wrong) it does have similarities with asynchronous programming, but I just don't see where async fits in the puzzle.


The first couple of paragraphs of the documentation for asyncore, the module in Python's standard library that implemented the machinery for async IO all the way back in 2000, has a great description of what async programming is all about. Here it is:

https://python.readthedocs.io/en/latest/library/asyncore.htm...

'There are only two ways to have a program on a single processor do “more than one thing at a time.” Multi-threaded programming is the simplest and most popular way to do it, but there is another very different technique, that lets you have nearly all the advantages of multi-threading, without actually using multiple threads. It’s really only practical if your program is largely I/O bound. If your program is processor bound, then pre-emptive scheduled threads are probably what you really need. Network servers are rarely processor bound, however.'

'If your operating system supports the select() system call in its I/O library (and nearly all do), then you can use it to juggle multiple communication channels at once; doing other work while your I/O is taking place in the “background.” ...'
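A minimal, runnable sketch of the select()-based style that excerpt describes (a toy echo server; no error handling, and the port number is arbitrary):

    import select
    import socket

    # One thread juggling many sockets: block only until *something* is ready,
    # then do a little work on whichever sockets have data or new connections.
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8000))
    server.listen()
    server.setblocking(False)

    sockets = [server]
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for sock in readable:
            if sock is server:
                conn, _ = server.accept()     # new client connected
                conn.setblocking(False)
                sockets.append(conn)
            else:
                data = sock.recv(4096)
                if data:
                    sock.sendall(data)        # echo it back
                else:
                    sockets.remove(sock)      # client hung up
                    sock.close()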


"async" is BS. You can't smear concurrency over your systems like Nutella.

Read "Communicating Sequential Processes" by Tony Hoare https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf

There's also a book: http://www.usingcsp.com/

See also https://en.wikipedia.org/wiki/Process_calculus


Someone probably much better than me can correct me where I go wrong here, but I'll have a stab at this.

The way I think about it is: asynchronous programming gives you the tools to write programs that don't stop doing useful work while they're waiting for something to happen. If parallelism gives you more effective use of your CPU, asynchronous programming gives you more effective use of your time. Let's presume you have a program that does some things, makes several requests to the network or requests several things from the file system, collects the results, and carries on.

In a synchronous program, you would make each request, wait for it to come back (the program would block at this point), then when it's complete, proceed with the next request. If each request takes ~2 seconds to complete, and you've got 15 to make, you've spent most of that 30 seconds just idling, not actually doing anything.

In an asynchronous program, you could submit those requests all at once, and then process them as they came back, which means you only spend about ~2 seconds waiting before you start doing useful work processing the results. Even if your program is single threaded and you can only actually process one item at a time, you've made more efficient use of your time.
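Here's a minimal sketch of that 15-requests example using Python's asyncio (asyncio.sleep stands in for the slow network call; the numbers are just the ones from the example above):

    import asyncio
    import time

    async def fake_request(i):
        await asyncio.sleep(2)        # pretend this is a ~2 s network round-trip
        return f"response {i}"

    async def main():
        start = time.perf_counter()
        # Kick off all 15 "requests" at once and wait for them together.
        results = await asyncio.gather(*(fake_request(i) for i in range(15)))
        print(len(results), "responses in",
              round(time.perf_counter() - start, 1), "seconds")

    asyncio.run(main())   # ~2 seconds of wall-clock time, not ~30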

Some murkiness comes in at the intersection of the two and how it's implemented in various languages. For example, you could also dispatch each of those requests out to a thread, and if you returned all the results to the main thread before processing them you'd have the same result and nearly the same performance as the async example (+- thread dispatch overhead etc). The power and advantage come when you can use both to their advantage: you can't necessarily dispatch threads forever, because the overhead will impact you, and you can saturate your CPU. On the flip side, making something asynchronous that actually requires CPU work won't net any benefits, because the work still has to be done at some point. Asynchronous programming gives you a way to move things around to maximise your efficiency; it doesn't actually make you go faster.

JS and Python are single-threaded with event loops. Rust organises chains/graphs of async code into state machines at compile time and then lets the user decide exactly how they should be run (I'm fairly sure this is correct, but if I'm wrong someone let me know). Dotnet, to the best of my knowledge, lets you write "tasks" which are usually threads behind the scenes (someone please correct me here). I don't know what Java uses, but I imagine there's a few options to choose from. Haskell performs magic as far as I can tell. I don't know how its model works, but I did once come across a library that appeared to let you write code and it would automatically figure out when something could be async, rearrange calls to make use of batching, automatically cache and reuse similar requests, and just generally perform all kinds of Haskell wizardry.


The best way to learn, in my opinion, is to start with small, simple projects and build up from there. It can be a strange concept to get used to if you have a longtime background in synchronous programming (like I had too). I finally wrapped my head around it when picturing the programming flow as getting another direction - perpendicular to the normal vertical flow (I prefer to think in terms of geometry ...).

Japronto does not seem to be under active development any more, but async programming is definitely the way to go in order to squeeze the most performance out of the hardware at ones disposal.

I put down some thoughts around this that track my own journey to understanding the concept (sorry if this is too basic for you, take it or leave it, and please note that I'm not an expert by a long shot).

It's not guaranteed that you have the same way of picturing things, but here goes: programs normally run in one direction, executing one line at the time from top to bottom (vertically). But one or more of those 'vertical' commands may send the computer off in a horizontal direction too (async calls), that have a 'horizontal' chain of commands.

The problem that I (and I think many with me) have had a hard time grokking at first is that the 'vertical' flows continue immediately after having issued a 'horizontal'(async) call. The computer doesn't wait for the async call to come back. To do something after the async call has finished you have to tack a new call onto the result of the async call in the 'horizontal' chain of events, previously often leading to what was called 'callback hell' in Nodejs programming.

Not sure about PHP but one may get round the problem of callback hell in the JavaScript world by using async/await and promises which mimics synchronous programming, i.e. program flow in the 'vertical' direction is actually halted until the async calls return a result. Personally I find that this adds another level of abstraction that sometimes may make things even more difficult to understand and debug. I prefer wrapping async calls in some queue construct instead (which takes care of the chaining of consecutive async calls), works for me.

In short, synchronous commands are automatically 'chained' top to bottom in code, while asynchronous commands have to be chained manually after the completion of each async block of code. I believe multi-threaded programming is just a more advanced case of async calls that often need to be 'orchestrated', i.e. coordinated in a way that simple async calls usually don't need. But all types of async programming come with some special issues, of which race conditions are maybe the most common, i.e. when several async processes are trying to change the value of a shared asset in an ad-hoc manner.


[Sorry I have misinterpreted the subject, I thought it could be some phenomenon still not well understood by science.]

Placebo. There are biological bases of it (I don't believe in soul). Find these bases, study them, make a model of them. Then use proxy variables to measure it instead of trying to eliminate it statistically. Predict it in studies to avoid the need of placebo groups (and possibly of double blind methodology). Also, after it is completely measurable and its mechanisms are understood, if (very hypothetical) it has a really substantial effect, just use it to help treat patients.


What actually happens to the money that gets put into the stock market? I mean, I understand that there are people holding stocks and then people offer to buy/sell them at a given price; but how does the company plan to benefit if the stock just keeps moving back and forth between multiple third parties? Doesn't a company go public to raise money? If so, how does the company benefit from the daily changes in the stock price and when someone buys stock in the company? Does the company have a fixed number of shares that it also trades on the market?


Company raises money from the market by giving away part of the company.

If company shares are cheaper than other sources of money, then the company can buy them back, or another potential owner can buy them. It also sends a strong signal to owners and investors about the future of the company.

If company shares cost more than other sources of money, then the company can sell more of them to pay back the costly money or to expand the business.

For example: the bank lending rate is 10% per year. The company is worth $1 million and has created 1 million shares, so 1 share is $1 of company worth. The company's pure profit is 10% per year, and it needs $1 million in circulation to operate.

If the company borrows the money from the bank, it will have 10%-10%=0% profit. If the company sells 100% of its shares, it will have 10%-0%=10% profit, but this profit will go to someone else. If the company sells 50% of its shares and borrows $0.5 million from the bank, it will have 10%-5%=5% profit; 2.5% will go to the company and 2.5% to the other owners. By reinvesting this profit, the company can pay down the bank debt or expand to be worth more, increasing the company's share of the profit.
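Those toy numbers, spelled out in a quick Python sketch (everything here is illustrative, just restating the arithmetic above):

    operating_capital = 1_000_000   # dollars the company needs in circulation
    gross_return = 0.10             # the business earns 10% per year on that capital
    bank_rate = 0.10                # a bank loan also costs 10% per year

    def profit_split(equity_fraction):
        # Yearly profit kept by the original owners vs. the new shareholders.
        equity = operating_capital * equity_fraction   # raised by selling shares
        debt = operating_capital - equity              # borrowed from the bank
        net_profit = operating_capital * gross_return - debt * bank_rate
        return net_profit * (1 - equity_fraction), net_profit * equity_fraction

    for frac in (0.0, 0.5, 1.0):
        print(frac, profit_split(frac))
    # 0.0 -> (0, 0):          all-debt financing eats the whole 10% return
    # 0.5 -> (25000, 25000):  5% net, split between founders and new owners
    # 1.0 -> (0, 100000):     full 10% net, but it all belongs to the new owners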


The one-electron universe is always a personal favorite. Though more a far-fetched idea than a proper "scientific phenomenon", I'd be eager to learn more about it in layman's terms.

https://en.wikipedia.org/wiki/One-electron_universe

https://www.youtube.com/watch?v=9dqtW9MslFk


vertical alignment in CSS


this made me laugh!


vertical-align: center;

/s


Automatic differentiation. It's useful for so much computational work, but most people only get a cursory introduction to the topic (a rough intro to the minimum they need to know), whereas really understanding it seems to open up a lot of research.


Oh man, I read a super cool article about that about a year ago. It provided an algorithm for automatic differentiation using a number that, multiplied by itself, equals 0, but isn't itself equal to zero (a "dual number"). I'll try to find the link.

I don't know if this was it, but an explanation nonetheless https://medium.com/@omaraflak/automatic-differentiation-4d26...
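In case it helps, here's a minimal sketch of that dual-number trick (forward-mode automatic differentiation) in Python; a toy illustration, not any particular library's implementation:

    # A dual number is value + deriv*eps, where eps*eps == 0.  Pushing eps
    # through ordinary arithmetic makes the "deriv" part track the exact
    # derivative alongside the value.
    class Dual:
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, because eps^2 = 0
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)

        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

    y = f(Dual(4.0, 1.0))              # seed the input's derivative with 1
    print(y.value, y.deriv)            # 57.0 26.0 -> f(4) = 57, f'(4) = 26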


I'm sure there are great explanations out there but I haven't had time to read up on them, but a few of the space things that always bother me when I watch pop-sci space tv:

- "as soon as iron starts to be produced in the core of a star it instantly collapses" - I get that fusing iron costs energy rather than produces it and this causes a collapse.. but can it really be that quick? There are other fusion reactions that are still producing energy, right?

- dark matter / energy - I understand we have observations that indicate there is some type of matter we can't see but it feels a lot like saying "magic" or "the ether".

- how different size stars form - if there is a critical mass where a star "ignites" and after igniting starts pushing away from itself with the energy being produced, how do we get stars of such varying masses? Like, why didn't this 100x solar mass star start fusing and pushing the gases away before they were caught in its gravity? Do the more massive stars ignite on the same schedule but continue to suck in additional matter anyway, gravity overcoming the solar wind?


Not an expert, so take these with a grain of salt:

The core of the star is the hottest and most dense part. Greater heat and density make it easier for fusion reactions to run. If suddenly the core is made mostly of iron, then the amount of energy it produces rapidly drops. Even if there are nice, easily fusible hydrogen atoms farther out from the core, they will not be fusing at a very high rate, because the temperature and pressure is lower where they are. Also, the more easily fusible atoms remaining outside the core can't diffuse into the core fast enough to refuel it. The only possible outcome is collapse.

In some sense "dark matter" and "dark energy" are just placeholder words for "whatever thing is causing all this weird stuff to happen". This is actually very analogous to how "the ether" was a placeholder term for "whatever thing that radio waves are waves in". (Now we refer to it as "the electromagnetic field". The "ether" terminology was associated with some incorrect assumptions, such as a privileged reference frame, which is why people sometimes say it was an incorrect hypothesis. But the electromagnetic field is certainly real, it just didn't turn out to work like some people thought it did.) Scientists have observed so far the dark matter seems to behave pretty much like ordinary matter, except that it just happens to ignore the electromagnetic and strong nuclear forces. Not only does it hold galaxies together, but its gravity also bends the paths of light rays, just as we expect of anything massive. So calling it "matter" isn't too much of a stretch. It's still very mysterious, though.

Radiation pressure actually does limit the mass of stars, to something on the order of 100 to 200 solar masses, see this stack exchange question: https://astronomy.stackexchange.com/questions/328/is-there-a... That doesn't stop smaller clouds of gas from collapsing to form smaller stars, though.


Thank you for even trying to answer my rambles! :-)

I think my contention with the iron is the tipping point and how quickly it goes. Pop-sci TV makes it seem like you fuse a single iron atom and bam. Maybe it's that you fuse an iron atom and the collapse takes a day, a year, or a thousand years and that adds up; still "bam" in terms of cosmic timelines, but it's not what I hear when I listen and hear "instant collapse".

Thank you for the thoughts on dark matter and energy as well, and the link on radiation pressure, I will read it.


When I hear explanations like “space is expanding like the surface of a balloon” it’s always confusing. Because a surface is an object, separate from anything on it, but space is the thing we’re all embedded in, so we’re like drawings on the balloon.

If space is expanding why aren’t the radii of fundamental particles and their orbits and molecules also expanding? And if that were the case we couldn’t notice space expanding.


> If space is expanding why aren’t the radii of fundamental particles and their orbits and molecules also expanding? And if that were the case we couldn’t notice space expanding.

> Does space only expand somewhere else? Only between me and the Andromeda galaxy, and not _within_ me and the Andromeda galaxy? How would it know to do that?

If you start with expanding space in general relativity, and then carefully take the limit where you get back to Newtonian gravity, then it just corresponds to a classical force, specifically a very tiny force that weakly pulls everything apart, growing with distance.

This doesn't expand small objects, because they're rigid. It's the same reason that I can't make my laptop get bigger by gently pulling on the ends. On the other hand, it would pull apart two independent, noninteracting objects (such as the Milky Way and Andromeda).


On top of that, FLRW spacetime is a large-scale approximation: More realistic models should probably follow the 'swiss-cheese' approach, where local conditions can look rather different.


So molecules are like bits of glitter stuck to the surface of the balloon?


Antenna Design/RF Theory. I would love a simple text/YouTube series to learn this stuff...would definitely be good learning for the summer :)


I second this. I once worked with an assistant professor to do some of that, and in the end a lot of what we did boiled down to trial and error: do a design, run simulations in some software, and if it doesn't look good try again. It makes me sad that I didn't really encounter anyone capable of explaining things beyond trial-and-error.


* Lie algebras and Lie groups - I still don't understand this, what they're used for or how to use them in any practical sense.

* Galois Theory - I have a basic understanding of abstract algebra, but for some reason Galois theory confounds me, especially as it relates to the non-existence of radical solutions for fifth- and higher-degree polynomials

* "State-of-the-art" Quantum Entanglement experiments and their purported success in closing all loopholes

* Babai's proof on graph isomorphism being (almost/effectively) in P - specifically how it might relate to other areas of group actions etc.

* Low density parity checks and other algorithms for reaching the Shannon entropy limit for communication over noisy channels

* Hash functions and their success as one-way(ish)/trapdoor(ish) functions - is SHA-2 believed to be secure because a lot of people threw stuff at the wall to see what stuck, or is there a theoretical underpinning that allows people to design these hashes with some degree of certainty that they are irreversible?


I think the best motivation for the theory of Lie groups and Lie algebras is representation theory. Like you don't need anything besides linear algebra to know what SO_n is, but if you want to know about how it can act on a vector space then you need to think about the Lie algebra.

The other great thing about Lie groups is you can discover new and valuable groups just from pretty basic topology. Like the Spin group, which you know has to be out there as soon as you know the fundamental group of SO_n, but otherwise would be very hard to think of.

The fancypants but I think most intuitive way to think about Galois theory is also with topology. It's an algebraic version of a much more geometric, visible story, the correspondence between {subgroups of the fundamental group} and {normal covering spaces}.


Probably the best place to get started with Lie groups is to wrap your head around SO(n), which has both a nice geometric interpretation as the rotations of n-dimensional space and a concrete representation as the space of orthogonal n×n matrices with determinant 1. With a little matrix calculus you can work out what the tangent directions are at the identity matrix: they’re precisely the skew-symmetric matrices. This is the Lie algebra so(n). Where the Lie group consists of rotations, the Lie algebra consists of directions in which you can rotate something, or really velocities of rotation. This is why classical angular momentum is actually an element of a Lie algebra.
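A small numeric illustration of that group/algebra relationship (just a sketch using numpy/scipy; the particular matrix is an arbitrary rotation generator about the z-axis):

    import numpy as np
    from scipy.linalg import expm

    # A skew-symmetric matrix: an element of the Lie algebra so(3),
    # here generating rotation about the z-axis.
    theta = 0.7
    A = np.array([[0.0, -theta, 0.0],
                  [theta, 0.0,  0.0],
                  [0.0,   0.0,  0.0]])
    assert np.allclose(A, -A.T)

    # Exponentiating it lands in the Lie group SO(3): an honest rotation matrix.
    R = expm(A)
    print(np.allclose(R @ R.T, np.eye(3)))      # True: R is orthogonal
    print(np.isclose(np.linalg.det(R), 1.0))    # True: determinant 1
    print(R[:2, :2])                            # the familiar 2x2 rotation by 0.7 rad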


I’m sorry, but you used so many big words so close together that I could find no purchase with the things I know. Perhaps you could tell me what the problem is that Lie groups are supposed to solve? I read something about an infinitely symmetric transformation - the rotation of a circle is a Lie group. But then what are all the letters about - the A, B, C, etc., all the way up to E8? And why did it take 77 hours on the Sage supercomputer to solve E8?


Light: a mixture of why a thing is the colour that it is, as well as why, for reflection, the angle of incidence equals the angle of exit.

My current understanding of colour is that the colour of an object is defined by the ability of the electrons in the compound to jump between different energy levels. I don't know if that in itself is enough to result in all the colour we see.

My current understanding of reflection is that, because of the waviness of light, when lots of light gets absorbed (to my understanding, a single photon exciting a single electron to jump some amount) and re-emitted (the electron falling back down), together the light ends up forming that angle pattern. Under that understanding, single photons don't bounce in the same way rays of light do?

I don't know how correct either of those understandings are, but my understanding has been put together from so many places and I've never heard any source explain either like that so I don't trust they are correct.


You are mixing a lot of different things:

* The reflection angle laws are due to the laws of conservation, see https://en.wikipedia.org/wiki/Snell%27s_law

* For a pure colour, the colour is simply the energy of the photons. Atoms have discrete stable electron orbits, and electrons moving between these levels will absorb or emit discrete amounts of energy in the form of photons, which is why we have spectral lines (a quick numeric sketch of this follows after the list). Reality is more complicated because part of the energy may be converted to vibrations of the atom itself (phonons).

* Another factor is the perception of colour. In physics, to characterize light one measures its spectrum: the intensity of the light versus its wavelength (wavelength = speed of light in vacuum / frequency). The perceived colour of these distributions isn't always what one would expect.
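The photon-energy point is easy to put numbers on; a quick sketch (656 nm is the red hydrogen-alpha line, picked just as a familiar example):

    # E = h*c/lambda: the energy of one photon of a given wavelength.
    h = 6.626e-34        # Planck constant, J*s
    c = 2.998e8          # speed of light, m/s
    eV = 1.602e-19       # joules per electronvolt

    wavelength = 656e-9  # metres (hydrogen-alpha, a deep red)
    energy = h * c / wavelength
    print(energy / eV)   # ~1.9 eV, the n=3 -> n=2 energy gap in hydrogen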


The connection between elementary particles and group representations. I get the math, I just don't see in what way it corresponds to reality (i.e. in addition to being a bijection).


Wormhole time machines.

The idea is, you can transform a normal [0] wormhole that isn’t a time machine into one which is by:

1) accelerating one end to high speed relative to the other

2) keeping one end in a lower gravitational potential than the other

Why are either of these considered meaningful statements, never mind correct?

In the case of 2 in particular, isn’t GR supposed to require smooth values? So any time dilation effect would be almost identical on a pair of points +δ and -δ from the throat? Making it similar to the case of a gravitational potential without a wormhole?

And in the case of 1, the more I think about it, the less I understand the concept. What is being moved? An imaginary clock that would’ve been in the part of the wormhole at the far end? The apparent speed as measured going through the throat will be zero regardless of the apparent speed of the same as measured when going the long way around.

[0] yes, I know


I'll answer 2. You are correct that clocks just to either side of the throat will stay synchronized. But it's important to remember that time dilation isn't some kind of absolute effect. The easiest way to talk about it is in terms of clocks following paths.

Imagine you have two spaceships in London. At the fire of a starting gun, they both take off, and fly around, each one taking a different path. Each ship has a clock that records the time elapsed since it took off. Eventually, they both return to the same landing pad in New York, landing at the same time. Thanks to time dilation, the readings on the clocks of the two ships might be different, even though they started and landed at the same places and times.

Now imagine we start both ships from a space station in deep space. One ship doesn't leave the station at all, it just stays in its docking bay, with its clock ticking along. The other flies down to the surface of Earth, sits there for a few years, and then flies back up to the space station. Thanks to the gravitational field of Earth, the ship that stayed home has more time elapsed on its clock than the ship that went to Earth.

Now suppose that each ship is carrying one end of a wormhole. The clocks on either end of the wormhole must stay synchronized. Someone sitting in the middle of the wormhole would be able to see the inside of one ship by looking to their left, and the inside of other ship by looking to their right. The clocks start out synchronized. As you point out, no matter how the ships move about, this does not change as the ships fly around. Anyone standing in the middle of the wormhole always sees that the clocks on the wall of each ship are in sync.

So: Entering the wormhole from ship A when ship A's clock reads X means you exit at ship B, at the time when ship B's clock reads X. And vice-versa for going from B to A. Now thanks to time dilation, ship B might arrive back at the station when its clock reads X, while ship A, which stayed behind, has a clock reading of (X + 20 minutes). If you are the station master, you can go into each ship to look at the clocks, and you will find that ship A's clock is always 20 minutes ahead of ship B's. But suppose that instead of walking between the ships through the station, you use the handy wormhole that connects them directly.

Suppose you enter ship B when its clock reads Y, and walk through the wormhole. You exit at ship A when its clock also reads Y. Then you step out of ship A, and walk through the station to ship B. Its clock reads (Y - 20 minutes), since according to people on the station, ship A's clock is always 20 minutes ahead of ship B's. When you originally entered ship B, its clock read Y. It now reads (Y - 20 minutes). Time travel. By retracing your path in the opposite direction, you can also travel 20 minutes into the future.


That helps, thanks.

It doesn’t fully solve my confusion, but I suspect I need to study more before I can even ask the right next question. :)


How can metric expansion of space redshift light? Doesn't that violate conservation of momentum, since the redder light has lower momentum?

And adding on to that: will light inside a box redshift? If I weigh the box (i.e. weigh the light inside the box), then wait a bit for the light to redshift, then weigh the box again, will it weigh less?


You are correct! Cosmological redshift does violate conservation of momentum (and energy as well [1]). But conservation of energy and momentum does not actually apply if spacetime itself is globally changing.

The underlying reason for this is that Noether's theorem tells us that every physical symmetry implies a conservation law for some physical quantity. Conservation of energy and momentum comes from the fact that the physical laws are the same throughout time and space. However, cosmological expansion violates that assumption, so there is no reason that energy and momentum should still be conserved. [2]

[1]: One side note here is that, relativistically, energy and momentum are not really separate physical quantities, but instead two components of the same underlying physical quantity. Unfortunately, this quantity does not really have a good name (despite Taylor & Wheeler's attempt to call it "momenergy"). It ends up being called the momentum 4-vector, but the temporal component of this 4-vector is energy.

[2]: This is only true globally. Locally, the laws are approximately the same from one moment to the next, so conservation of energy and momentum hold for small distances and short times.


Flight. How can a plane fly when its thrust-to-weight ratio is less than one? It's like, if you can produce 10 pounds of thrust, who would look at that and say "ah ha, we can use this to keep a 100 pound machine miles in the air indefinitely"?

I understand flight from a mathematical point of view. I've actually read a few books on the subject, and I could explain how flight works to someone. However, I'm still fishing for an explanation that "feels" more satisfying though. Per the question, I still want it explained better.

EDIT: There's already a thread about flight. I asked the same question there, but phrased a bit differently: https://news.ycombinator.com/item?id=22993460


Consider the vectors of the four forces (thrust, drag, lift, weight). The thrust only needs to equal the drag at a given airspeed, not the weight of the vehicle (unless it's climbing vertically). The weight is countered by the lift generated by the wings, which, given the efficiency of airfoils, is more or less why it all works: they produce a high lift-to-drag ratio.

Put another way, weight pulls it down, thrust moves it forward, the resultant lift keeps it up, and drag limits its speed. Only rocketeers and fighter/aerobatic pilots need to really worry about the thrust to weight ratio as a constraining factor, because the vertical flight regime matters to them. From your average bugsmasher to your commercial airliner, it's not a factor (to the disappointment of pilots everywhere).

Consider that a Cessna 172 has a glide ratio of about 9:1, so it can go 9 units forward for every 1 unit of altitude it gives up. If that's hard to intuitively grasp, consider that it's traveling through a fluid. Surfing, even. The interaction with that fluid is why it works.

That any more satisfying?


So the 'glide ratio' is essentially a multiplier from 'thrust' to 'lift'?


Sorry, I was unclear. That glide ratio is unpowered (and approximate). Any thrust will only increase that ratio.


The 10 pounds of thrust is used to overcome the aerodynamic friction when the plane is at high speed. Because the "way" around the wing is longer on one side compared to the other, this creates a lower pressure on one side and a higher pressure on the other. This pressure difference both lifts and pushes the plane up. If there were zero friction you could hold the plane up with zero thrust ;)

I build this at school, using the same principle: https://no.wikipedia.org/wiki/Sivilingeni%C3%B8r#/media/Fil:...


The problem with this description is that you don't need a fancy shaped wing to fly, a flat board will work. Aerofoils provide better efficiency but are not required.


Don’t agree, unless you only read the first 1/4 of my comment?


I did, but if I use an extremely thin wing with zero aerofoil (a wing made out of credit-cards or flat balsa wood for example), one "way" around the wing is not longer. Why would pressure be different on the top vs. bottom of this flat wing?



Think of it like a wedge. 10 pounds force on the back of a wedge can easily cause 100 pounds force perpendicular to it.

But there is a tradeoff between force and displacement. Larger force = smaller displacement.

Same with a wing. The thrust force is lower than the lift force, but the horizontal displacement (velocity) of the wing is much greater than the vertical displacement (velocity) of the deflected air.

i.e. small force x large velocity = large force x small velocity
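Some toy numbers for that force/velocity tradeoff (all values made up purely to show the proportions, not real aerodynamic data):

    # Power balance: thrust * airspeed ~ lift * downwash velocity.
    thrust = 10.0        # forward force on the plane (arbitrary units)
    airspeed = 100.0     # how fast the wing moves through the air

    lift = 100.0         # force needed to hold the plane up
    downwash_velocity = thrust * airspeed / lift   # = 10.0

    print(thrust * airspeed, lift * downwash_velocity)   # both 1000.0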


Many descriptions of flight focus on pressure differentials. I think it's much easier to think about in terms of Newton's 3rd law. The plane simply needs to redirect enough air downward to compensate for its weight. The energy cost to change the direction of the air (via the wing) is very low (similar to how a train does not lose much speed when the tracks curve).


I wish I could get a clear explanation of why faster than light travel breaks causality. I have seen the math but intuitively I am having trouble grasping it especially when you have only two reference frames (most explanations will use three reference frames to show the violation)


The simplest answer is that events that happen further apart in space than in time can be observed in different orders depending on the frame of reference. If an effect can happen before its cause, causality is violated.

The three-reference-frame example is the easiest, because you can start with a frame where two events, A and B, happen simultaneously. A reference frame (say, a spaceship) flying along the line between them in one direction will observe A happen, then B. A ship flying in the opposite direction will observe the opposite, B then A.

So whose observations were correct? All of them are perfectly valid. The problem is if we allow A to cause B, in which case the "B then A" frame has the effect happen before the cause.
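If it helps to see it with numbers, here's a tiny sketch of that frame-dependence using the Lorentz transformation (units with c = 1; the event coordinates are arbitrary, chosen so the two events are farther apart in space than in time):

    import math

    def boosted_time(t, x, v):
        # Time coordinate of an event as seen from a frame moving at velocity v (c = 1).
        gamma = 1.0 / math.sqrt(1.0 - v * v)
        return gamma * (t - v * x)

    # Event A at (t=0, x=0); event B at (t=1, x=3): spacelike separation,
    # so only a faster-than-light signal could connect them.
    events = {"A": (0.0, 0.0), "B": (1.0, 3.0)}

    for v in (0.0, 0.5, -0.5):
        times = {name: boosted_time(t, x, v) for name, (t, x) in events.items()}
        print(v, times)
    #  0.0: A at 0.00, B at  1.00 -> A before B
    #  0.5: A at 0.00, B at -0.58 -> B before A in this frame
    # -0.5: A at 0.00, B at  2.89 -> A before B, by even more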


Isn’t cause and effect mediated by forces that travel at the speed of light? Simultaneous events cannot be in a causal relationship, no?


Absorption spectra baffle me and I've never seen an explanation that helps me understand.

It seems conceptually simple, except the requirement that the energy of a photon exactly match some required energy in order to be absorbed seems really unlikely, since photon energy is not a discrete quantity, and varies according to Doppler effects and other things.

It seems like the vast majority of photons would just fly through the universe without interacting with anything, unless there are other ways for photons to interact with matter besides being absorbed. (If there are other ways, they are seemingly never mentioned as a potential alternative fate for the photon).


> It seems like the vast majority of photons would just fly through the universe without interacting with anything, unless there are other ways for photons to interact with matter besides being absorbed.

Maybe someone who actually knows will chime in, but afaik:

- Most light doesn't have a fixed frequency. If it did, it would have a fixed momentum, but then you would have no idea where it is! Instead it is some superposition of many frequencies. That could be part of the story.

- Light could be stopped by something other than absorption into an individual atom. Metals don't have discrete spectra.


The match between the energy of the photon and the atom orbits doesn't have to be perfect. Part of the energy may be added or removed as vibrations (phonons).


Another frustrating one - what is heredity? If it's possible to inherit something due to a shift in behavior (i.e. it's a cultural change that leads to a biochemical change), how does that connect neatly to Mendelian inheritance?


Are you talking about epigenetics? It's relatively closely related to Mendelian inheritance. At a high level, if genes are present in your genome that aren't "on" in a previous generation, but due to changes in behavior become more active, then in a future generation that gene could be expressed more. It's not that the genes have changed their code; they are just flipped on or off.


No, more things that are modifications without any genetic basis - like myopia. Doesn't seem to have any genetic/epigenetic basis. It's purely a physical use case, where the eye matures improperly if it's only exposed to near-field work.


One hypothesis: kids learn to do what their parents do. If near field work or hobbies cause the problem then it follows that some kids will "inherit" it through mimicry.


ah, I see what you're saying. Luckily, I've looked into this specific topic!

Myopia is definitely hereditary, especially the pathologic variants that can lead to retinal tears and the like.

That being said, there is the process of refractive development that occurs early on in life. The eye develops at a frighteningly fast pace, and you achieve near-adult globe size after about 18 months. The genes that drive this refractive development could be hereditary, if that is what you're trying to figure out.

Now, we can make a claim that this adaptation during infancy could eventually affect our genome, but I have not delved into the epigenetic literature to determine if that has been borne out or not.


90% saturation in a few generations suggests that it is something other than purely hereditary mutations accumulating: https://www.nature.com/news/the-myopia-boom-1.17120


I know, I'm not saying that these are hereditary mutations. I'm saying that genes in all of us are involved in refractive development, and thus have to be inherited if mutations exist. And I'm saying that our near-work-heavy environment provides feedback that forces our eyes to develop to support better focus at those distances.


Quantum Mechanics, and why we need interpretations of it.


Why quantum mechanics: I have no idea.

Why interpretations: There is an experiment you can do that is hard to explain: Either particles are able to somehow influence each other faster than light (non-local), or the particle somehow doesn't exist except when interacting with some other particle (non-real).

Try this video: https://www.youtube.com/watch?v=zcqZHYo7ONs. The AHA moment in the video comes when you realize you can entangle the light, and that adding a filter to one stream of light somehow causes the other stream of light to also be influenced.


There are two broad reasons for having interpretations. First, because the results contradict our intuitions, and there's no agreement on which ones are sensible to abandon. Second, because there are experiments that we can't perform now (and maybe ever), so there's room for different hypotheses. This would lead to different theories, not just interpretations, but often these ideas are lumped together.


The problem with quantum mechanics is that we cannot directly observe what is going on (and the instruments we use to do the observations are also subject to quantum mechanical effects), so to explain certain phenomena we have come up with different theories that approximate what is going on. These theories work for certain cases, but nobody has come up with a comprehensive theory that can be experimentally tested and works for all cases.

Here are a few books you can read on the subject. They do a pretty good job of describing what the issue is and what the interpretations mean:

Max Tegmark - Our Mathematical Universe

Sean M. Carroll - Something Deeply Hidden

Adam Becker - What is real?

Here are some things you can google if you want to just skim the subject: Wave–particle duality, The Measurement Problem, Quantum decoherence, Copenhagen interpretation, Bell's theorem, Superdeterminism, Many-worlds interpretation, Ghirardi–Rimini–Weber theory (GRW).

Last but not least, look at the Wolfram Physics Project (https://wolframphysics.org). Its take on quantum mechanics, if you go along with the hypergraph idea, is fascinating (to me).


I have tried to grok it multiple times but it escapes my feeble mind. I have developed some intuition, but I'm not sure it's quite right. Hopefully someone can correct me:

1. Subatomic Matter is by default both mass and a wave, but when "observed" it becomes a particle as we know it i.e. with mass.

2. Atomic bonds are formed due to electrons (waves) being shared between adjacent atoms.

Hope I have some parts correct. Perhaps someone can shed some photons.


Like.... why are sub-atomic phenomena important in general?


To add to all this, the double-slit experiment. What exactly does it mean that light moves as a wave or a particle?


>What exactly does it mean that light moves as a wave or a particle?

One idea is known as the Copenhagen interpretation.

It basically says that the wave-like effects we associate with matter are merely waves of probability. Or, in terms of the double-slit experiment: light behaves like a particle, but the wave-like pattern you see is just the result of the probabilities of where the particles end up. Dark areas are areas of low probability, and lighter areas high probability.

One might imagine the light particles streaming through the slit end up having slight variation in trajectory from one particle to another (for various reasons such as interference with other particles), which results in areas where most particles end up and others where few end up... representing a wave.


Wave function collapse. As far as I can tell there is no discernible difference between a particle whose wave function collapsed (perhaps via measurement with another entangled one) and one that hasn't.


This is super naive but I just think of it as you (or whatever other particle is interacting with the one of interest) have now become part of its system. Like a causal event cone reaching you. Nothing about the particle changed, just your relationship with it (hence locking down the uncertain parameters).

I'm sure you or another physicist could point out the flaws in my mental model.


You can always subscribe to the many worlds interpretation and stop worrying about it.


Sort of trying to piggy-back on this thread in hopes someone with knowledge will be able to share it.

I've been having conversations about viruses recently and in those conversations / thought experiments I keep coming back to a point someone made to me.

Someone this person knows, with extensive medical expertise, explained that the "membrane" of the cell contains a ridiculously large number of unique types of proteins.

Understanding, in vague terms, how viruses penetrate cells, the question I pose is: is this true because each of those proteins has a unique and distinct function in the cell membrane? Or is it more a matter of scale and utility? In other words, does the observation simply indicate that our bodies are not as perfect as we'd like to think, and that the body's process for creating/repairing cells is more utilitarian, with the "rules" of cell construction flexible enough that these molecules are constructed in various ways from whatever materials are available to the cell at the time?

If this is the case it starts to make a lot of sense to me at a molecular level why certain people tend to be more susceptible to contracting certain diseases. Could a lot of it really just come down to diet, along with probably a hint (or more) of DNA's interaction with those proteins we're providing to our bodies? And to what extent does each of those play a role? DNA and the proteins.


Time dilation with regards to the "no absolute rest frame" in physics.

The famous twin thought experiment where one gets in a spaceship, accelerates away from the planet, turns around, and comes back.

The twin that stayed on earth is old and the traveling twin is young still.

On one hand, I know that time will "pass differently" for each twin....but why is it the twin in the spaceship that ages less? Why isn't it true that the entire universe accelerated away from the spaceship and then returned, leaving the entire earth young?


You're in luck my friend, the Wikipedia article does a rather nice job of describing just this "paradox": https://en.wikipedia.org/wiki/Twin_paradox

> In physics, the twin paradox is a thought experiment in special relativity involving identical twins, one of whom makes a journey into space in a high-speed rocket and returns home to find that the twin who remained on Earth has aged more. This result appears puzzling because each twin sees the other twin as moving, and so, according to an incorrect[1][2] and naive[3][4] application of time dilation and the principle of relativity, each should paradoxically find the other to have aged less. However, this scenario can be resolved within the standard framework of special relativity: the travelling twin's trajectory involves two different inertial frames, one for the outbound journey and one for the inbound journey, and so there is no symmetry between the spacetime paths of the twins. Therefore, the twin paradox is not a paradox in the sense of a logical contradiction.

There's multiple explanations included to resolve the "paradox" from different lines of argument; I particularly like this one: https://en.wikipedia.org/wiki/Twin_paradox#A_non_space-time_...


Hahah 2 separate sources each for why I'm incorrect and naive. Thanks!


The rising cost and decreasing productivity of scientific research.


When two particles get closer, their mutual gravitational attraction increases. As the distance approaches zero, the force approaches infinity. In the limit of d -> 0, the energy released -> infinity. Obviously at some scale the notion of a point mass breaks down, but even quantum theory would be problematic if we think of a wave function as describing a probability distribution, wouldn't it? What's the "official" story on this?


The official way this is handled is called renormalization.

Basically, we just declare that we have no idea what is going on at such short distances, and put in some regulator by hand to get rid of the infinity. One very crude regulator (which nobody uses, but which is suitable for demonstration) would just be to say that particles are simply not allowed to get any closer than some fixed tiny distance.

But what about the effects that occur when particles actually do get that close? Well, in most theories, whatever is happening can be parametrized in terms of a few numbers (e.g. it could shift the observed mass of the particles, or their charge, etc.). Our ignorance of what is actually happening prevents us from computing these numbers from first principles. But we can still make scientific progress, because we can treat them as free parameters and measure them -- and after that measurement, we can use the values to crank out perfectly well-defined predictions.

Repeating this process through several layers was crucial to building the Standard Model, which currently has about 20 free parameters.


What follows is very hand-wavy, and the renormalization sibling post may touch on it.

An answer is that the d -> 0 limit blowing up to infinity presumes a nice, continuous analytic function. If d can only go down to some epsilon, you can't get to that singularity.

There was an equivalent problem in the E/M space with "The Ultraviolet Catastrophe" [1], which turned out to go away if you assumed quantization.

I'm not going to claim this is a perfect analog to the gravity problem, only that a lot of physics doesn't quite work right when you assume continuity. (The Dirac delta is a humorous exception that proves the rule here, in that doing the mathematically weird thing actually is closer to how physics works, and it required "distribution theory" as a discipline to prove it sound.)

[1] https://en.wikipedia.org/wiki/Ultraviolet_catastrophe


This is definitely related, though not the whole story. Quantization did get rid of some infinities, but as GP kind of states, it also introduced others. My comment focuses on what we do for those.


> In the limit of d -> 0, the energy released -> infinity.

I believe the poster's general premise to be false. While renormalization may be useful in resolving infinities in general, I don't think it's necessary for this one.

In a quantized world you can't make both dp and dx zero at once (the uncertainty principle), so if gravity has quantum properties, you don't need to worry about what happens when d -> 0. There is no "0" distance.


I guess, but it depends on how one parses the poster's setup. You're correct that for particles interacting under a 1/r^2 force, the energy turns out finite in quantum mechanics. My comment was referring to the fact that once you quantize the field that gives rise to that force, the infinities return, but for a different reason.


I like that for a full decade, people discussed measurements at reputable labs indicating that certain radioactive species decayed at rates that varied >0.1% depending on the season, and explored possible neutrino flux influences.

The measurements were finally shown to be effects of the immediate environment on the measurement apparatus.

That detectors used in labs may vary with time by >0.1%, unknown to their users, seems pretty important. How did everybody involved not know?


Because people live in a busy world, where knowledge is not transferred with enough love and integrity. And also people are afraid to say "I don't know" and what little they know they tend not to share.

To make things more specific, those labs had uncertainty budgets with something like 20 terms for the things they measured. Each of those terms had an associated probability distribution, etc. They had uncertainty budgets for all the methods they used, and some of those were probably dated, done by someone else, etc. Who checks that? Is the check rigorous enough? Are assumptions made that don't hold up to scrutiny?

So it is actually very easy for error to creep in, I would say actually very likely.


Why does my bread last for a while and then go bad? Or is it constantly going bad and I just haven't hit critical mass of mold for me to notice?


Although your question was about mold, you may be thrilled to learn that the process of bread going stale is not fully understood. I found this to be a somewhat jarring claim in a review I read recently: https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1541-4337...


Correct. Mold will grow wherever there is food for it, like bread. It divides and grows exponentially, up to a point. That property of exponential growth, where it seems slow at first and then really rockets away? That's why mold seems to just appear overnight.
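
A toy illustration of that "slow, slow, suddenly everywhere" behavior (completely made-up numbers for doubling rate and visibility threshold):

    cells = 1                        # start from a single spore
    for day in range(1, 9):
        cells *= 2 ** 3              # assume three doublings per day
        state = "visible" if cells > 10_000_000 else "invisible"
        print(f"day {day}: {cells:,} cells ({state})")

For most of the run nothing is visible, then the last doubling or two blow past the threshold all at once.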


So would it be fair to say that when I eat fresh bread there's already a molecular amount of mold on it?


There's a molecular amount of mold already on everything. Mold spores are ubiquitous.


Why bicycles stay upright.

For every authoritative-sounding, in-depth explanation, there is an equally plausible, yet conflicting and contradictory alternative.


They don't, they fall down. Know why? Cause they are two tired!

I'll downvote myself


My theory: self-correcting action on the front tire when moving forward.

When the wheel pitches to either side, the road under the bicycle pushes the wheel back to center.

When the bicycle leans to the side, the wheel pitches as well. Now the road under the bicycle pushes the wheel back towards center, but the angle of the tire to the road is also skewed, so some of the force also gets translated into pushing the bike upright.

It's a little reminiscent of the self-centering action you get when you have a double-cone (or cylinder tapered on both ends) rolling down two rails.

I think if you fixed the front tire so there was no steering, you wouldn't get any stability from speed.


Two conditions are necessary for a bicycle to remain upright (at least with most riders.)

One is gyroscopic forces. Ever picked up a spinning hard drive? Notice that it feels strangely hard to turn in some directions? Same idea.

The other is the feedback loop consisting of the bicycle and its rider.

If the bike is stationary, it's hard to keep it upright because you have no assistance from gyroscopic forces. At low speeds, you have some assistance but not enough. At higher speeds, the bike wants to maintain its current orientation, and it's easy to feed in the slight corrective forces needed to keep it that way. Hop off the bike and it will keep going until something causes it to veer off course.

You can throw a ton of math at it, as in the paper mentioned elsewhere in the thread, but at the end of the day, gyroscopic forces and negative feedback are all that's necessary. The Schwab paper appears to show that the gyroscopic forces aren't necessary, but no bicycle in the real world is ever going to work that way except in rare corner cases, e.g., if you're one of those riders who can stay upright at a standstill.


That is indeed one of the authoritative explanations, and in a few seconds of ad-hoc search we may find at least two refutations from authoritative sources.

For example, gyroscopic-forces-as-stabiliser doesn't need the Schwab paper's "ton of math" to be undermined. A simple counter-rotating wheel was used empirically at Cambridge to show as much, alongside notes that gyroscopic forces are relevant to the dynamics of a loaded bicycle but misconstrued: far from helping hold it upright whilst ridden, they induce instability at the beginning of a change in direction, and more so at speed; that's the counter-steering phenomenon familiar to cyclists and relied upon by motorcyclists.

Then there's a simpler observation that can be made: people have to learn to ride a bicycle. The fact it stays upright when rolling unloaded, but not when loaded, is indicative of how small the gyroscopic effect is, not how significant it is. Ergo, that argument would suggest, it is tiny shifts in body position that contribute all of the stability.

And then others, proposing further explanations, etc etc ad nauseum.

I have come around to the view that in fact they don't stay upright, and they are almost always falling over, but in a many-branched configuration of the universe our observer effect sends us preferentially down the vanishingly unlikely path where they didn't, and there are an uncountably many alternatives of Me that have nothing but knee scars to show for it.


Sorry, I don't buy anything beyond what I stated. Those papers are exercises in violating Occam's Razor. Your quantum model makes more sense. :)

When you learn to ride a bike, you're simply training the feedback mechanism. Much like a PID controller, your brain has to keep track of the amount of error and null it out with proportional and integral terms (at least). Once those constants are dialed in, it's "just like riding a bike" -- they're yours for life.
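
In case "PID controller" is opaque, here's a minimal sketch of the proportional-plus-integral part of the idea (made-up gains and units, nothing to do with real bike dynamics):

    kp, ki = 0.8, 0.1                # made-up proportional and integral gains
    lean, integral = 5.0, 0.0        # start with a 5-degree lean
    for step in range(15):
        error = -lean                # the target lean is zero
        integral += error
        lean += kp * error + ki * integral   # apply the steering correction
        print(f"step {step}: lean = {lean:+.3f}")

The error gets nulled out toward zero over a few iterations; learning to ride is, in this analogy, finding gains that do that reliably.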

Then there's the matter of learning which way to lean so that the gyroscopic instability inherent in turning doesn't send you into the nearest ditch...


Occam’s razor is for hypotheticals. You’re rejecting an actual physical demonstration, by a reader of engineering dynamics and vibration at Cambridge.

The feedback mechanism you’ve described is likely correct, and also a complete furphy, since the central nervous system is not part of the bicycle.

All of which is par for the course and rather confirms the point, viz. that people will happily hold forth on any explanation they care to latch on to, secure in the knowledge that the total absence of consensus makes it impossible to say, definitively, “that is wrong”


Great discussion, I agree with inopinatus who, if I am not mistaken, supports my position that the science is not quite out on this topic yet. Many explanations exist, none seem to be all-encompassing.

That being said, it doesn't answer the question posed in this forum: "What scientific phenomenon do you wish someone would explain better?"

Instead, it answers the question: "What scientific phenomenon do you wish someone could explain?"


I guess my question on this specific topic would be, "What aspects of bicycle operation are not adequately explained by gyroscopic action under the control of a human rider?" That would give us a good basis for inquiry into other possible principles.

Sure, a bike could work some other way, but my point is, it doesn't need to. Anyone who has ever picked up a hard drive should understand how a bicycle remains upright. What else is there to know? It's not like an airplane wing, where the "obvious" conventional wisdom is inadequate, misleading, or incomplete.


Well, apparently it is.


My hunch is that the science is not quite out on it yet and that the intuitive explanations are not accurate.

For example, see "Some recent developments in bicycle dynamics" (2007). Especially the folklore section:

"The world of bicycle dynamics is filled with folklore. For instance, some publications persist in the necessity of positive trail or gyroscopic effect of the wheels for the existence of a forward speed range with uncontrolled stable operation. Here we will show, by means of a counter example,that this is not necessarily the case.

https://pdfs.semanticscholar.org/bb70/d679c5a2ff67dd2a1a51f2...


Maybe when you stop to think, the balance of explanations tips over one way


Entropy. Sometimes you read that it's a measure of randomness; sometimes, information. Aren't randomness and information opposites?


In my opinion, saying entropy is a measure of randomness is confusing at best and wrong at worst.

Entropy is the amount of information it takes to describe a system. That is, how many bits it takes to "encode" all possible states of the system.

For example, say I had to communicate the result of 100 (fair) coin flips to you. This requires 100 bits of information as each of the 100 bit vectors is equally likely.

If I were to complicate things by adding in a coin that was unfair, I would need fewer than 100 bits, as the unfair coin's outcomes would not be equally distributed. In the extreme case where 1 of the 100 coins is completely unfair and always turns up heads, for example, I only need to send 99 bits, as we both know the result of flipping the one unfair coin.
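
For what it's worth, you can check those numbers with the standard formula H = -sum(p * log2(p)); a quick sketch:

    from math import log2
    def entropy(probs):
        # Shannon entropy in bits of one discrete outcome
        return -sum(p * log2(p) for p in probs if p > 0)
    fair = entropy([0.5, 0.5])       # one fair coin: 1.0 bit
    rigged = entropy([1.0])          # always-heads coin: 0.0 bits
    print(100 * fair)                # 100 fair flips -> 100 bits
    print(99 * fair + rigged)        # 99 fair + 1 always-heads -> 99 bits

Independent flips just add their entropies, which is where the 100 vs 99 comes from.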

The shorthand of calling it a "measure of randomness" probably comes from the problem setup. For the 100 coin case, we could say (in my opinion, incorrectly) that flipping 100 fair coins is "more random" than flipping 99 fair coins with one bad penny that always comes up heads.

Shannon's original paper is extremely accessible and I encourage everyone to read it [1]. If you'll permit self-promotion, I made a condensed blog post about the derivations that you can also read, though it's really Shannon's paper without most of the text [2].

[1] http://people.math.harvard.edu/~ctm/home/text/others/shannon...

[2] https://mechaelephant.com/dev/Shannon-Entropy/


Information is actually about _reduction_ in entropy. Roughly speaking, entropy measures the amount of uncertainty about some event you might try to predict. Now, if you observe some new fact that has high (mutual) information with the event, it means the new fact has significantly reduced your uncertainty about the outcome of the event. In this sense, entropy measures the maximum amount of information you could possibly learn about the outcome to some uncertain event. An interesting corollary here is that the entropy of an event also puts an upper bound on the amount of information it can convey about _any_ other event.

I think one frequent source of confusion is the difference between "randomness" and "uncertainty" in colloquial versus formal usage. Entropy and randomness in the formal sense don't have a strong connotation that the uncertainty is intrinsic and irreducible. In the colloquial sense, I feel like there's often an implication that the uncertainty can't be avoided.


I would recommend the substitution s/randomness/uncertainty/ since it seems to be the more useful concept. With it, the equivalence between the two ways of thinking becomes clearer: the uncertainty you have before learning the value of a bit is equal to the information you gain when learning its value.

Let's use an analogy of a remote observation post with a soldier sending hourly reports:

    0 ≝ we're not being attacked
    1 ≝ we're being attacked!
Instead of thinking of a particular message x, you have to think of the distribution of messages this soldier sends, which we can model as a random variable X. For example, in peaceful times the message will be 0 99.99% of the time, while in wartime it could be 50-50 in case of active conflict.

The entropy, denoted H(X), measures how uncertain the central command post is about the message before receiving it, or equivalently, the information they gain after receiving it. The peacetime messages contain virtually no information (very low entropy), while the wartime 50-50-probability messages contain H(X) = 1 bit each.

Another useful way to think about information is to ask "how easy would it be to guess the message" instead of receiving it. In peacetime you could just assume the message is 0 and you'd be right 99.99% of the time. In wartime, it would be much harder to guess, hence the intuitive notion that wartime messages contain more information.
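
If it helps, here's that calculation as a couple of lines of code (binary entropy, using the 99.99% figure from above):

    from math import log2
    def h(p):
        # entropy in bits of a yes/no message that is 1 with probability p
        return -sum(x * log2(x) for x in (p, 1 - p) if x > 0)
    print(h(0.0001))   # peacetime report: ~0.0015 bits
    print(h(0.5))      # wartime report:    1.0 bit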


Another useful topic to understand that's related to this: Hamiltonian Mechanics (https://en.wikipedia.org/wiki/Hamiltonian_mechanics), which is a whole other way to express what Newton described with his physics, but in pure energetic terms.

Entropy is usually poorly taught; there are really three entropies that get conflated. There's statistical-mechanics entropy, which is the math describing the random distribution of ideal particles. There's Shannon's entropy, which describes randomness in strings of characters. And there's classical thermodynamic entropy, which describes the fraction of irreversible/unrecoverable losses to heat as a system transfers potential energy to other kinds of energy on its way towards equilibrium with its surroundings or the "dead state" (a reference state of absolute entropy).

These are all named with the same word, and while they have some relation with each other, they are each different enough that there should be unique names for all three, IMO.


Entropy is measured in bits (the exact same bits we talk about in computing), which can help unify the ideas of information and randomness:

Which can contain more information: a 1.44 MB floppy disk or a 1 TB hard disk?

Which password is more random (i.e. harder to guess): one that can be stored in only 1 byte of memory or one that is stored in 64 bytes?

Information theory deals with determining exactly how many bits it would take to encode a given problem.


Let me give an analogy and then a solution to your paradox.

Temperature. Sometimes you read that it's a measure of warmth; sometimes cold. Aren't hot and cold opposites?

Yes, hot and cold are opposites, but in a way they give the same kind of information. That's also true for information and randomness. Specifically, little randomness means more (certain) information.


The more randomness, the more information bits you need to encode the observed outcome. But I can see where your dissonance comes from: you probably parsed "information" as "information I already have about the system", not "information I need to describe the system state".


Judging by the answers to your question (most of them wrong or with serious misunderstandings), it seems you hit the nail right on the head.


The way I think of entropy is like the capacity of a channel. The channel can be filled with less information than its capacity, but not more.


Why are clouds flat on the bottom?


Because that's (roughly) the Lifted Condensation Level: https://en.wikipedia.org/wiki/Lifted_condensation_level

Water vapor around the LCL starts condensing and turning from a gas into liquid cloud droplets. This process happens considerably faster once it begins for a variety of reasons, so once you can have cloud droplets, you get a ton of cloud droplets - not a gradual transition from water vapor to cloud. It's almost like a light switch.

Most air masses are relatively homogeneous anyway, so unless there are underlying processes causing things like undulatus asperatus, the cloud base will appear very, very flat over a large area.


Same reason a loaf of bread is. Clouds rest on the "surface" of the higher-density air below them, and flatten on the bottom because of their own weight.


Gravity wells. I only realised in my 20s that the only reason satellites can orbit the Earth without crashing into the ground is by going sideways really, really fast. So as they inch closer to the ground, they also travel parallel to the ground fast enough so that they stay approximately the same height from the ground.


This is Newton's Cannonball. Honestly, I've found the best way to learn more about orbital mechanics is with a simulator - Kerbal Space Program is a fun version.
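
If you'd rather poke at it in code than in a game, here's a minimal 2D sketch of the same cannonball idea (made-up units with GM = 1; a crude integrator, not a real one):

    import math
    GM = 1.0                          # gravitational parameter in made-up units
    x, y = 1.0, 0.0                   # start 1 unit from the planet's center
    vx, vy = 0.0, 1.0                 # sideways speed; 1.0 gives roughly a circular orbit
    dt = 0.001
    for _ in range(20000):            # roughly three orbits
        r = math.hypot(x, y)
        vx += -GM * x / r**3 * dt     # gravity always points at the origin
        vy += -GM * y / r**3 * dt
        x += vx * dt
        y += vy * dt
    print(math.hypot(x, y))           # still ~1.0: it keeps "missing" the ground

Set the sideways speed to something much smaller and the radius shrinks toward the planet, which is exactly the cannonball picture.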


What other subjects would simulation enlighten people on?


Turing equivalence comes to mind.


Human Biology, particularly the "bigger scale" things like joints & ligaments.

I have arthritic knees, and I'd like a better understanding of how joints work, and where the various clicks, pops and swellings come from.

It's easy to find really simple things, but harder to understand "how things go wrong".


This won't help with everything you asked for but there is neat piece of software called Complete Anatomy that helped me with this sort of thing.

It renders an interactive model of the human body and you can toggle different layers, from bones to nerves, to various layers of muscles and ligaments. It also contains animations of treatment exercises/stretches, surgeries and highly detailed models of various biological components.

It helped me understand my injury and why certain exercises help. It's paid software but it comes with a free trial.


I've always had trouble following Searle's Chinese Room argument as it applies to the nature and identification of intelligent action. I've never understood how the Chinese Room shows what its adherents say it shows, so that would be one topic I'd like to see more perspectives on.


"Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room. "

I have trouble with this too. I think it's actually incorrect, or at least misleading. I think what it's _trying_ to say is that just because an entity can perform a complex task doesn't mean it understands that task.

I think the more important result of this argument is that certain complex tasks can be "pre-baked" into rulesets _by an existing intelligence_. To me this just means that intelligent entities can sort of copy parts of their intelligence into other entities which are not intelligent i.e. computer programming.

I think with this argument they're trying to say "a series of sufficiently complex if statements isn't necessarily intelligent" by choosing something we know computers are good at - string manipulations and applying it to something we consider intelligence - language translation.

The argument holds that the computer is obviously not intelligent because it's just a function that takes a character and outputs another character.

But it needs to be a convincing translation, right? The computer would then be able to spit out not just accurate translations but also properly converted cultural idioms and new combinations of words where one didn't exist in the other language. That requires context of surrounding characters, memory of common language use, statistical analysis and creativity.

One implication that arises from this argument is actually about humans. How do we know that we aren't all just incredibly detailed rulesets ourselves without any actual understanding?

Well, first off - we technically can't prove it for anyone other than ourselves. More pragmatically, it's obvious that we, unlike the computer translator, can probe ourselves and be probed by others on whether or not we understand the subject. It's not like we're a bunch of Boltzmann brains that just happened into existence. We evolved intelligence in order to survive, not to "trick" other intelligent beings into thinking we're more intelligent than we are. There's no need for that. There's no one smarter around that we need to "trick".


I have wondered if heat and photons impart a partial charge on atoms and molecules which causes several phenomena. Faster Brownian motion due to the increased repelling action of stronger charges which cause pressure/volume changes in gasses.

Also are these charges responsible for some weather effects such as the jet stream. In a tornado is the negative charges on the dry side of the dry line interacting with the moist air on the wet side really just a local intense acceleration of the dry air trying to "get to" the oppositely charged moist air?

Are the rotation of low and high pressure systems basically due to the same condition? Is lightning also just basically a flood situation of the charges?


Charge can be pretty easily shown to be a conserved, quantized value. So chargeless particles imparting a partial charge would be a big no-no.


I didn't mean it was equivalent to the charge of an electron. What I was wondering is: how do heat and light impart more kinetic energy to physical objects? Light may be quantized as photons, but heat seems to be more of an analog property. And what are the ramifications of these energized particles in small spaces like enclosed gases, or in large unenclosed spaces with effects like the weather?


Spin aka intrinsic angular momentum


Basically, we can think of angular momentum in two parts: the momentum of an object orbiting another, and the momentum of the object spinning on its own axis.

At the subatomic level, we observe that electrons have some extra angular momentum, beyond what we'd expect from their "orbits". We call that spin, because it's intrinsic, like the spinning of a macroscale object.


What's the point of defining summation methods for divergent series? Do they have any connections to any other area of mathematics or physics? Is analytic continuation of complex analysis relevant to these things? How about p-adics?


This is not an answer, as this is something I've only just started wondering about myself, but, if I understand correctly, perturbation theory [1] uses divergent series (and wants divergent series over convergent series in certain conditions(?)) in its methodology.

I've started but haven't finished the physics lectures by Carl Bender on mathematical physics, where he features perturbation theory prominently [2].

If someone could chime in on this, I would also be appreciative. Also if someone has better resources to learn about perturbation theory, I would also be appreciative.

[1] https://en.wikipedia.org/wiki/Perturbation_theory

[2] https://youtu.be/LYNOGk3ZjFM


Consciousness.

How is it possible that the thread is up 5 hours and ctrl-f consciousness returns nothing?


Because no one knows what it is or even agrees on a definition. Consciousness is a pre-scientific concept at this point.

There are attempts to rigorously define it. I'm currently reading this paper, but not really convinced: https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...


Here are things that I consider to be defining properties of human consciousness:

a) The ability to observe an instance of a multidimensional universe, each instance a mapping of a point moving along a continuous trajectory whose direction is constrained by the arrow of time (i.e., increasing entropy)

b) the ability to impart a force to change the direction that trajectory follows


Consciousness has no properties, it can't be studied scientifically.


Evolution.

How does large-scale randomness result in such complicated and intelligent systems, while after decades of research and all the computing power we have today, we still struggle to model and reproduce the intelligence of an insect?


Agreed. I find it difficult to even ask for a deeper understanding of evolution without first explaining yes, I know basically how DNA works; yes, I understand natural selection; no, I'm not a young-earth creationist. I have a solid grasp of basic biology and genetics and I'd just like to comprehend how randomness can lead to such incredible complexity of organisms within billion-year timescales (and not, say, million-trillion-trillion year timescales).

I feel as though it's simply Occam's Razor to assume that evolutionary complexity is the result of randomness because I know of no better explanation. Is there a self-reinforcing process at play? (Natural selection partially counts as reinforcing, I just feel like randomness is still the engine that powers it).


Four billion years is a lot of time for that randomness (via selective pressure of the environment or of life itself) to produce tons of crazy results.


Simulating trillions of moving parts is not the same as explaining or even understanding something.

If you want to feel the truth of evolution in your bones, you really do need to be familiar with biology on both the molecular and cellular level. You can get a feel for it with less, but it won't ever be obvious how and why it works unless you know it at that level. I don't mean to sound exclusionary - it really just does require a ton of background knowledge.


"The Selfish Gene" was the book that finally made this click for me.


Unfortunately, The Selfish Gene leaves it an open question of how the brain developed its capacity to represent and model the world, which is the very notion of intelligence.

From the book:

> The evolution of the capacity to simulate seems to have culminated in subjective consciousness. Why this should have happened is, to me, the most profound mystery facing modern biology. There is no reason to suppose that electronic computers are conscious when they simulate, although we have to admit that in the future they may become so. Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself.


Low, medium, and high-frequency radio waves (below 30MHz, wavelengths longer than 10 meters) are a form of EM radiation. Which means that an antenna is 'transmitting' low-energy, long-wavelength photons. (Which gets us into the whole wavicle question.) But the signal from a radio station appears to be continuous.

So: how to picture this? Is the signal made of discrete 'photons' overlapping, or combined somehow? Or is it that the 'wave-like' aspect of these photons is so predominant at these frequencies? (I've grappled with this one for a long time.)


A radio station in the 1MHz range is transmitting in the 10s of kW range. The energy of a single photon is tremendously small. So if you divide out the large transmitting power by the energy of a single photon, you'll see that an unimaginable number of photons is being released, giving the impression of a continuous process.

More detail:

Say you have an AM station that transmits at frequency f = 650kHz and uses power P = 50kW.

The equation for a photon's energy is E = hf, where E is the energy, h is Planck's constant and f is the frequency. Here h = 6.63*10^(-34) Js and f = 6.5*10^5 Hz. Thus the photon's energy is E = 4.31*10^(-28) J. This is a very tiny number.

The number of photons per second is n = P/E = 1.16*10^32.

Let's try to visualize this. Avogadro's number is 6.022*10^23 for each mol of something, so if we divide it out from n, we see that there are almost 200 million mol of photons being released every second!

Water is 18g/mol, which takes up about 16 cm^3. 200 million moles of water is about a million gallons. If a photon was like a water molecule, a "water AM station" would be releasing about a million gallons of water per second.
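
The same arithmetic in a few lines, in case anyone wants to play with the numbers:

    h = 6.63e-34            # Planck's constant, J*s
    f = 650e3               # AM carrier frequency, Hz
    P = 50e3                # transmitter power, W
    E = h * f               # energy per photon, ~4.3e-28 J
    n = P / E               # photons per second, ~1.2e32
    print(E, n, n / 6.022e23)   # last number: ~2e8 mol of photons per second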


Thanks, Vlasev, helpful ... easy to scale down to my hometown's measly 2,000 gallon AM station. I guess I'll need to go back to the radio physics math and puzzle it out more (directional arrays, ground waves, and the like). The idea of a photon with such a minuscule energy yet a wavelength of a thousand meters is a tad non-intuitive to me.


I had a very mathy explanation of Spinodal Decomposition in my graduate work. I wonder if there's a more intuitive explanation than just "that's how the energy landscape works".


Hmmm, you might look up how dendrites form. They're roughly analogous in how the thermodynamics are favorable to forming more complex structures, IIRC. But it's easier/more intuitive to see how dendrites form. Dendrite formation is also a huge problem in many fields, like electronics manufacturing (e.g. tin whiskers).


Quantum entanglement. How do they know it's happening?



The Quantum Computing for Computer Scientists video by Microsoft Research (https://www.youtube.com/watch?v=F_Riqjdh2oM) really hammered it home for me. It goes slowly and the presenter, in his own words "shuts up and does the math".


The periodic table.

Can the properties of the elements be computed from the first principles of particle physics, or do you need to observe the atoms in real life to figure them out? For example, some isotopes are stable and others have a finite half-life. Can you know beforehand, or do you have to observe the decay? Can you compute exactly the mass of each atom without measuring it? Can you compute its electronegativity? Etc.


Basic probability. How come when I flip a coin it is just as likely to come up heads or tails even if it has just been heads 5 times in a row?

I understand the physical properties of the coin make each flip an independent event, but if I were to run the experiment multiple times, it seems like the number of times it comes up heads after 5 heads shouldn't be an even probability; it seems like it should be unlikely, since 6 heads in a row is a rare event.


Yeah, but 5 heads followed by 1 tail is also a rare event. Basically, after drawing 5 heads you have a choice between two rare events which are equally probable. The same after drawing anything else, not just 5 heads.
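
If you don't trust the argument, you can simulate it: only keep the runs that start with 5 heads and look at flip number 6 (a quick sketch):

    import random
    heads_after = tails_after = 0
    for _ in range(1_000_000):
        flips = [random.random() < 0.5 for _ in range(6)]
        if all(flips[:5]):               # only keep runs that start with 5 heads
            if flips[5]:
                heads_after += 1
            else:
                tails_after += 1
    print(heads_after / (heads_after + tails_after))   # comes out ~0.5

The rare part already happened by the time you condition on it; the sixth flip is still 50/50.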


this 3blue1brown video might answer your questions

https://m.youtube.com/watch?v=8idr1WZ1A7Q


The mathematics of training a neural network. I understand how they work once trained, but that you can train them almost seems too good to be true.


You likely understand minimizing a continuously differentiable function. Now, you are minimizing a continuously differentiable error function (which measures the difference between the output of your hypothesis function and actual data), with respect to adjustable weights and biases that determine the value for the neurons going from one layer to the next. The complexity is in that the hypothesis function is a composition of many functions due to the layering, and there usually are a large number of neurons. However, you are basically doing the same thing many times.
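
A minimal sketch of that "same thing many times" idea, shrunk down to a single neuron trained by plain gradient descent on a squared error (toy data and learning rate):

    data = [(x, 2 * x + 1) for x in range(-5, 6)]   # samples of y = 2x + 1
    w, b, lr = 0.0, 0.0, 0.01                       # weight, bias, learning rate
    for epoch in range(500):
        for x, target in data:
            err = (w * x + b) - target
            w -= lr * err * x        # gradient of 0.5*err**2 with respect to w
            b -= lr * err            # gradient of 0.5*err**2 with respect to b
    print(w, b)                      # approaches 2 and 1

A deep network is this same update applied to millions of weights, with the gradients computed through the layered composition by the chain rule (backpropagation).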


A partial answer is that big fully connected neural networks _are_ pretty much untrainable. Neural networks only became successful once programmers started constraining the space they were optimizing over pretty radically (like requiring convolutional layers if you know you are trying to detect something local).


Also, for me, how all these advances in ML/computing are alleged to be on the horizon, when I hear that A/D/C(NN)s are actually so slow in learning. How can something that has to be trained with 1M of <xyz> be smart? What's the next thing?


You and I probably took 4 or 5 years to recognize the alphabet. Cut the machines some slack.


On the contrary, it is pretty simple, it is just the same process of "a little bit to the left, a little bit to the right" when you are trying to hang a poster.


On the contrary, this is a problem that goes "complicated-easy-complicated" the more you think about it. You're at stage two, so here's the mindfuck: Why aren't we constantly stuck at local minima? Surely these problems we're throwing at ANNs aren't all convex.


How do you KNOW you are not at a local minimum? You are putting the cart before the horse.


Lagrangian and Hamiltonian mechanics. These are usually presented in a very mysterious manner. Also, I think most engineers are never taught them.


I haven't seen a good explanation of Einstein's twin paradox. Have you? The only clear explanation I've seen is based on the fact that the traveling observer feels acceleration when turning back, and that this is what breaks the symmetry. But it appears that explanations based on acceleration are wrong, because acceleration doesn't seem to be necessary to explain the paradox.


Double slit experiment. There's a lot out there detailing what happens but relatively few explanations go into why observing the state of a photon results in it "choosing" one slit over the other. I suspect many come away with the impression that it's magic despite there being a reasonably simple explanation for what's going on.


This video explains the double slit experiment using probabilities and it's the only one that made sense to me without bringing in quantum mystery: https://www.youtube.com/watch?v=iyN27R7UDnI&feature=share

It also goes on to explain the delayed choice quantum eraser experiment but I don't think that's quite convincing.



Electricity. What is it? Is it related to electrons? I thought it was? If the plug in my house is AC, then why is one wire "hot" and one "neutral"? (On plugs in North America, at least.) If there is a ground wire, then why are plugs polarized? Why do some things come with a three-prong plug and others don't?


Plasma balls, and how to create one. I've seen two of those with my own eyes: one in nature after a lightning strike to a lone maple tree in a pasture, and one accidentally man-made via 50,000 VDC. And none of the literature shows how such a plasma ball can travel in a fairly straight line at a steady level above the ground.


Morphological development of organisms. Mainly, how do cells form the shape of the organism and how do they specialize to different type of tissues? How do cells know where they are? What is the mechanism that puts the constraints on the shape of the body? I can understand mitosis but the rest from there is magic.


Cosmic inflation. Yes, the universe is smooth, but how is that enough to justify this unimaginable expansion? There seems to be no proof or possible experiment, yet the scientific community apparently accepted this theory, with its abrupt start and abrupt end, second only to the big bang itself in its scale and energy.


Black holes - where does the 'extra' mass/gravity come from?

I've been watching a lot of documentaries lately, and I can't figure out how a star that _radiates_ light collapses and suddenly light can't escape? Doesn't that mean the black hole has more mass/gravity than the star that created it?


What teachers tell you in school is that as you get close to a uniform sphere, the gravitational force increases as the inverse square of the distance to the center of the sphere.

What they don't tell you is that once inside the sphere the force decreases linearly, because the mass ahead of you is partially balanced by the mass behind you.

With this you can see that the gravitational pull of a planet/star/spheroid is largest at its surface. So, if something happens to make a star shrink by some factor, the gravitational pull at its surface increases by the square of that factor, even if the mass of the star remains the same.

Actually I believe stars eject a lot of mass when they become black holes, and this is just a Newtonian argument for an intrinsically relativistic phenomenon, but I hope you get the gist of it.
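
To put rough numbers on that (a toy Newtonian calculation; the radii are just ballpark figures for a roughly solar mass squeezed smaller and smaller):

    G, M = 6.674e-11, 2e30                    # ~one solar mass, in SI units
    for R in (7e8, 7e6, 1e4):                 # sun-like, white-dwarf-ish, neutron-star-ish radii (m)
        g = G * M / R**2                      # surface gravity
        v_esc = (2 * G * M / R) ** 0.5        # escape velocity from the surface
        print(f"R={R:.0e} m   g={g:.1e} m/s^2   v_esc={v_esc:.1e} m/s")

Same mass, smaller radius, far stronger surface gravity; once the escape velocity at the surface reaches the speed of light you have (loosely speaking) a black hole.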


Yeah - a lot of mass gets 'blown away', which really made it even stranger that gravity increased. While I found your explanation illuminating, I still don't see how you can end up with 'more' gravity than you started with. Mass determines gravity, right? Not volume?


It doesn't, it's just a lot denser. A star has a lot of inner pressure from the ongoing fusion inside of it. When that stops or weakens, the star collapses under its own gravity; as soon as its radius is smaller than the event horizon (i.e. the radius at which the escape velocity reaches the speed of light), you have a black hole.


That's basically what I keep seeing in the documentaries, but it's still not sinking in. For instance, being denser I could see it making a 'deeper' 'dimple' in the rubber membrane demos they use...but it would also be a much smaller diameter dimple, so only things really close and really small could 'fall' into it ?


How a single hormone can regulate so many different things. Looking at Wikipedia, a single hormone like progesterone which is apparently one molecule regulates a plethora of effects. How is this implemented? As a coder, if I had to use one variable to simultaneously regulate 10 different things I would go crazy.


Fermat's Last Theorem: a^n + b^n = c^n. For n > 2, why are there no positive integers a, b, and c that satisfy the equality?


One of my advisors in college said "The proof takes about 10 years of graduate mathematics to understand".


It's not my specialty, but there are easier proofs today. Probably 5 years of graduate math is enough ;).

@GP: I'd recommend trying to read and understand the proof for n = 3. (And why that approach does not extend to bigger n.) It requires only undergraduate-level math and it is much, much easier. It uses very different tools, so it will give you very little insight into the general proof, but it will give you some taste of the difficulty of the problem.


Is there really no intuitive way to communicate the answer to that question without needing 10 years of grad math? I find that to be somewhat hard to believe


No, there is no intuitive way to communicate it. The theorem has taken >350 years to prove, which makes it clear that the proof is not some intuition that was somehow missed by hundreds of people for centuries.

Fermat's Last Theorem (book) by Simon Singh is the source to check out if you're interested in the details of how it eluded mathematicians and a general idea of how the problem was solved, without getting too technical. It's a great story well told.


Thanks for the recommendation good sir!


Well, the answer is "simple": it's because the modularity theorem was proven (or better, the Taniyama–Shimura conjecture was proven).

But why does that solve the problem? Because it connects two branches of mathematics (modular forms and elliptic curves) in a way that proves solutions to equations of that form cannot exist (where the exponent is > 2).

Though there probably is an easier way of explaining it, it is strongly suspected that Fermat got the wrong idea there.


I still like the idea that Fermat had a legit proof and that one day a simpler one will be found.

I also like that FLT follows easily from the Beal conjecture, which seems overlooked. Maybe it's overlooked because it's closely related to some other (harder to understand) conjectures.


If there were a simple and intuitive way to communicate the answer then I would suggest that we probably would have figured it out in our 300+ years in which this was one of the most famous unanswered questions in mathematics.


The fundamentals of computer science. Unlike many who comment here regularly, I am not a programmer or developer, and though it might seem silly, the way in which a bunch of code written in a programming language of any sort translates to a computer physically or electrically causing things to "happen" has always been a bit fuzzy to me. I know that electronic systems and machines translate all instructions to binary code, but from there how does the rest happen? Such as an OS on my laptop working the way it does, or, more specialized, an unmanned spacecraft autonomously navigating its way through the solar system and doing complex physical tasks. Anyone have a suggestion for a good starting point for learning these fundamentals and on upwards?


“Code: The Hidden Language of Computer Hardware and Software” by Charles Petzold should be what you are looking for. Author describes computing from the very ground up, and in clean, approachable manner. https://en.wikipedia.org/wiki/Code:_The_Hidden_Language_of_C...


It's a long road to the top (i.e. something like a python program), but you can start at the bottom and try to understand how basic hardware gates (and/or/xor/register) work and can perform extremely elementary yet useful tasks.

Then it's just a "simple" matter of stacking up a billion of the things to get them to do complex programs.
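
To give a tiny taste of the bottom of that stack, here's a sketch of how a couple of logic gates already add one-bit numbers (a half adder), written in Python purely for convenience:

    def half_adder(a, b):
        # XOR gives the sum bit, AND gives the carry bit
        return a ^ b, a & b
    for a in (0, 1):
        for b in (0, 1):
            s, carry = half_adder(a, b)
            print(f"{a} + {b} -> carry {carry}, sum {s}")

Chain these into full adders, full adders into an arithmetic unit, add registers and a clock, and you're on the road to something that can execute instructions.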


There's a great course called NAND to Tetris, which teaches, by example, at increasing levels of complexity starting from simple logic gates up to an actual program. It's very good. https://www.nand2tetris.org/


Read "Charles Petzold - Code: The Hidden Language of Computer Hardware and Software", it is a good starter.

https://www.nand2tetris.org/ may also be insightful, but I did not look further into it.


Thanks for all the replies and suggestions everyone. I'll go through these very sources one by one.


Protein Folding


What would you like to know about protein folding? How it even happens, or how we are trying to "figure it out"?


Isn't protein folding considered one of the biggest problems ever?


sure is.. but somehow biology manages it with few issues. It seems like it has a lot to do with the presence of chaperone proteins that babysit the protein as it's coming off the ribosome, preventing pieces of it from sticking together that shouldn't touch, etc.

https://en.wikipedia.org/wiki/Protein#/media/File:Chaperonin...

I'm working on making a model of this chaperone complex relative to a folded protein to get a sense of how it might be interacting with the amino acid chain before it becomes globular


Protein folding is amazing! Seems like it happens in realtime as it's synthesized by the ribosome.. I'll dig around and see if I can find a better resource for you.


Genetics. I still have no intuition for how combining my wife’s and my DNA has resulted in children with traits from both of us. My brain always tries to imagine interleaving two binaries and hoping for a resulting program that works a bit like the two sources, which it of course wouldn’t.


Don't think of it as a monolith. It's more like thousands of microservices and thousands of different binaries all interacting in a whole bunch of different and convoluted ways. Sexual reproduction is jumbling together two different people's thousands of microservices and seeing what happens. Or, if each program relies on one library, imagine just randomly mixing around different libraries. Genes and proteins are discrete parts that can be swapped out and in.


The analogy works if each binary were laid out the same - the first MB is eye color, the 2nd MB is hair color, etc. Each segment of code is also duplicated.

When you produce sex cells, your body splits the code randomly. The cell might contain the first copy for hair color, but the 2nd copy for eye color. This happens in both the sperm and eggs.

Then when they combine you have a full set of genetic data again, but it's a random selection of 50% of your DNA.

The fun part is that the code itself determines which copy is dominant. Your offspring has a copy of your eye color data, and your wife’s.

Added to that, the two copies combine to produce the outcome. Depending on the code, the dad's copy might dominate, or the mother's, or both of them together can produce yet a 3rd outcome.
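
A crude sketch of that "random half from each parent, plus dominance" idea, using a made-up single-gene model of eye color (real eye color involves several genes):

    import random
    def gamete(parent):
        # each parent passes on one of their two copies at random
        return random.choice(parent)
    dad = ("B", "b")      # one dominant "brown" allele, one recessive "blue"
    mom = ("b", "b")      # two recessive alleles
    for _ in range(5):
        child = (gamete(dad), gamete(mom))
        print(child, "->", "brown" if "B" in child else "blue")

On average half the children get a "B" from dad and come out brown-eyed, and the trait can skip around generations because the recessive copy rides along silently.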


I mean, that's close, but it's not compiled binaries being interleaved; it's more like merging very similar source code in a pull request, where sometimes an && is added to a conditional statement to express a gene that had skipped a generation.

Both sexes have haploid gametes, which form a zygote when combined. I think this can steer your research when you look into what gametes contain and chromosome combination.


Hawking Radiation. I know just enough to know that the story of "anti-particle of virtual pairs happens to always fall in the hole" is a Lie Told To Children, but the explanations seem to go straight from there to rather dense math and I've never wrapped my head around it.



Actually... yes, which just makes it worse that something about that presenter's style really rubs me the wrong way. Do they have their content in text format somewhere? Anyway, thanks!


I'm sure someone's made transcripts, but I haven't come across them yet.


That's not even a lies-to-children explanation--it's just straight-up wrong, a result of generations of "science journalists" misunderstanding stuff and spreading their misunderstandings around.

Hawking radiation does not discriminate between matter and anti-matter. Either form of any type of particle can be emitted.


Water pressure (like PSI) and how it relates to water flow (like GPM).

It's difficult to relate the two together and even after hearing every heuristic and every cutesy analogy, I still can't quite wrap my head around what happens to one when I increase the other (and so on).


Interesting that you mentioned exosomes, and no one else did. That in itself should give a hint as to why people cannot do a better job of explaining this stuff. It's complicated, not well understood, extremely messy, and does not fit well in a textbook.


I'd love to see shows like 'Nature' and 'Planet Earth' but focused exclusively on single celled organisms; I find that whole world very interesting. Journey to the Microcosmos on youtube is the closest thing I've found.


Cells at work! https://en.m.wikipedia.org/wiki/Cells_at_Work!

It’s an animated series that takes place inside the human body. I’ve been meaning to watch it myself. It’s supposed to be pretty accurate.


Relativity.

I just can't grok it.

I can't understand how time would flow differently depending on your speed.

I don't get why c is a constant no matter the reference frame; for any other object, speed is relative to your reference frame. I just don't see how those two are compatible.


Frame dragging. How does space know a body is rotating if the body is "smooth"?


"Know" in which way, what kind of interaction? What about magnetic field due to poles rotation?


The Coandă effect [1] and how it applies to plane wings and sailboats.

[1] https://en.wikipedia.org/wiki/Coand%C4%83_effect


Why does quantum mechanics need complex numbers ?

Every time I read an introductory QM book / article, the complex numbers just come out of nowhere and no one bothers to explain how that makes any kind of physical sense.


Imaginary numbers, pretty much everywhere in physics, represent an extra dimension. e^ix results in a unit vector spinning around.
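
Concretely, Euler's formula e^(ix) = cos(x) + i*sin(x) means e^(ix) always sits on the unit circle; a quick check:

    import cmath, math
    for k in range(8):
        x = k * math.pi / 4
        z = cmath.exp(1j * x)        # e^(ix) = cos(x) + i*sin(x)
        print(f"x={x:4.2f}  z={z.real:+.2f}{z.imag:+.2f}i  |z|={abs(z):.2f}")

The magnitude stays 1 while the point rotates, which is the "spinning unit vector" picture.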


Thanks for your answer, but that doesn't really cut it for me. For example, if complex numbers are so "natural" at modeling rotations, why don't they show up e.g. in Maxwell's equations ?


The unification of electromagnetism. I know the classical equations well, but I never formally studied physics and have never been able to get my head around Maxwell's equations.


The LIGO detector. I've never heard a logical explanation for it. If your explanation is that gravitational waves stretch and squash spacetime so light takes different amount of time to bounce from an emitter back to a measurer, then you don't have even a slight understanding of how it works. If your explanation doesn't involve higher spatial dimensions (not time) then you don't understand it. If you haven't even considered higher spatial dimensions when explaining LIGO then you shouldn't even try for the incorrect explanation above because you don't have any of the pieces of the puzzle.


> If your explanation is that gravitational waves stretch and squash spacetime so light takes different amount of time to bounce from an emitter back to a measurer, then you don't have even a slight understanding of how it works.

Well, I'll bite. I'm a physicist and I understand LIGO. What's your alternative explanation?


I have no idea how it works, I don't have an explanation, I just know the one spouted by people doesn't make sense


Well, if you're uncomfortable with LIGO, you're in good company. Once a whole decade went by in physics where people couldn't agree if it would work in principle or not. And it is true that a lot of common explanations are bad (e.g. "it's just a ruler" is not complete by itself because "why doesn't the light get stretched too?"). Nonetheless, today we have a variety of independent explanations.

Maybe you'll find this paper helpful: https://aapt.scitation.org/doi/10.1119/1.18578


Would you say it's accurate to say LIGO is a big interferometer measuring changes in lengths of the arms due to gravitational waves, with some extra analysis to figure why it still works even though the light waves get stretched? Or is OP correct that such an explanation means I "don't have even a slight understanding of how it works" and a correct explanation must "involve higher spatial dimensions (not time)?"


I'd say that's a perfectly fine explanation. And in fact the light does get stretched. For sufficiently low frequencies, that's fine because there's enough time for new (unstretched) light to enter and exit the apparatus. At higher frequencies this causes the sensitivity of LIGO to fall.

Part of the popular confusion around how LIGO works is the freedom in coordinates: there are different, perfectly good definitions of space and time you can use, and the explanation sounds different in each one. So people can get them mixed up. For example, my previous paragraph makes sense in "transverse traceless gauge", but not in others.

I'm not sure what GP was referring to with "higher spatial dimensions".


Thank you for this link. I read it once and think I've gotten more of the picture and I'm going to re-read it to see if I can get more. What's baffling to me is everyone who has tried to explain the LIGO detector doesn't even realize this question exists. I've independently thought this question and when people start explaining LIGO to me, and I take the time to spell out the question, they realize they don't understand LIGO either.

If light is emitted at a constant wavelength independent of the stretching of the universe, doesn't that imply light is traveling through a higher spatial dimension, otherwise the emitter itself would be stretched with the universe and we'd never be able to observe differences in the speed of light? If I understand this paper, once light is emitted, it's "stuck" to space and will stretch along with it. But if the emitter wavelength stays constant doesn't that imply it's waving through a higher dimension?


> If light is emitted at a constant wavelength independent of the stretching of the universe, doesn't that imply light is traveling through a higher spatial dimension, otherwise the emitter itself would be stretched with the universe and we'd never be able to observe differences in the speed of light? If I understand this paper, once light is emitted, it's "stuck" to space and will stretch along with it. But if the emitter wavelength stays constant doesn't that imply it's waving through a higher dimension?

I'm not totally sure what you mean by a higher dimension. The properties of the emitter (which is, e.g. a laser cavity) aren't affected by the gravitational wave because the emitter is a rigid body, which doesn't get stretched. (It's the same thing as described here: https://news.ycombinator.com/item?id=22990753 ) So it puts out light of a given frequency.

By contrast, LIGO is not a rigid body, because the mirrors at the ends of the arms hang freely, hence allowing gravitational waves to change the distance between them.

> What's baffling to me is everyone who has tried to explain the LIGO detector doesn't even realize this question exists. I've independently thought this question and when people start explaining LIGO to me, and I take the time to spell out the question, they realize they don't understand LIGO either.

Yup, it generally is the case in physics that over 95% of people who claim they can explain any given thing don't actually understand it! But the professionals are aware. I even know a LIGO guy who goes to popular talks armed with a pile of copies of the paper I linked.


I've read this paper twice now and it's very good. I still don't have a fundamental understanding, I'm having trouble visualizing what happens over time with the stretching and squashing of space vs the light in the tunnel. I've come away with this paper with more questions than answers, but I think all of it is there if I study it hard enough.

- It sounds like (based on the answer you linked) the "expansion" of the universe is a lie in the sense that the fabric of space is not actually expanding, things are just moving farther away from each other via motion. So it's not points on a balloon being blown up (in which case the points themselves are also growing in size), but a force pushing things apart

- If that's true, then I don't understand why the paper talks about light waves expanding with the cosmological expansion, implying that the fabric of space itself is indeed expanding, and talks about how that makes the doppler effect make no sense since the light wavelength expands with the universe. It sounds like there's a fundamental incompatibility with your explanation ("this doesn't expand small objects") and the paper ("the wave itself expands with the expanding space in which it travels, so that its wavelength grows with the cosmic scale factor"), which implies the fabric of space, eg all particles, is expanding

- It sounds like light "sticks" to space as space expands, but new light emitted after an expansion will still have some constant wavelength. So in this way in an expanding universe, light which has a constant wavelength will have further to go between particles, so light will appear to slow down as the universe expands

- If light does stick to space, and the fabric of space is expanding, then I never realized that the doppler effect makes no sense for measuring cosmological expansion, because we wouldn't be able to see it (hinted at in the paper)

- Maybe I don't fully understand why LIGO needs two arms. If you had a clock that could accurately measure light wave crests, could you do it with only one arm? I'll take a leap of faith in believing that a gravitational wave compresses in one dimension and expands another (maybe not if the wave hits it exactly at 45 degrees?). Maybe the two arms are just for convenience, to get the phase difference for free?

- I think what I'm missing still is what is actually being measured and how it happens. Space expands, the wave gets longer in one direction, so it has further to go (only for a fraction of time), and it will take longer for the next crest to get to the detector, (I guess the crest itself is still moving at C? but through a farther distance?) so for a tiny blip of time, there will be a phase difference, not for all the light in the arm but just for the one or few crests that make it back along the further length until the wave resets the overall distance?

- Does space compressing and expanding prove that its compressing and expanding through a higher dimension? Especially if new light emitted is at some constant wavelength independent of the stretching of the spacetime it enters? Does that also imply this constant wavelength is happening independent of our (3d) space stretch, so it's a constant through some higher dimension?


Hi, sorry, but I don't have time to answer all the questions (because each answer would have to be at least 10x longer than the question, so my answer would have to add up to at least 5000 words)! However, these are all good questions and I would urge you to check out a resource like Physics StackExchange (the physicist's equivalent of StackOverflow). Many of these have been asked before, and the new ones could get very informative answers.

> - Maybe I don't fully understand why LIGO needs two arms. If you had a clock that could accurately measure light wave crests, could you do it only with one arm?

Yes, that's absolutely right. The two arms cancel out the frequency fluctuations of the laser itself. If we had a perfectly stable laser, we could make do with just one arm.

Regarding the question about what stretches and what doesn't, I think the general rule is that rigidity prevents "stretching". For example, a hydrogen atom in expanding space would lose momentum over time, because it redshifts, but the atom itself wouldn't get any bigger. There's no need to invoke a higher dimension here, just some things are rigid (like laser cavities and the Earth) and some things aren't (like electromagnetic waves). In fact, in general invoking higher dimensions without a strong reason to is discouraged when discussing general relativity, simply because the math is already very complicated in 4D.


I'd really love an explanation of what specific impulse is. I've looked it up a few times, but the unit of seconds confuses me; what does this represent?


I think the unit is just one of those weird dimensional equivalences that pop up from time to time. E.g. fuel efficiency in cars is measured as distance / volume (of fuel consumed), so it's dimensionally equivalent to 1/area. But we don't use that form because "1 mile per gallon" makes a lot more sense than "42.5 cm^-2".
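
If it helps, here's a small Python sketch of both unit collapses. The mpg numbers are the same as above; the specific-impulse half uses the standard definition (thrust divided by the weight of propellant burned per second), which is what makes the seconds drop out. The engine numbers are made up, purely for illustration:

    # Fuel economy: distance / volume has dimensions of 1/area.
    mile_cm = 160_934.4          # 1 mile in cm
    gallon_cm3 = 3_785.41        # 1 US gallon in cm^3
    mpg_as_inverse_area = mile_cm / gallon_cm3
    print(f"1 mpg = {mpg_as_inverse_area:.1f} cm^-2")   # ~42.5

    # Specific impulse: thrust / (weight of propellant per second) -> seconds.
    g0 = 9.80665                 # standard gravity, m/s^2
    thrust_N = 1_000_000.0       # hypothetical engine thrust
    mass_flow_kg_s = 300.0       # hypothetical propellant mass flow
    isp_s = thrust_N / (mass_flow_kg_s * g0)
    print(f"Isp = {isp_s:.0f} s")  # equivalently, exhaust velocity / g0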


Quantum computers. I still don’t get it. What are they useful for, what are their computing bounds, and why are they a big deal? When will the future actually arrive?


How to design a compact antenna without all that math voodoo.


We aren't short of interesting problems.

To pick a couple of them at random, understanding amyotrophic lateral sclerosis or Alzheimer's would be a terrific start.


Superconductive flux pinning. Specifically, I'm curious what behavior it'd exhibit if used as a core for superconducting inductors.


Quantum physics. I still can’t wrap my mind around the multiverse being a simpler explanation than "the future holds data we don't have in the now".


If you remember your linear algebra basics, you could easily pick up QM --- the parts that deal with finite size systems. Here is an excerpt from my book I posted a while back in another thread, and refreshed today since you asked: https://minireference.com/static/excerpts/noBSLA_quantum_cha...

As for the multiverse, I don't know enough to talk about it. I just know it's one of the possible interpretations of quantum mechanics. Note that the various interpretations are generally considered more philosophy than science, and have no (or very little) practical implications. I would suggest ignoring all analogies and not looking too deeply for interpretations, and instead focus on basic concepts like "What is a quantum state?" and "How do I compute measurement outcomes?" which are super well understood and the same under all interpretations.

You can think of the various interpretations of QM as different software methodologies (scrum, agile, waterfall, etc.): just stuff people like to talk about endlessly, but ultimately irrelevant to the code that will run in the end.


The Oberth Effect. I’ve seen a bunch of awful attempts at the “why” and some “well it’s just in the math, so there” but nothing satisfying.


I answered this here: https://physics.stackexchange.com/questions/287101/where-doe...

The basic answer is that the extra energy that goes to the rocket comes from harvesting the kinetic energy that the fuel itself had by virtue of being in the moving rocket.
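
A quick numeric illustration of why the same burn is worth more at speed (made-up numbers; the point is just that kinetic energy grows with v squared, so the same delta-v adds more energy when v is already large):

    # Same burn (same delta-v, same propellant) applied at two different speeds.
    m = 1000.0        # spacecraft mass after the burn, kg
    dv = 100.0        # delta-v from the burn, m/s

    for v in (1_000.0, 10_000.0):             # speed at which the burn happens
        gain = 0.5 * m * (v + dv)**2 - 0.5 * m * v**2
        print(f"burn at {v:>7.0f} m/s -> kinetic energy gain {gain/1e6:.1f} MJ")

At 1 km/s the burn gains the ship about 105 MJ; at 10 km/s the identical burn gains about 1005 MJ. As the parent says, the bookkeeping balances because the propellant itself carried more kinetic energy in the faster case.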


Why is it that the smartest people in various fields constantly need reference materials, but medical doctors never look anything up?


They look things up constantly, you just don't see it because it's typically very boring and not a good use of time for you or the MD. They get symptoms from you and then will go off to research. Also, a broken arm or strep isn't very difficult to figure out. ~80% of visitations are fairly simple.

The site most MDs use is here: https://www.uptodate.com/home


Thanks for the reference. I'll take a peek.


They look stuff up all the time. Most physicians try to stay on top of developments in their field by reading scientific journals. Most also have access to subscription services which will provide background and lay out current best practice guidelines / treatment recommendations for almost every disease. Another poster mentioned uptodate which is one of the most popular examples. It's not uncommon for a physician to reference this with the patient still in the room.


I've read that med school is really heavy on memorization.

Cynical answer: the culture of medicine was set during a 2000+-year period when physicians harmed more than helped; the way they made a living was by appearing authoritative.


First-principles reasoning, with real-life examples.


I think we should have this thread more often.


- How computers translate bits into electrical signals

- What does it mean the universe is expanding

- Bayesian statistics

- How information is stored in magnetic tapes


First one's easy. You've got a bunch of electronically controlled switches (transistors), and you encode a higher voltage as 1, and something near ground as 0.

You can build logic gates out of 2 or 3 transistors, and combine those logic gates into more complex gates until you've got a computer.

But how does a transistor work? Basically, you've got semiconducting materials of two types (phosphorus- or boron-doped silicon): one has a few spare electrons (N-type), and the other is a few electrons short, i.e. it has "holes" (P-type). If you stick the two types next to each other, the electron-rich (N) side donates electrons to the electron-hungry (P) side, and you get an electric field across the junction pointing from N to P. Now, that alone makes a diode. Already cool, nonlinear electronics. But what if we go N-P-N? Now we've got two electric fields, going in opposite directions. With three leads, you can adjust the strength of those electric fields with one of them, creating a variable resistor: a transistor.
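
To make the "combine gates into more complex gates" step concrete, here's a tiny Python sketch (my own illustration, not something from the comment above) that builds the usual gates out of a single NAND primitive and then wires them into a half-adder:

    # Every gate below is built from NAND alone, to show how a handful of
    # transistor-level switches compose into arithmetic. Real hardware
    # optimizes gate counts differently; this is purely illustrative.
    def NAND(a, b): return 0 if (a and b) else 1

    def NOT(a):     return NAND(a, a)
    def AND(a, b):  return NOT(NAND(a, b))
    def OR(a, b):   return NAND(NOT(a), NOT(b))
    def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

    def half_adder(a, b):
        """Add two 1-bit numbers: returns (sum, carry)."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", half_adder(a, b))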


Electricity.

Electricity is always explained by its effects, but never by its actual nature. I'd like better explanation :-)


How did Mendeleev and his peers know something was a final, indivisible element and not just another molecule?


It was pretty wild wild west, really.

The idea that some compounds didn't just contain some fire was still common when the first list of elements was put together. A big leap forward was realizing air had two principal components, burn-y air and not-burn-y air.

They figured out water wasn't an element when the burn-y air and some mystery gas burned to make it.

Basically, everything was maybe an element until they either broke it into pieces, or made it out of other stuff.


The twin paradox. All explanations seem to be just "something something one twin has to accelerate"


The amount of time you experience is directly related to the length of your worldline--the path you trace out in 4D spacetime as you move through space and simultaneously move forward in time.

It should be easy to see that a non-accelerating object (or person) will trace out a straight worldline. If you ever change your velocity, though, either through smooth acceleration or instantaneous rotation of your velocity vector, you will trace out some non-straight curve in spacetime. If you leave your friend behind and then, at some later time, meet back up again, if you did not undergo exactly the same amount of acceleration throughout your journeys (i.e., because one of you stayed behind and hardly accelerated at all, tracing out a boring straight line path), then you will have different world-line lengths (different "path integrals") between the starting and ending points, and thus will have experienced different amounts of subjective time.

Now, in a Euclidean spacetime, the traveling twin would end up older, because a straight line is the shortest distance between two points. But our spacetime is not Euclidean--it is a Minkowski space, in which acceleration is equivalent to a hyperbolic rather than Euclidean rotation of your velocity vector, so it turns out that straight line is actually the longest distance between any two points, and the twin who leaves and comes back will have a shorter worldline, and thus will have aged less.
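
A minimal numeric sketch of that worldline-length bookkeeping, with made-up numbers (0.8c cruise speed, instantaneous turnaround):

    import math

    # Proper time along a constant-speed leg: dtau = dt * sqrt(1 - v^2/c^2).
    # The traveler cruises at 0.8c, out 5 years and back 5 years as measured
    # by the stay-at-home twin.
    v_over_c = 0.8
    earth_time_years = 10.0

    stay_at_home_tau = earth_time_years                            # straight worldline
    traveler_tau = earth_time_years * math.sqrt(1 - v_over_c**2)   # kinked worldline

    print(f"stay-at-home twin ages {stay_at_home_tau:.1f} years")  # 10.0
    print(f"traveling twin ages    {traveler_tau:.1f} years")      # 6.0

The straight (unaccelerated) worldline has the longer proper time, exactly as described above: 10 years versus 6.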


Can anyone explain disc loading and solidity in a propeller and if there are any equations governing them?


Disc Loading is just the force on the actuator disc (such as a rotor, in which case the force is the weight of the vehicle the rotor holds aloft) divided by the swept area (pi times r squared).

It's used when discussing propulsive efficiency, as it's a proxy measurement for how much "work" each blade is doing. Because propeller/rotor blades are just high-aspect wings, if you have high disc loading your blades are at a high lift coefficient which means they'll be incurring lots of lift-induced drag which increases your power requirements.

Solidity in the same context refers to the fraction of the actuator disc's area that's occupied by actual solid blade. If you have a 4-bladed rotor and you move to a 5-bladed rotor, all else equal, you've increased your solidity.

There are many, many equations, and as with most things in fluid mechanics you can get as deep into the weeds as you want. As a starting point, have a look at the wiki article on Momentum Theory[0]

[0] - https://en.wikipedia.org/wiki/Momentum_theory
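
For a rough feel of the numbers, here's a minimal Python sketch using made-up, helicopter-ish values and assuming rectangular blades (so blade area is just count x chord x radius):

    import math

    weight_N   = 5000 * 9.81   # vehicle weight the rotor must support
    radius_m   = 7.0           # rotor radius
    n_blades   = 4
    chord_m    = 0.4           # blade chord (rectangular-blade assumption)

    disc_area    = math.pi * radius_m**2
    disc_loading = weight_N / disc_area          # N per m^2 of swept area
    blade_area   = n_blades * chord_m * radius_m
    solidity     = blade_area / disc_area        # dimensionless ratio

    print(f"disc loading = {disc_loading:.0f} N/m^2")
    print(f"solidity     = {solidity:.3f}")      # adding a 5th blade raises this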


Thanks na85

Let's take a contrived example. I have a 4 engine high wing airplane with 2 bladed 55 inch props on each engine with 100kg force (1 kg-f is equal to 9.8 N) thrust per engine.

Now, I need to make that a low wing airplane so need to change to 22 inch blades with the same thrust and I don't want to change engines. So I want to add more prop blades. How many more blades do I need to add?


Couple of points:

1) 100kg is a measure of mass, not force. Thrust is a force, not a mass. But let's say you have 100N of thrust per engine.

2) Why would changing from a high-wing to a low wing monoplane require you to add prop blades?


Sorry, edited question


So, your question is a little off, because propeller thrust changes dramatically with airspeed. Propeller aircraft are typically referred to as being "power producers", i.e. we talk about power when we analyze props, not about thrust, because the mechanics and mathematics behind propeller thrust are annoying and complex.

But to get to the heart of your question: You want to reduce the prop diameter (ground clearance perhaps? The engineer in me asks why you don't just make the landing gear taller) and not change the engine. You don't actually have to add blades (maybe). You could also just make the existing blades fatter, or change the airfoil, or increase the propeller RPM. Lots of ways to attack that.

But, playing along that adding blades is the only way:

1) Take your existing high-wing plane and calculate power required for all phases of flight: Take-off, climb, cruise, etc.

2) Take your new low-wing plane with its smaller prop diameter and work backwards to ensure you can actually meet the power requirements to stay aloft through your envelope. Adding blades reduces efficiency because the blades' wakes interfere with each other so you'll have to dig into some experimental data based on the prop of your choice. Much depends on blade pitch and washout.

Very likely you will need to increase the RPM (if your engine can deliver enough power) or change engines to a more powerful model because your props are now less efficient.

Such is the nature of aircraft design - almost nothing can be changed without affecting something else.


Wave-particle duality (of e.g. light)


The usual misunderstanding is that light is sometimes a wave and sometimes a particle, but that is wrong. Light is always a weird thing that you have never seen in macroscopic objects, and you need to use some math to describe it.

In some experiments the weird mathematical thing can be approximated as an almost classical particle. That approximation simplifies the calculation a lot, and sometimes you can get a result intuitively. But it is never exactly true; it is only a very good approximation.

In some experiments the weird mathematical thing can be approximated as an almost classical wave. That approximation simplifies the calculation a lot, and sometimes you can get a result intuitively. But it is never exactly true; it is only a very good approximation.

Try reading everything you have read about the subject again, but every time the text says "here light is a wave/particle", take a red marker and rewrite that sentence as "here light can be approximated as a wave/particle".


That is "less of an issue" when considering https://en.wikipedia.org/wiki/Quantum_field_theory and Second Quantization

Basically: particles are the quanta of waves. So it's not really a duality in the end.


Non-linear optics explained from a quantum standpoint (the classical explanation is quite clear).


Kalman filters. The explanations always start easy and then get too confusing.


Einstein's (summarized) quote, "Time is a persistent illusion"


- Hegel. I haven't found any resource that can explain it clearly.


I’d suggest Kojeve’s Introduction to the Reading of Hegel (a classic expository work on the Phenomenology of Spirit)


How depression, autism and gender dysphoria work on a molecular level.


As for autism, it's been linked to a kind of general reduction of gene expression: https://www.researchgate.net/publication/328734204_A_theory_...


All the credible theories of why there are 3 generations of particles.


How EM radiation propagates through the air, and what an EM field is.


Why is there no evidence of black matter annihilation in astronomy?


You mean dark matter annihilation? There are some hints: an excess of gamma rays detected by Fermi-LAT, an excess of X-rays detected by Chandra, excess ionization detected by EDGES, and excess positrons detected by AMS-02. However, all of these anomalies are contentious and could well be explained by systematic effects or unknown astrophysical sources.

If none of these hold up, it could be that dark matter just doesn't annihilate. That's not weird or anything -- I mean, your room is full of matter right now, and it's not annihilating itself either.


Eddy currents. Why does a magnet move slowly down a copper tube?


A changing magnetic field induces currents in the conductive tube; those currents create an opposing magnetic field, which slows the moving magnet.


The kinetic energy turns into heat due to resistance to the induced current?


Agile, an empirical survey of whatever the hell it actually is.


Agile is a short-lived V-cycle, repeated forever.

Legacy V-cycles (needs - spec - code - test - integration - product) were such that everything was written down and planned in advance for months or years. So if the customer had made an error, or his/her needs had changed... you were basically screwed.

Agile advocates short V-cycles with frequent user feedback. But it's still a V-cycle:

- PO speaks to the customer = get needs

- PO writes tickets, UX designs something = specification

- Then it follows the classical cycle: develop, test, integrate, deliver.

What remains around agile (ceremonies & co.) feels much more like bullshit to me, and people follow it religiously without understanding the core idea of agile, because they think "V-cycle" is an insult.


Wavelets: the explanations usually given are hard to understand.


Amperes, Voltage, Watts, without using a liquid analogy.


Ionization. The resources I can find on it are mostly conspiracy-theory types or the same basics explained over and over again.


Axial flux motors


Gauge theory.


Quantum stuff


Why a flat earth is impossible.


If you could make a disc shaped planet appear with a magic wand, and imparted it an initial angular momentum, over time the gravity differential between the middle and the edge would either tear your flat planet apart, or shape it into a blob, and eventually a sphere.


Gravity. Gather enough stuff, and you end up with balls (unless you spin things really fast).


You'd see Mount Everest.


How an airplane wing really works.



thanks!



thanks!


Reactance.


If you have a specific question about electrical reactance, I can help. If not electrical, I probably can't help since I don't know what you are referring to.


why does blender at times become slower for no fucking apparent reason


Why the earth is a sphere.


Systems tend towards low-energy states (because small perturbations keep things from getting stuck at local minima or saddle points). The lowest potential-energy configuration for a bunch of particles pulled together by gravity is a sphere, because every point is as close to the middle as possible.


Human consciousness.


1. Carbon dating. Sure, I get that carbon decays over time and this changes the proportion of isotopes. But why does this give you any information? That carbon didn’t come into existence just to be in that bone, it was made in the sun billions of years before that, so why does the age of the carbon tell us anything about organic matter? The key fact, which I think is not emphasized enough, is that the ratio of isotopes in atmospheric carbon is kept at a constant equilibrium by cosmic rays. So you can use carbon dating to tell roughly when the carbon was pulled out of the atmosphere. Without this additional fact, the concept of carbon dating makes absolutely no sense.

2. The tides. The explanation I was given is roughly something like “the tides happen because the moon’s gravity pulls the water toward it, so you have high tide facing the moon. There’s also a high tide on the opposite side of the earth, for subtle reasons that are too complicated for you to understand right now and I don’t have time to get into that.”

The first problem with this explanation is this: gravitational acceleration affects everything equally right? So it’s not just pulling on the water, it’s also pulling on the earth. So why does the water pull away from the earth? Shouldn’t everything be accelerating at the same rate and staying in the same relative positions?

The second problem is that, when viewed correctly, the explanation for why there is a high tide on the opposite side of the earth from the moon is just as simple as the explanation for why there is a high tide on the side facing the moon.

The resolution to both these problem is this: tides aren’t actually caused by the pull of the moon’s gravity per se, but are actually caused by the difference in the strength of the pull of the moon’s gravity between near and far sides of the earth, since the strength of the moon’s gravitational pull decreases with distance from the moon. The pull on the near water is stronger than the average pull on the earth, which again is stronger than the pull on the far water. So everything becomes stretched out along the earth-moon axis.
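
If it helps to see the sizes involved, here's a rough Python sketch of that difference, using textbook values for the Moon and Earth:

    # Differential (tidal) acceleration from the Moon across the Earth.
    # Only the *differences* relative to the center matter for the tides.
    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_moon = 7.35e22       # kg
    d = 3.84e8             # Earth-Moon distance, m
    R = 6.37e6             # Earth radius, m

    a_near   = G * M_moon / (d - R)**2
    a_center = G * M_moon / d**2
    a_far    = G * M_moon / (d + R)**2

    print(f"near side  {a_near - a_center:+.2e} m/s^2 relative to the center")
    print(f"far side   {a_far  - a_center:+.2e} m/s^2 relative to the center")
    # Both differences come out around 1e-6 m/s^2, nearly equal and opposite:
    # one bulge toward the Moon, one away from it.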

3. This one isn’t so much a problem with the explanation itself, more about how it’s framed. I remember hearing about why the sky is blue, and wondering, “ok, more blue light bounces off it than other colours. But isn’t that essentially the same reason why any other blue thing is blue? Why are we making such a big fuss about the sky in particular? ” A much superior motivating question is “why is the sky blue during midday, but red at sunrise / sunset”? I was relieved when I saw this XKCD that I’m not the only one who felt this way:

https://xkcd.com/1818/


Carbon dating works because the level of carbon 14 in an organism is relatively constant while it is alive. This is because carbon 14 is created in the atmosphere not the sun. https://en.m.wikipedia.org/wiki/Radiocarbon_dating
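
And once that roughly constant starting ratio is granted, the dating itself is just decay arithmetic. A minimal sketch, using the standard 5,730-year half-life:

    import math

    # After the organism dies, no new C14 comes in, so the fraction left
    # follows N/N0 = (1/2)**(t / T_half). Inverting gives the age.
    T_HALF = 5730.0   # C14 half-life in years

    def age_from_fraction(fraction_remaining):
        return T_HALF * math.log(1.0 / fraction_remaining) / math.log(2.0)

    # A sample with 25% of the atmospheric C14 ratio is ~2 half-lives old.
    print(f"{age_from_fraction(0.25):.0f} years")   # ~11460
    print(f"{age_from_fraction(0.50):.0f} years")   # ~5730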


Carbon dating can't say anything about dinosaurs (in the colloquial definition) because dinosaurs went extinct >>50k years ago and CD doesn't work after such long time spans.


Ok fair enough. I admit I regret this comment and think I posted it a bit hastily, I tried to delete it a few minutes ago but it appears to be too late now. I edited out the part about dinosaurs though.


1) As another comment said, C14 is produced in the atmosphere at an almost constant rate, and it decays at a constant rate. So as an approximation you can suppose that the concentration of C14 in the air is constant. When the C14 is inside a dead body it is no longer replenished, so the concentration decreases slowly. [Note that the concentration of C14 in the air actually does change over time. You can see in the Wikipedia article a table used to correct for the differences.]

For recent times, you can also compare the C14 dates with other methods, like counting tree rings or the date of a total eclipse, and check the calibration.

2) You are almost right. The tides are not produced by the gravity of the Moon per se, but by the difference between the Moon's gravity on the nearby water and its average pull on the Earth.

You forgot to include the centrifugal force [when you are in the non-inertial frame that rotates with the Earth-Moon system https://xkcd.com/123/ ]. The centrifugal force is bigger in the water that is farthest from the Moon, and again the difference creates the other tide.

3) The sky is blue because the individual molecules in the air scatter the blue/violet colors more than the other colors. There are many ways to produce colors; in this case the light is scattered by the whole molecule.

A different way to produce blue is to use a CD to produce a rainbow and then use a slit to block the other colors. Some birds and butterflies use a somewhat similar method. [Not very similar, but closer to the CD method than to the air method.]

The blue in cloth dye uses yet another method: you make a long chain of conjugated chemical bonds C-C=C-C=C-C=C-C, and pick the length and atoms so the electrons absorb the colors you don't like and turn that energy into heat.

I'm probably forgetting a few more methods (there are many of them), so it's interesting to understand which of them makes the sky blue.
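
For the molecular case, the quantitative reason blue wins is Rayleigh scattering, which goes as 1/wavelength^4. A quick sketch with typical wavelengths:

    # Rayleigh scattering intensity goes as 1/wavelength^4, so shorter
    # wavelengths scatter much more strongly off air molecules.
    blue_nm = 450.0
    red_nm  = 650.0

    ratio = (red_nm / blue_nm) ** 4
    print(f"blue light is scattered ~{ratio:.1f}x more than red")   # ~4.4x

That same preference also answers the sunrise/sunset version of the question: over a long, grazing path through the atmosphere most of the blue has already been scattered away, leaving the reds.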

*) These are good questions. My explanations are not 100% complete (and probably not 100% accurate) but I hope you can fix the holes.


Coroutines in C++.


Coroutines are most easily understood as a way to write a state machine in a way that looks like a function. I.e., it's just a notational trick to make one function do different things according to when it's called.

To see it, imagine you have a struct with a data member for each local variable of your function, and replace your function with a member function that has no local variables, but uses "this" to get at what was local data.

Add one more data member, a number that is set differently right before each place the function returns.

Finally, insert some code at the start of the function that, according to the number, jumps to just after the last return statement executed.

Then, each time you call the function, what happens depends on what happened last time.

There are more details, but that is the gist.

You can write that yourself in C++98, with the body of the function inside a switch statement. Getting it past code review would be the real challenge.


I think I get coroutines in theory. It's the C++ specific complexity that is the hardest barrier. Thanks for the explanation though


Yes, a lot of extra junk is needed to make them actually work, but the extra junk goes a long way toward obscuring what you actually need to know.

Ultimately, though, you are right that you have to understand it all, once, even if you can't remember it all a month later. The explanations I find online are not good at presenting just the details you need when you need them, and building up to the full picture.


Try to understand them in Simula first, since that's where C++ drew its original inspiration from.


The Coriolis Force


Drag a marker with a constant radial velocity (from your perspective) across a spinning disk. It'll trace out a curved line, so from the perspective of an observer on the disk, an acceleration must have been present.

The full 3d Coriolis force is more complicated than that (eg accounting for the Eötvös effect): The spinning disk example only gets you to the -2vω term (where v denotes radial velocity and ω angular velocity).
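
Here's a small Python sketch of that spinning-disk picture: a marker moving in a straight radial line in the lab frame, when re-expressed in the rotating frame, traces a curve (the numbers are arbitrary):

    import math

    # A marker moving in a straight radial line in the inertial (lab) frame,
    # re-expressed in the frame of a disk spinning at angular rate omega.
    # In the rotating frame the path curves: the Coriolis deflection.
    omega = 1.0        # rad/s, disk spin rate
    v = 1.0            # m/s, radial speed of the marker

    for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
        x_lab, y_lab = v * t, 0.0                    # straight line in lab frame
        c, s = math.cos(-omega * t), math.sin(-omega * t)
        x_rot = c * x_lab - s * y_lab                # rotate into the disk frame
        y_rot = s * x_lab + c * y_lab
        print(f"t={t:.1f}s  disk-frame position ({x_rot:+.2f}, {y_rot:+.2f})")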


Cellular replication and virus code insertion.


Orbital mechanics


Have you heard of / played the game Kerbal Space Program?

I believe folks at NASA have even said it helped cement their mathematical knowledge with a better intuitive understanding.


I've been meaning to do a bit of blogging about this since my thesis work was on gravitational dynamics. Is there anything in particular you would be interested in seeing explained?



Obligatory video, even if not very detailed https://www.youtube.com/watch?v=tJiAkBxuqfs


Fall and miss the ground forever.


Escape velocity


My dad was an aerospace engineer, and I always had trouble with this. It is the speed you need to be going at to not be pulled back by gravity. As a kid, I always thought: well, big deal, if I am in a plane and just keep going up, eventually I will be out. The key I always missed was that the velocity in question is with no added power. So when the engine cuts off, what is your velocity? Is it enough to escape? Or do you need more power? Of course there is a good chance I still do not understand it.
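
For the record, the number itself falls out of a one-line energy balance: kinetic energy at launch exactly cancels gravitational binding energy. A minimal sketch with textbook values for Earth:

    import math

    # Escape velocity with no further thrust: v_esc = sqrt(2 G M / r).
    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_earth = 5.972e24     # kg
    R_earth = 6.371e6      # m (escaping from the surface)

    v_esc = math.sqrt(2 * G * M_earth / R_earth)
    print(f"Earth escape velocity ~ {v_esc/1000:.1f} km/s")   # ~11.2 km/s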


Measure theory.


This is a very good playlist: https://www.youtube.com/playlist?list=PLBh2i93oe2qvMVqAzsX1K...

Starts from basic concepts and builds up a nice overview.


Consciousness.


Yoneda lemma


As with most things in category theory, the way to really understand the Yoneda lemma is to sit down and prove it yourself. Break the statement down into the underlying definitions, draw lots of diagrams, and convince yourself that it’s true.

The other thing you can do is think about what it means for particular types of categories. For a posetal category, it says that an element of a poset is uniquely determined by the set of all elements that come before it in the ordering. For a group, it says that every element is uniquely determined by its action on the group. (This is basically Cayley’s theorem.) See this MSE post for more intuition: https://math.stackexchange.com/questions/37165/can-someone-e...


Women!


Math!


electromagnetic waves


Haskell's Monads


coriolis force


string theory


why does blender become slower at times for no apparent reason


Reminds me of a story:

A guy was walking along the beach and found a lamp. Of course he rubs the lamp, and sure enough a genie appears.

Genie: master of the lamp I can grant you a wish, you may wish for anything.

Guy: Wait, isn't it supposed to be 3 wishes?

Genie: One or nothing, and do not wish for more wishes.

Guy thinks for a while ....

Guy: You know, I have pretty much everything I need. But I have always wanted to travel to Hawaii. But I get sea sick and am afraid to fly.

Genie: Very well I will take you there.

Guy: No no, if you take me there, I won't be able to come back. And what about next year? Since I only get one wish, I want a bridge built to Hawaii.

Genie: That does not make any sense. Please make a different wish. One that does not involve so much construction.

Guy: Hmmm... you know what, can you explain women?

Genie: So do you want the bridge to be a suspension bridge or truss? and how many lanes ....



