Understanding science funding in tech, 2011-2021 (nadia.xyz)
32 points by luu on March 5, 2022 | 8 comments



Great read! Another phenomenon that's not explicitly mentioned, but is also important: the industry lab. DeepMind, Google Brain, FAIR, IBM Research, MSR all have big industry labs that mostly focus on CS/ML research, but the small percentage of that overall effort that spills over into the sciences is still significant. AlphaFold is well-known but there are many other contributions being made.


Agreed. DeepMind's AlphaFold (graph neural nets that solved protein folding according to the community's own definition) is amazing...

...But as a former academic, it's also a huge warning bell about the current effective capability gap between academia and the big ~5 tech co's. The field is lucky Google liked the problem, but there isn't a satisfying reason academics didn't do it first and aren't now ahead. I keep coming back to problems at the structural layer: low $ vs tech (brain drain), localized ownership incentives (small efforts), the broken granting treadmill, etc.

The US gov spends so much here, and can do more, but the core seems broken. The crypto discussion in the article is fun, but the crypto aspect felt like a bit of a head fake: it showed more that many scientists want a rethink, and while DAOs aren't necessarily a good idea long-term (and are giving little money today), they are air cover for such a rethink.


Speaking as someone who's done the whole tenure thing etc., the gap between what has happened in the nonacademic and academic sectors in the last decade or so has been mind-boggling to me at times, at least in my field. It's not even just the big 5 tech corporations. Trends in tech that are completely and overwhelmingly obvious, even normal in daily life, get treated like some kind of bleeding-edge "newfangled" thing. As a result, those in academia end up looking like luddites, even though it should be the other way around. Years after you suggest something, once it's well-established elsewhere, someone finally gets a grant for work that should have been done years earlier, and it's praised as cutting edge.

I have so much to say about this. Rigor is absolutely essential but something is seriously broken about incentives in publicly funded research.


Yes!

To your point, I've been watching the slow uptake of basic ideas in science departments. In this case, neural networks are still largely looked at either with suspicion (wrong principles) or as magic (too difficult) by top non-CS departments at top institutions. There are exceptions, but they're the ones that prove the rule.

The big ~5 (Apple, Microsoft, Google, Facebook, maybe Amazon), and then an even bigger universe of Chinese companies + consulting companies, are interesting to me because they solve the scaling of these ideas. They've been putting money into projects that academic teams largely aren't, collecting global-scale data that these teams aren't, and running global-scale real-world experiments that academics aren't.

Ex: For something like healthcare, there isn't a good reason for Flatiron Health to be a leader while, to almost a rounding error, most real-world clinical data researchers are data-poor & AI-poor. While scientists are squabbling over open access for papers, imagine if the NIH+HHS+CDC+... required all publicly funded genomic + health data to be unified into a national database and the DOE provided AI compute infra for working with it. Instead, we have an area PI at every research university getting grants to make believe that their regional hospital network's tiny genome/EHR database will be the one that becomes that.


Unifying clinical and biological data is an incredibly challenging task for a variety of reasons. The private sector is much better suited to tackle such problems because of the engineering and coordination required.


I'd have agreed with you 5-10 years ago.

- I'm interacting with and hearing from a bunch of regional tech companies, hospital networks, and, worse, consultant shops doing what you're saying. For the most part, they're not that special. There are variants with unique twists (edge compute/ai/crypto/reselling/graph/..), but ultimately not that many, and they generally prioritize the same obvious data sources (Epic, ..). Genome data isn't as standardized/centralized as top EHRs afaict, but even there, we're seeing common data formats crop up & get popular. Likewise, every hospital network IT group is independently having to reinvent the wheel on things like access for their researchers.

- I'm not advocating they do all the things. Industry has value. (Ex: Imagine Trifacta for everyone!) But this is a commons issue, and when industry owns it, that's a problem (ex: HHS groups ensnared by Palantir, or today's balkanized approach to real-world data). But for the meat and potatoes of EHRs/labs/genomes, as part of the continued digital transformation these systems are undergoing, targeting baseline standards & timely data submission isn't that crazy. Likewise, the VA and other ~federated national groups already do have (bad) centralized database + compute facilities. I hear about their problems every ~night ;-) But as a baseline, a lot of what is a scramble today would become easy.

Getting back to the main point: the top 5 tech co's are quite used to working at nation/global scale on data problems, including with PII & AI, so there's technical precedent. Smaller nations already centralize this stuff, so it's not even without precedent in terms of government. The US gov is intentionally spending much more money to do it much smaller & weaker.


As someone involved in a lot of neuroscience initiatives, I think the integration of science funding in technology is really just so exciting. I cannot wait to see what the field brings in the next 50 years (if we all make it that long ;-) )


I was happy to see this, but it was difficult to discern what the underlying arguments were, and I think the author maybe overreaches a bit. It took me reading through a lot of what they wrote to understand that they seem to be arguing for more researcher-based (as opposed to project-based) funding. And although I completely agree, I think a lot of what's cited is a little misleading or one-sided. I also think that, in general, focusing too much on any single funding mechanism could backfire.

In one of the pieces, for example, they discuss the frequently cited Dutch (?) study showing that people just above and below the cutoff in grant scores (presumably the same in actual proposal quality) end up differing in subsequent funding and probability of attaining full professorship. They argue that citation rates don't differ, though, claiming this is a more important metric and using it as an example of, again, how the individual researcher is paramount.

However, doesn't that pattern just sort of point out problems with the funding effects? That is, it seems to be saying that those with slightly lower scores were more likely to give up trying at all, and were less likely to obtain material resources and recognition, even though they were just as impactful scientifically speaking, at least as far as citations are concerned. It's also the case that this grant funding mechanism is just one way in which structural factors can affect the career trajectories of talented scientists -- there are many others.
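To make that logic concrete, here's a minimal toy simulation of the near-cutoff comparison (all numbers and parameters are invented for illustration, not taken from the study): both sides of the cutoff draw from the same talent distribution, citations depend only on talent, and only the funded side gets the career boost. It reproduces the reported pattern of similar citations but diverging professorship rates.

    import random

    random.seed(0)
    N = 10_000  # near-cutoff applicants per side (toy number)

    def simulate(funded: bool) -> tuple[float, float]:
        """Toy model: citations track talent; careers track funding."""
        profs, cites = 0, 0.0
        for _ in range(N):
            talent = random.gauss(0, 1)          # identical on both sides of the cutoff
            cites += max(0.0, 50 + 20 * talent)  # citations depend on talent only
            boost = 0.15 if funded else 0.0      # material resources + recognition
            p_prof = min(max(0.30 + boost + 0.05 * talent, 0.0), 1.0)
            profs += random.random() < p_prof
        return profs / N, cites / N

    prof_above, cites_above = simulate(funded=True)   # scored just above the cutoff
    prof_below, cites_below = simulate(funded=False)  # scored just below
    print(f"full-prof rate: {prof_above:.2f} vs {prof_below:.2f}")    # diverges
    print(f"mean citations: {cites_above:.0f} vs {cites_below:.0f}")  # ~equal

If the two sides differ only in the funding itself, equal citations plus unequal careers is exactly what the mechanism (not talent) would produce.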

I've grown very suspicious of many of the sorts of studies the author cites, mostly because of how many of the problems I've seen in biomedical academia involve people gaming the system subtly but pervasively. That is, many of the hypothetical patterns they point to could just as easily be explained by any number of causal stories once you start to include corruption and deceit in your models. The models they discuss are largely predicated on different hypotheses about talent, and ignore dishonest manipulation of the system's metrics.

The whole "funding of individual investigators" model seems promising but also should be setting off alarm bells everywhere. Isn't this the point of tenure? If we are talking about funding models like this seriously, what does that say about the actual state of tenured positions and what's going on? It has this comical quality, like "if only there were some system of funding people without strings attached so they could pursue their own intellectual interests..."

You could make the argument that there aren't enough tenured positions, which is fair. But what about how those tenured positions are funded? They are now seen as largely dependent on obtaining funding through typical means, which means you're selecting based on the ability to bring in income through traditional grant projects. But then why not just do that?

It's obvious something is broken.



