The functional predictions related to "non-coding" variants are big here. Non-coding regions, referred to as the dark genome, produce regulatory non-coding RNAs that determine the level of gene expression in a given cell type. There are more regulatory RNAs than there are genes. Something like 75% of expression by volume is ncRNA.
It's possible that the "functional" aspect of non-coding RNA exists on a time scale much larger than what we can assay in a lab. The sort of "junk DNA/RNA" hypothesis: the ncRNA part of the genome is material that increases fitness during relatively rare events where it's repurposed into something else.
On a time frame of millions or billions of years, the organisms with the flexibility of ncRNA would have an advantage, but this is extremely hard to figure out from a "single point in time" viewpoint.
Anyway, that was the basic lesson I took from studying non-coding RNA 10 years ago. Projects like ENCODE definitely helped, but they really just exposed transcription of elements that are noisy, without providing the evidence that any of it is actually "functional". Therefore, I'm skeptical that more of the same approach will be helpful, but I'd be pleasantly surprised if wrong.
You know the corporate screws are coming down hard, when the model (which can be run off a single A100) doesn't get a code release or a weight release, but instead sits behind an API, and the authors say fuck it and copy-paste the entirety of the model code in pseudocode on page 31 of the white paper.
Please Google/Demis/Sergey, just release the darn weights. This thing ain't gonna be curing cancer sitting behind an API, and it's not gonna generate that much GCloud revenue when the model is this tiny.
This is a strange take, because this is consistent with what Google has been doing for a decade with AI. AlphaGo never had the weights released. Nor has any successor (not MuZero, the StarCraft one, the protein-folding AlphaFold, nor any other that could reasonably be claimed to be in the series, afaik).
You can state as a philosophical ideal that you prefer open source or open weights, but that's not something DeepMind has ever prioritized.
I think it's worth discussing:
* What are the advantages or disadvantages of bestowing a select few with access?
* What about having an API that can be called by anyone (although they may ban you)?
* Vs finally releasing the weights
But I think "behind locked down API where they can monitor usage" makes sense from many perspectives. It gives them more insight into how people use it (are there things people want to do that it fails at?), and it potentially gives them additional training data
All of what you said makes sense from the perspective of a product manager working for a for-profit company trying to maximize profit either today or eventually.
But the submission blog post writes:
> To advance scientific research, we’re making AlphaGenome available in preview via our AlphaGenome API for non-commercial research, and planning to release the model in the future. We believe AlphaGenome can be a valuable resource for the scientific community, helping scientists better understand genome function, disease biology, and ultimately, drive new biological discoveries and the development of new treatments.
And at that point, they're painting this release as something they did in order to "advance scientific research" and because they believe "AlphaGenome can be a valuable resource".
So now they're at a crossroads: is this release actually for advancing scientific research, and if so, why aren't they doing it in a way that actually maximizes the advancement of scientific research? I think that's the point of the parent's comment.
Even the most basic principle for doing research, being able to reproduce something, goes out the window when you put it behind an API, so personally I doubt their ultimate goal here is to serve the scientific community.
Edit: Reading further comments, it seems like they've at least claimed they want to do a model+weights release of this (from the paper: "The model source code and weights will also be provided upon final publication."), so it remains to be seen whether they'll go through with it or not.
To be clear: I agree that opening up model + weights makes it possible for third parties to distill or fine tune
If you look at the frenzy of activity that happened after midjourney became accessible, that was awesome for everyone. Midjourney probably got help running their model efficiently and a ton of progress was quickly made.
I'm pretty sympathetic to a company doing a windowing strategy: prepare the API as a sort of beta release timed with the announcement. Spend some time cleaning up the code for public release (at Google this means ripping out internal dependencies that aren't open source), and then release a reference inference implementation along with the weights.
That's pretty reasonable. I wanted to push back on this idea that "the reason Google isn't dropping model + weights is because the corporate screws are coming down hard"
Google isn't waiting to release the weights so that they can profit from this. It's essentially the first step in the process, and serving via API gives them valuable usage data that they might not get if/when it's open sourced.
I feel like this take is missing a sense of balance. You can have a goal of advancing scientific research while also still making money. You don’t have to choose one extreme end of the scale.
I’d argue that the product providing some monetary value for Google will help ensure that this team doesn’t get moved to some more profitable project instead. That way they can continue improving this tool and make more tools like it in the future.
I think that from a research/academic view of the landscape, building off a mutable API is much less preferred than building on a set of open weights. It would be even better if we had the training data, along with all code and open weights. However, I would take open weights over almost anything else in the current landscape.
If it came to light that somebody found a way to use this API in a way that is harmful to society would you be happy that Google could revoke access? Or unhappy?
This is a real tradeoff of freedom vs _. I agree that I'm not always a fan of Google being the one in control, but I'm much happier that they are even releasing an API. That's not something they did for Go! (Of course there was a book written, so someone got access.)
If it came to light that somebody found a way to use this API in a way that is beneficial to society, would you be happy that Google could revoke access? Or unhappy?
For AlphaFold3 (vs. AlphaFold2, which was 100% public), they released the weights if you are affiliated with an academic institution. I hope they do the same with AlphaGenome. I don't even care about the commercial reasons or licensing fees; it's more the practical reason that every research institution has an HPC cluster already configured to run deep learning workloads, which can run these jobs faster than the Google API.
And if they don't, I'm not sure how this will gain adoption. There are tons of well-maintained and established workflows out there in the cloud and on-prem that do all of these things AlphaGenome claims to do very well - many that Google promotes on their own platform (e.g., GATK on GCP).
(People in tech assume people in science are like people in tech and just jump on the latest fads from BigTech marketing - when it's quite the opposite: it's all about whether your results/methods will please the reviewers in your niche community.)
> Once the model is fully released, scientists will be able to adapt and fine-tune it on their own datasets to better tackle their unique research questions.
This is in the press release, so they are going to release the weights.
EDIT: I should have read the paper more thoroughly and been more kind.
On page 59, they mention there will be a source and code release.
Thank you Deepmind :)
I can guarantee you that some smart person actually thinks that the opportunity size is measured as a fraction of the pharmaceutical industry market cap.
Doesn't matter if that person can't convince the bean counters and lawyers that it is in the company's long-term interest to release the "proprietary" data.
I wish there were some breakthrough in cell simulation that would allow us to create simulations that are similarly useful to molecular dynamics but feasible on modern supercomputers. Not being able to see what's happening inside cells seems like the main blocker to biological research.
Molecular dynamics describes very short, very small dynamics, on the scale of nanoseconds and angstroms (0.1 nm).
What you’re describing is more like whole cell simulation. Whole cells are thousands of times larger than a protein and cellular processes can take days to finish. Cells contain millions of individual proteins.
So that means that we just can’t simulate all the individual proteins, it’s way too costly and might permanently remain that way.
The problem is that biology is insanely tightly coupled across scales. Cancer is the prototypical example. A single mutated letter in DNA in a single cell can cause a tumor that kills a blue whale. And it works the other way too. Big changes like changing your diet gets funneled down to epigenetic molecular changes to your DNA.
Basically, we have to at least consider molecular detail when simulating things as large as a whole cell. With machine learning tools and enough data we can learn some common patterns, but I think both physical and machine learned models are always going to smooth over interesting emergent behavior.
Also you’re absolutely correct about not being able to “see” inside cells. But, the models can only really see as far as the data lets them. So better microscopes and sequencing methods are going to drive better models as much as (or more than) better algorithms or more GPUs.
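A rough back-of-envelope, in the spirit of the scale argument above, makes the gap concrete. Every number below is an assumed order of magnitude (typical MD timestep, a hand-wavy per-GPU throughput), not anything measured in this thread:

```python
# Why all-atom MD of a whole cell over "days" is out of reach (order-of-magnitude guesses).
atoms_per_protein = 1e4          # a mid-sized protein is ~10^4 atoms
proteins_per_cell = 1e6          # "millions of individual proteins" per cell
solvent_overhead  = 10           # explicit water usually dominates the atom count
atoms_per_cell = atoms_per_protein * proteins_per_cell * solvent_overhead   # ~10^11 atoms

timestep_s     = 2e-15           # typical MD integration step, ~2 fs
target_seconds = 24 * 3600       # one day of cell-scale dynamics
steps_needed   = target_seconds / timestep_s                                # ~4e19 steps

throughput = 1e8                 # assumed atom-steps per second on one GPU (very rough)
gpu_years = atoms_per_cell * steps_needed / throughput / (3600 * 24 * 365)
print(f"~{atoms_per_cell:.0e} atoms x ~{steps_needed:.0e} steps -> ~{gpu_years:.0e} GPU-years")
```

Even if every constant here is off by a few orders of magnitude, the conclusion above ("way too costly and might permanently remain that way") doesn't change.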
Simulating the real world at increasingly accurate scales is not that useful, because in biology - more than any other field - our assumptions are incorrect/flawed most of the time. The most useful thing simulations allow us to do is directly test those assumptions and in these cases, the simpler the model the better. Jeremy Gunawardena wrote a great piece on this: https://bmcbiol.biomedcentral.com/articles/10.1186/1741-7007...
STATE is not a simulation. It's a trained graphical model that does property prediction as a result of a perturbation. There is no physical model of a cell.
Personally, I think Arc's approach is more likely to produce usable scientific results in a reasonable amount of time. You would have to make a very coarse model of the cell to get any reasonable amount of sampling, and you would probably spend huge amounts of time computing things which are not relevant to the properties you care about. An embedding and graphical model seems well-suited to problems like this, as long as the underlying data is representative and comprehensive.
In my field, we're always wanting to see what will happen when DNA is changed in a human pancreatic beta cell. We kind of have a protocol for producing things that look like human pancreatic beta cells from human stem cells, but we're not really sure that they are really going to behave like real human pancreatic beta cells for any particular DNA change, and we have examples of cases where they definitely do not behave the same.
So very similar approach to Conformer - convolution head for downsampling and transformer for time dependencies. Hmm, surprising that this idea works across application domains.
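For readers who haven't seen the pattern, here is a minimal PyTorch sketch of the general recipe the comment describes: a convolutional head that downsamples the raw sequence, followed by a transformer over the shortened sequence. It is a toy illustration of the idea, not AlphaGenome's or Conformer's actual architecture, and every layer size is made up:

```python
import torch
import torch.nn as nn

class ConvDownsampleTransformer(nn.Module):
    """Toy sketch: conv stack shortens the sequence, transformer models long-range structure."""
    def __init__(self, in_channels=4, d_model=128, n_layers=2, n_heads=4, pool=8):
        super().__init__()
        # Convolutional "head": local pattern detection plus downsampling by `pool`.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=15, padding=7),
            nn.GELU(),
            nn.MaxPool1d(pool),
        )
        # Transformer trunk over the shortened sequence.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):                     # x: (batch, length, 4) one-hot DNA
        h = self.conv(x.transpose(1, 2))      # (batch, d_model, length // pool)
        h = h.transpose(1, 2)                 # (batch, length // pool, d_model)
        return self.trunk(h)                  # contextual embedding per downsampled bin

# Usage: a 2,048 bp one-hot sequence becomes 256 bins of 128-dim embeddings.
seq = torch.randn(1, 2048, 4)
print(ConvDownsampleTransformer()(seq).shape)   # torch.Size([1, 256, 128])
```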
I don't think DM is the only lab doing high-impact AI applications research, but they really seem to punch above their weight in it. Why is that or is it just that they have better technical marketing for their work?
Agreed, there’s been some interesting developments in this space recently (e.g. AgroNT). Very excited for it, particularly as genome sequencing gets cheaper and cheaper!
I’d pitch this paper as a very solid demonstration of the approach, and I'm sure it will lead to some pretty rapid developments (similar to what RoseTTAFold/AlphaFold did).
They have been at it for a long time and have a lot of resources courtesy of Google. Asking Perplexity, it says the AlphaFold 2 database took "several million GPU hours".
DeepMind/Google does a lot more than the other places that most HN readers would think about first (Amazon, Meta, etc). But there is a lot of excellent work with equal ambition and scale happening in pharma and biotech, that is less visible to the average HN reader. There is also excellent work happening in academic science as well (frequently as a collaboration with industry for compute). NVIDIA partners with whoever they can to get you committed to their tech stack.
For instance, Evo2 by the Arc Institute is a DNA Foundation Model that can do some really remarkable things to understand/interpret/design DNA sequences, and there are now multiple open weight models for working with biomolecules at a structural level that are equivalent to AlphaFold 3.
Money and resources are only a partial explanation. There’s some equally and more valuable companies that aren’t having nearly as much success in applied AI.
There are more valuable companies, but there aren't companies with more resources. If Apple wanted to turn all their cash pile into something like Google's infrastructure, it would still take years.
"To ensure consistent data interpretation and enable robust aggregation across experiments, metadata were standardized using established ontologies."
Can't emphasize enough how much human data curation it takes to make things work with DNA; even from day one, alignment models were driven by biological observations. Glad to see UBERON - which represents a massive amount of human insight and data curation, and is for all intents and purposes a semantic-web product (OWL-based RDF at its heart) - playing a significant role.
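As a small illustration of what "standardized using established ontologies" looks like in practice, here is a hypothetical sketch that maps free-text tissue labels onto UBERON term IDs and reads the canonical labels back out of the OWL file with rdflib. The file path and the tiny synonym table are assumptions for the example, not anything from the AlphaGenome pipeline:

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

UBERON = "http://purl.obolibrary.org/obo/"
free_text_to_uberon = {           # toy, hand-curated mapping of messy metadata strings
    "Brain tissue": "UBERON_0000955",
    "whole brain":  "UBERON_0000955",
    "Liver":        "UBERON_0002107",
}

g = Graph()
g.parse("uberon-basic.owl")       # assumes a local copy of the UBERON OWL release

for raw_label, term_id in free_text_to_uberon.items():
    term = URIRef(UBERON + term_id)
    canonical = g.value(term, RDFS.label)   # ontology-provided canonical name
    print(f"{raw_label!r} -> {term_id} ({canonical})")
```

The point is that the ontology, not the experimentalist's free text, becomes the join key when aggregating across experiments.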
This is such an interesting problem. Imagine expanding the input size to 3.2 Gbp, the size of the human genome. I wonder if previously unimaginable interactions would occur. Also interesting how everything revolves around U-nets and transformers these days.
You would not need much more than 2 megabases. The genome is not one contiguous sequence. It is organized (physically segregated) into chromosomes and topologically associated domains. IIRC 2 megabases is like the 3 sd threshold for interactions between cis regulatory elements / variants and their effector genes.
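A quick way to see why window size matters so much: if the model attends over fixed-size bins, the number of pairwise interactions a plain attention layer has to score grows quadratically with the window. The 128 bp bin size below is just an assumption for illustration:

```python
# Rough scaling intuition for naive self-attention over binned DNA sequence.
def attention_pairs(seq_len_bp, bin_size_bp=128):
    tokens = seq_len_bp // bin_size_bp
    return tokens, tokens ** 2        # pairwise scores a full attention layer computes

for label, length in [("1 Mb window", 1_000_000),
                      ("2 Mb (typical cis-regulatory reach)", 2_000_000),
                      ("whole genome (3.2 Gbp)", 3_200_000_000)]:
    tokens, pairs = attention_pairs(length)
    print(f"{label:>40}: {tokens:>10,} tokens, {pairs:.2e} attention pairs")
```

Which is one more reason the ~2 Mb biological ceiling on cis-regulatory interactions is convenient: beyond it you pay quadratically for interactions that mostly don't exist.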
Or to a man with a wheel and some magnets and copper wire...
There are technologies applicable broadly, across all business segments. Heat engines. Electricity. Liquid fuels. Gears. Glass. Plastics. Digital computers. And yes, transformers.
When I went to work at Google in 2008 I immediately advocated for spending significant resources on the biological sciences (this was well before DM started working on biology). I reasoned that Google had the data mangling and ML capabilities required to demonstrate world-leading results (and hopefully guide the way so other biologists could reproduce their techniques). We made some progress- we used exacycle to demonstrate some exciting results in protein folding and design, and later launched Cloud Genomics to store and process large datasets for analytics.
I parted ways with Google a while ago (Sundar is a really uninspiring leader), and was never able to transfer into DeepMind, but I have to say that they are executing on my goals far better than I ever could have. It's nice to see ideas that I had germinating for decades finally playing out, and I hope these advances lead to great discoveries in biology.
It will take some time for the community to absorb this most recent work. I skimmed the paper and it's a monster, there's just so much going on.
I understand, but he made Google a cash machine. In the last quarter BEFORE he became CEO in 2015, Google made a quarterly profit of around 3B; Q1 2025 was 35B. A 10x profit growth at this scale is unprecedented; the numbers are inspiring by themselves, and that's his job. He made mistakes, sure, but he stuck to Google's big gun, ads, and it paid off. The transition to AI started late, but Gemini is super competitive overall. DeepMind has been doing great as well.
Sundar is not a hypeman like Sam or Cook, but he delivers. He is very underrated imo.
Like Ballmer, he was set up for success by his predecessor(s), and didn't derail strong growth in existing businesses but made huge fumbles elsewhere. The question is, who is Google's Satya Nadella? Demis?
Since we're on the topic of Microsoft, I'm sure you'd agree that Satya has done a phenomenal job. If you look objectively, what are Satya's accomplishments? One word - Azure. Azure is #2, behind AWS, because of Satya's effective and strategic decisions. But that's it. The "vibes" for Microsoft have changed, but MS hasn't innovated at all.
Satya looked like a genius last year with the OpenAI partnership, but it is becoming increasingly clear that MS has no strategy. Nobody is using GitHub Copilot (the pioneer) or MS Copilot (a joke). They don't have any foundational models, nor a consumer product. Bing is still.. Bing, and has barely gained any market share.
People nowadays don't understand how genius MS was in the 90s.
Their strategy and execution was insanely good, and I doubt we'll ever see anything so comprehensive ever again.
1. Clear mission statement: a PC in every house.
2. A nationwide training + certification program for software engineers and system admins across all of Microsoft's tooling
3. Programming lessons in schools and community centers across the country to ensure kids got started using MS tooling first
4. Their developer operations division was an insane powerhouse; they had an army of in-house technical writers creating some of the best documentation that has ever existed. Microsoft contracted out to real software engineering companies to create fully fledged demo apps to show off new technologies. These weren't hello-world sample apps; they were real applications that had months of effort and testing put into them.
5. Because the internet wasn't a distribution platform yet, Microsoft mailed out huge binders of physical CDs with sample code, documentation, and dev editions of all their software.
6. Microsoft hired the top technical writers to write books on the top MS software stacks and SDKs.
7. Their internal test labs had thousands upon thousands of manual testers whose job was to run through manual tests of all the most popular software, dating back a decade+, ensuring it kept working with each new build of Windows.
8. Microsoft pressed PC OEMs to lower prices again and again. MS also put their weight behind standards like AC'97 to further drop costs.
9. Microsoft innovated relentlessly, from online gaming to smart TVs to tablets. Microsoft was an early entrant in a ton of fields. The first Windows tablet PC was in 1991! Microsoft tried to make smart TVs a thing before there was any content, or even widespread internet adoption (oops). They created some of the first e-readers, the first multimedia PDAs, the first smart infotainment systems, and so on and so forth.
And they did all this with a far leaner team than what they have now!
(IIRC the Windows CE kernel team was less than a dozen people!)
There was some innovation - and some good products (MS Office stands out for me) - however what MS did relentlessly well, as you mentioned, was sales, distribution, and developers.
They also leveraged their relationship with Intel to the max - Wintel was a phrase for a reason. Companies like Apple faltered, in part, in the 90's because of hardware disadvantages.
Often their competitors had superior products - but MS still won through - in part helped by their ruthless leveraging of synergies across their platforms (though as new platforms emerged, the desire to maximise synergies across platforms eventually held them back).
That aggressive, Windows-everywhere behaviour is what united its competitors around things like Java, then Linux and open source in general, which stopped MS's march into the data centre, and got regulators involved when they tried to strangle the web.
> the Windows CE kernel team was less than a dozen people!
It showed
CE was a dog and probably a big part of the reason Windows Phone failed. Migrating off of it was a huge distraction and prevented the app platform from being good for a long time. I was at Microsoft and worked on Silverlight for a bit back then.
Windows Phone 7's kernel was amazing. It was a complete rewrite from the old kernel and had incredible performance, minimal resource usage, and an amazing power profile.
IMHO the reason for Microsoft's failed phone venture was moving onto the Windows kernel and 2xing system requirements.
Really? It’s always felt to me like it was app availability — for all the efforts, the app marketplace was a fraction of a fraction of the competition's, and much like the network effects in social media, if you can’t catch up quickly, it can be almost impossible to ever do so. Haemorrhaging billions per quarter takes a strong stomach and a long vision, one that’s likely to put any executive’s tenure at risk. Nevertheless, it's interesting to think what things might’ve looked like had Microsoft persisted another decade.
One of my internships was at a company writing an example app for SQL Server offline replication: taking a DB that had changed while offline and syncing it to a master DB when reconnection happened. (Back in 2004 or so; nowadays this is an easier thing.)
The company I interned at was hired by MSFT to write a sample app for Fabrikam Fine Furniture that did the following:
1. Sales people on the floor could draw a floorplan on a tablet PC of a desired sectional couch layout, and the pieces would be identified and the order automatically made up.
2. Customer enters their delivery info on the tablet.
3. DB replicated down to the delivery driver's tablet PC when the driver next pulls into the loading bay with all the order info.
4. After the delivery is finished and signed for on the tablet PC, the customer's signature is digitally signed so it cannot be tampered with later.
5. When the delivery truck pulls back into the depot, SQL server replication happens again, syncing state changes from the driver back to the master DB.
That is an insane sample app, just one of countless thousands that Microsoft shipped out. Compare that to the bare-bones hello-world samples you get nowadays.
> Azure is #2, behind AWS because Satya's effective and strategic decisions
I am going to have to disagree with this. Azure is number 2, because MS is number 1 in business software. Cloud is a very natural expansion for that market. They just had to build something that isn't horrible and the customers would have come crawling to MS.
You could just as easily make the argument that cloud is a very natural expansion for Google given their expertise in datacenters and cloud software infrastructure, but they are still behind. Satya absolutely deserves credit for Microsoft's success here.
Microsoft has become a lot more friendly to open source under Satya. VSCode, GitHub, and WSL happened during his tenure, and probably wouldn't have happened under Ballmer. Turning the ship from a focus on protecting platform lock-in to meeting developers where they are is a huge accomplishment IMO.
> Microsoft has become a lot more friendly to open source under Satya.
True, but that's just a few open source projects, albeit influential ones. There are so many other companies doing influential open source projects.
I don't disagree with anything you said, because turning a ship around is hard. But hand-to-heart, what big tech company is truly innovating for the future? Let's look at each company.
Apple - bets are on VR/AR. Apple Car is dead. So it is just Vision Pro
Amazon - No new bets. AWS is printing money, but nothing for the future.
Microsoft - No new bets. They fumbled their early lead in AI.
Google - Gemini, Waymo ..
I think Satya gets a lot more coverage than his peer at Google.
Waymo and DeepMind and the TPU program all predate Sundar as CEO.
IMO Google should have invested more in Waymo and scaled sooner. Instead they partnered with traditional automakers and rideshare companies, sought outside investment, and prioritized a prestige launch in SF over expanding as fast as possible in easier markets.
In other areas they utterly wasted huge initial investments in AR/VR and robotics, remain behind in cloud, and Google X has been a parade of boondoggles (excluding Waymo which, again, predates Sundar and even X itself).
You could also argue that they fumbled AI, literally inventing the transformer architecture but failing at building products. Gemini 2.5 Pro is good, but they started out many years ahead and lost their lead.
Apple - have you used a MacBook recently? Their ARM-based product line is a big step forward - sure, it's not self-driving cars - but it's been the biggest jump in standard PCs for quite a while and has required innovation up and down the stack.
Microsoft - No new bets. Really? Their OpenAI deal and integrating that tech into core products?
Amazon - No new bets? It's still trying drone delivery, and it's also got project Kuiper - moving beyond data centres to providing the network
Diversifying Microsoft away from the traditional cash cow of Windows and Office is the single most important strategy for Microsoft and he executed it well.
His genius is really just making good bets on people, and letting them do their thing.
People like Scott Guthrie, who was a key person behind .NET and went on to be the driving force behind Azure. Anyone who did any .NET work 10+ years ago would know the ScottGu blog and his red shirt.
Google similarly bet on Demis, and the results also show. For someone who got his start doing level design on Syndicate (still one of my all-time favourite games) he's come a long way.
This is kind of bullshit. One can equally say Satya was set up for success by Ballmer, who stepped away graciously, taking all the blame so the new CEO could start unencumbered.
He might have delivered a lot of revenue growth yea, but Google culture is basically gone. Internally we're not very far from Amazon style "performance management"
Read back what you just wrote. It is literally "willy nilly".
"Somethings are because of CEO, and some things are in spite of CEO"
And it was "willy nilly" attributed that enshittification was because of CEO (how do we know? maybe it was CFO, or board) and Gemini because of Demis (how do we know? maybe it was CEO, or CFO, or Demis himself).
You're misunderstanding what he's saying. He's saying Google has started enshittifying products and Sundar gets the blame for that. Sundar is also the CEO, so he gets credit for Gemini. Google's playbook is enshittification though, and if Gemini ever gets a big enough moat, it will be enshittified. Even Gemini 2.5 Pro has gotten worse for me with the small updates; it's not as good as when it first launched. Google topped the benchmarks and then made it worse.
I guess I don't understand why you so strongly believe that CuriouslyC's comment reflects an uninformed opinion without any basis in fact.
I see somebody saying something on here, I tend to assume that they have a reason for believing it.
If your opinions differ from theirs, you could talk about what you believe, instead of incorrectly saying that a CEO can only be responsible for everything or nothing that a company does.
Not really; the pressure to move into AI is so vast that in reality the CEO had little say about moving into it or not, and they already had smart employees to make it a reality. That's vastly different from what happened with enshittification, which Gemini is part of: just recently people were complaining that the turn-off button was hijacked to start Gemini on their Android phones.
Demis reports to Sundar. All of Demis's decisions would have been vetted by and either approved, rejected, or refined by Sundar. There's no way to actually distinguish how much of the value was from whom, unless you have inside info.
Their brand is almost cooked though. At least the legacy search part. Maybe they'll morph into AI center of the future, but "Google" has been washed away.
World is much.. much bigger than HN bubble. Last year, we were all so convinced that Microsoft had it all figured out, and now look at them. Billion is a very, very large number, and sometimes you fail to appreciate how big that is.
Oh, I'm conveying opinions other than mine: tech people I work with, who are actually very, very removed from the HN mindset, were shitting on Google search for a long time this week.
Who didn't? I meant in the future, if this becomes a source of long-term economic value (sorry, but video and image generation have no value; it's laughable, used for cheap needs, and most of the time people are very annoyed by it).
> The transition to AI started late but gemini is super competitive overall.
If by competitive you mean "we spent $75 billion and now have a middle-of-the-pack model somewhere between Anthropic and a Chinese startup", that's a generous way to put it.
Citation needed. Gemini 2.5 pro is one of the best models there is right now, and it doesn't look like they're slowing down. There is a LLM response to basically every single Google search query, it's built into the billions of android phones etc. They're winning.
By competitive, I mean no. 1 in LM Arena overall, in webdev, in image gen, in grounding, etc., plus leading the Chatbot Arena Elo. Flash is the most used model on OpenRouter this month as well. Gemma models are leading on-device stats as well. So yes, competitive.
Except coding, where it’s essentially middle of the pack. Which is the only thing that you can build objective benchmarks around. The fact that people on LM arena prefer the output has no relationship to how intelligent the model actually is.
Gemini 2.5 Pro is excellent. Top model in public benchmarks and soundly beat the alternatives (including all Claudes and that Chinese startup’s flagship) in my company’s internal benchmarks.
I’m no Google lover — in fact I’m usually a detractor due to the overall enshittification of their products — but denying that Gemini tops the pile right now is pure ignorance.
> It's nice to see ideas that I had germinating for decades finally playing out
I'm sure you're a smart person, and probably had super novel ideas, but your reply comes across as super arrogant / pretentious. Most of us have ideas, even impressive ones (here's an example - let's use LLMs to solve world hunger & poverty, and loneliness, & fix capitalism), but it'd be odd to go and say "Finally! My ideas are finally getting the attention".
A charitable view is that they intended "ideas that I had germinating for decades" to be from their own perspective, and not necessarily spurred inside Google by their initiative. I think that what they stated prior to this conflated the two, so it may come across as bragging. I don't think they were trying to brag.
I don't find it rude or pretentious. Sometimes it's really hard to express yourself in, hmm, an acceptable neutral way when you've worked on truly cool stuff. It may look like bragging, but that's probably not the intention. I often face this myself, especially when talking to non-tech people - how the heck do I explain what I work on without giving a primer on computer science!? Often "whenever you visit any website, it eventually uses my code" is a good enough answer (I worked on the AWS EC2 hypervisor, and well, whenever you visit any website, some dependency of it eventually hits AWS EC2).
It is a lot to expect of readers... It's also explicitly asked of us in this forum. https://news.ycombinator.com/newsguidelines.html. "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
it’s fine for a forum to try to have different expectations than the local cafe - that’s kind of like a host asking their guests to remove shoes before walking into their home. but it doesn’t really change a priori basic facts about good writing.
perhaps this is the appropriate forum to reference pg
It's also natural language though; one can find as much ambiguity in there as they care to inject. It hasn't for a single moment come across as pretentious to me, for example.
Think of all the tiresome Twitter discussions that went like "I like bagels -> oh, so you hate croissants?".
Did you ride the Santa Cruz shuttle, by any chance? We might have had conversations about this a long while ago. It sounded so exciting then, and still does with AlphaGenome.
I have incredibly mixed feelings on Sundar. Where I can give him credit is really investing in AI early on, even if they were late to productize it, they were not late to invest in the infra and tooling to capitalize on it.
I also think people are giving maybe a little too much credit to Demis and not enough to Jeff Dean for the massive amount of AI progress they've made.
I found it disappointing that they ignored one of the biggest problems in the field, i.e. distinguishing between causal and non-causal variants among highly correlated DNA loci. In genetics jargon, this is called fine mapping. Perhaps, this is something for the next version, but it is really important to design effective drugs that target key regulatory regions.
One interesting example of such a problem and why it is important to solve it was recently published in Nature and has led to interesting drug candidates for modulating macrophage function in autoimmunity: https://www.nature.com/articles/s41586-024-07501-1
Does this get us closer? Pretty uninformed but seems that better functional predictions make it easier to pick out which variants actually matter versus the ones just along for the ride. Step 2 probably is integrating this with proper statistical fine mapping methods?
Yes, but it's not dramatically different from what is out there already.
There is a concerning gap between prediction and causality. In problems, like this one, where lots of variables are highly correlated, prediction methods that only have an implicit notion of causality don't perform well.
Right now, SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome. These types of methods are quite close to Pearl's notion of causal graphs.
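For readers unfamiliar with the jargon, here is a minimal sketch of the classic single-causal-variant version of statistical fine mapping: Wakefield-style approximate Bayes factors turned into posterior inclusion probabilities (PIPs) within one linkage block. The summary statistics below are made up, and real pipelines (SuSiE, FINEMAP, and similar) go well beyond this by modelling LD and multiple causal variants:

```python
import numpy as np

def wakefield_abf(beta_hat, se, prior_sd=0.15):
    """Approximate Bayes factor (alternative vs. null) for each variant."""
    v, w = se ** 2, prior_sd ** 2
    z2 = (beta_hat / se) ** 2
    return np.sqrt(v / (v + w)) * np.exp(0.5 * z2 * w / (v + w))

# Toy GWAS summary stats for 4 tightly linked variants in one block.
beta_hat = np.array([0.12, 0.11, 0.02, 0.10])
se       = np.array([0.02, 0.02, 0.02, 0.03])

abf = wakefield_abf(beta_hat, se)
pip = abf / abf.sum()          # assumes exactly one causal variant in the block
for i, p in enumerate(pip):
    print(f"variant {i}: PIP = {p:.3f}")
```

In the toy output, the two tightly correlated lead variants split the probability mass, which is exactly the ambiguity the thread is saying functional predictions like AlphaGenome's could, in principle, help break.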
> This has existed for at least a decade, maybe two.
Methods have evolved a lot in a decade.
Note how AlphaGenome prediction at 1 bp resolution for CAGE is poor. Just Pearson r = 0.49. CAGE is very often used to pinpoint causal regulatory variants.
When I was restudying biology a few years ago, it was making me a little crazy trying to understand the structural geometry that gives rise to the major and minor grooves of DNA. I looked through several of the standard textbooks and relevant papers. I certainly didn't find any good diagrams or animations.
So out of my own frustration, I drew this. It's a cross-section of a single base pair, as if you are looking straight down the double helix.
Aka, picture a double strand of DNA as an earthworm. If one of the earthworm's segments is a base pair, and you cut the earthworm in half, turn it 90 degrees, and look into the body of the worm, you'd see this cross-sectional perspective.
Apologies for overly detailed explanation; it's for non-bio and non-chem people. :)
It's not really just base pairs forcing groove structure. The repulsion of the highly charged phosphates, the specific chemical nature of the dihedral bonds making up the backbone and sugar/base bond, the propensity of the sugar to pucker, the pi-pi stacking of adjacent pairs, salt concentration, and water hydration all contribute.
My graduate thesis was basically simulating RNA and DNA duplexes in boxes of water for long periods of time (if you can call 10 nanoseconds "long") and RNA could get stuck for very long periods of time in the "wrong" (IE, not what we see in reality) conformation, due to phosphate/ 2' sugar hydroxyl interactions.
Jeffhwang is correct, and dekhn is thinking way too hard. If you have any asymmetric planar structure that stacks into a helix in the third dimension, there will be a minor groove and a major groove.
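To put a number on that asymmetry: the usual textbook approximation is that the two backbone attachment points of a base pair sit roughly 120 degrees apart around the helix axis rather than 180, so the two gaps between the backbones are unequal, and stacking the pairs sweeps those unequal gaps into the two grooves. Treating that angle as an assumption, a tiny calculation shows the split:

```python
# Assumed: the two glycosidic bonds attach ~120 degrees apart (textbook approximation).
attachment_angle_deg = 120

minor_side = attachment_angle_deg          # narrow gap between the backbones
major_side = 360 - attachment_angle_deg    # wide gap on the opposite side

print(f"narrow side: {minor_side} deg, wide side: {major_side} deg")
print(f"wide/narrow ratio: {major_side / minor_side:.1f}")   # ~2x: major vs minor groove
```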
I bet the internal pitch is that the genome will help deliver better advertisement: if you are at risk of colon cancer they sell you "colon supplements". It's likely they will be able to infer a bit about your personality just from your genome: "these genes are correlated with liking dark humor, use them to promote our new movie".
The predecessor to this model, Enformer, which was developed in collaboration with Calico, had a weight release and a source release.
The precedent I'm going with is specifically in the gene regulatory realm.
Furthermore, a weight release would allow others to finetune the model on different datasets and/or organisms.
> The model source code and weights will also be provided upon final publication.
Page 59 from the preprint[1]
Seems like they do intend to publish the weights actually
[1]: https://storage.googleapis.com/deepmind-media/papers/alphage...
Thank you for this. I did not notice this at the end of the paper.
The bean counters rule. There is no corporate vision, no long-term plan. The numbers for the next quarter are driving everything.
> A single mutated letter in DNA in a single cell can cause a tumor that kills a blue whale.
Side note: whales rarely get cancer.
https://en.wikipedia.org/wiki/Peto's_paradox
https://www.youtube.com/watch?v=1AElONvi9WQ
The folks at Arc are trying to build this! https://arcinstitute.org/news/virtual-cell-model-state
You may enjoy this, from a top-down experimental perspective (https://www.nikonsmallworld.com/galleries/small-world-in-mot...). Only a few entries so far show intracellular dynamics (like this one: https://www.nikonsmallworld.com/galleries/2024-small-world-i...), but I always enjoy the wide variety of dynamics some groups have been able to capture, like nervous system development (https://www.nikonsmallworld.com/galleries/2018-small-world-i...); absolutely incredible.
Very interesting, thanks.
It's a main aim at DeepMind. I hope they succeed as it could be very useful.
'Seeing' inside cells/tissues/organs/organisms is pretty much most modern biological research.
What's missing feels like the equivalent of a "fast-forward" button for cell-scale dynamics
Why simulate? We can already do it experimentally
You can't see what's going on in most cases.
I believe this is where quantum computing comes in, but that could be a decade out; then again, AI acceleration is hard to predict.
I wish there were more interest in general in building true deterministic simulations than black boxes that hallucinate and can't show their work.
Soooo... Jurassic Park ?
This one seems like well done research but in no way revolutionary. People have been doing similar stuff for a while...
It's also a core interest of Demis.
Other labs are definitely doing amazing work too, but often it's either more niche or less public-facing
In biology, Arc Institute is doing great novel things.
Some pharmas like Genentech or GSK also have excellent AI groups.
Arc have just released a perturbation model btw. If it reliably beats linear benchmarks as claimed it is a big step
https://arcinstitute.org/news/virtual-cell-model-state
Well, they are a Google organization. Being backed by a $2T company gives you more benefits than just marketing.
Curious how it'll perform when people start fine-tuning on smaller, specialized datasets
Even just modeling 3D genome organization or ultra-long-range enhancers more realistically could open up new insights
> Also interesting how everything revolves around U-nets and transformers these days.
To a man with a hammer…
Soon we’ll be able to get the whole genome up on the blockchain. (I thought the /s was obvious)
With the huge jump in RNA prediction seems like it could be a boon for the wave of mRNA labs
Those outside the US at least ...
I've been saying we need a rebranding of mRNA in the USA; it's coming.
“in situ therapeutics”
Let’s figure out introns pls
When I went to work at Google in 2008 I immediately advocated for spending significant resources on the biological sciences (this was well before DM started working on biology). I reasoned that Google had the data mangling and ML capabilities required to demonstrate world-leading results (and hopefully guide the way so other biologists could reproduce their techniques). We made some progress- we used exacycle to demonstrate some exciting results in protein folding and design, and later launched Cloud Genomics to store and process large datasets for analytics.
I parted ways with Google a while ago (sundar is a really uninspiring leader), and was never able to transfer into DeepMind, but I have to say that they are executing on my goals far better than I ever could have. It's nice to see ideas that I had germinating for decades finally playing out, and I hope these advances lead to great discoveries in biology.
It will take some time for the community to absorb this most recent work. I skimmed the paper and it's a monster, there's just so much going on.
> Sundar is a really uninspiring leader
I understand, but he made google a cash machine. Last quarter BEFORE he was CEO in 2015, google made a quarterly profit of around 3B. Q1 2025 was 35B. a 10x profit growth at this scale well, its unprecedented, the numbers are inspiring themselves, that's his job. He made mistakes sure, but he stuck to google's big gun, ads, and it paid off. The transition to AI started late but gemini is super competitive overall. Deepmind has been doing great as well.
Sundar is not a hypeman like Sam or Cook, but he delivers. He is very underrated imo.
Like Ballmer, he was set up for success by his predecessor(s), and didn't derail strong growth in existing businesses but made huge fumbles elsewhere. The question is, who is Google's Satya Nadella? Demis?
Since we're on the topic of Microsoft, I'm sure you'd agree that Satya has done a phenomenal job. But if you look objectively, what are Satya's accomplishments? One word: Azure. Azure is #2 behind AWS because of Satya's effective and strategic decisions. But that's it. The "vibes" around Microsoft have changed, but MS hasn't innovated at all.
Satya looked like a genius last year with the OpenAI partnership, but it is becoming increasingly clear that MS has no strategy. Nobody is using GitHub Copilot (the pioneer) or MS Copilot (a joke). They don't have any foundational models, nor a consumer product. Bing is still... Bing, and has barely gained any market share.
People nowadays don't understand how genius MS was in the 90s.
Their strategy and execution was insanely good, and I doubt we'll ever see anything so comprehensive ever again.
1. Clear mission statement: A PC in every house.
2. A nationwide training + certification program for software engineers and system admins across all of Microsoft's tooling
3. Programming lessons in schools and community centers across the country to ensure kids got started using MS tooling first
4. Their developer operations division was an insane powerhouse: they had an army of in-house technical writers creating some of the best documentation that has ever existed. Microsoft contracted out to real software engineering companies to create fully fledged demo apps to show off new technologies; these weren't hello-world sample apps, they were real applications that had months of effort and testing put into them.
5. Because the internet wasn't a distribution platform yet, Microsoft mailed out huge binders of physical CDs with sample code, documentation, and dev editions of all their software.
6. Microsoft hired the top technical writers to write books on the top MS software stacks and SDKs.
7. Their internal test labs had thousands upon thousands of manual testers whose job was to run through manual tests of all the most popular software, dating back a decade+, ensuring it kept working with each new build of Windows.
8. Microsoft pressed PC OEMs to lower prices again and again. MS also put their weight behind standards like AC'97 to further drop costs.
9. Microsoft innovated relentlessly, from online gaming to smart TVs to tablets. Microsoft was an early entrant in a ton of fields. The first Windows tablet PC was in 1991! Microsoft tried to make smart TVs a thing before there was any content, or even widespread internet adoption (oops). They created some of the first e-readers, the first multimedia PDAs, the first smart infotainment systems, and so on and so forth.
And they did all this with a far leaner team than what they have now!
(IIRC the Windows CE kernel team was less than a dozen people!)
There was some innovation, and some good products (MS Office stands out for me), but what MS did relentlessly well, as you mentioned, was sales, distribution, and developers.
They also leveraged their relationship with Intel to the max - Wintel was a phrase for a reason. Companies like Apple faltered, in part, in the 90's because of hardware disadvantages.
Often their competitors had superior products, but MS still won through, helped in part by their ruthless leveraging of synergies across their platforms (though as new platforms emerged, the desire to maximise synergies across platforms eventually held them back).
That aggressive, Windows-everywhere behaviour is what united its competitors around things like Java, then Linux and open source in general, which stopped MS's march into the data centre, and got regulators involved when they tried to strangle the web.
> the Windows CE kernel team was less than a dozen people!
It showed
CE was a dog and probably a big part of the reason Windows Phone failed. Migrating off of it was a huge distraction and prevented the app platform from being good for a long time. I was at Microsoft and worked on Silverlight for a bit back then.
Windows Phone 7's kernel was amazing. It was a complete rewrite from the old kernel and had incredible performance, minimal resource usage, and an amazing power profile.
IMHO the reason for Microsoft's failed phone venture was moving onto the Windows kernel and 2x-ing the system requirements.
Really? It’s always felt to me like it was app availability — for all the efforts, the app marketplace was a fraction of a fraction of the competition’s, and much like the network effects in social media, if you can’t catch up quickly, it can be almost impossible to ever do so. Haemorrhaging billions per quarter takes a strong stomach and a long vision, one that’s likely to put any executive’s tenure at risk. Nevertheless, it’s interesting to think what things might’ve looked like had Microsoft persisted another decade.
> some of the best documentation that has ever existed.
You have got to be kidding. The 90s was my heyday, and Microsoft documentation was extravagantly unhelpful, always.
Compared to today's documentation it is amazing.
One of my internships was at a company writing an example app for SQL Server offline replication: taking a DB that had changed while offline and syncing it to a master DB when reconnection happened. (Back in 2004 or so; nowadays this is an easier thing.)
The company I interned at was hired by MSFT to write a sample app for Fabrikam Fine Furniture that did the following:
1. Salespeople on the floor could draw a floorplan of a desired sectional couch layout on a tablet PC, and the pieces would be identified and the order automatically made up.
2. Customer enters their delivery info on the tablet.
3. DB replicated down to the delivery driver's tablet PC when the driver next pulls into the loading bay with all the order info.
4. After the delivery is finished and signed for on the tablet PC, the customer's signature is digitally signed so it cannot be tampered with later.
5. When the delivery truck pulls back into the depot, SQL server replication happens again, syncing state changes from the driver back to the master DB.
That is an insane sample app, just one of countless thousands that Microsoft shipped out. Compare that to the bare-bones hello-world samples you get nowadays.
> Azure is #2, behind AWS because Satya's effective and strategic decisions
I am going to have to disagree with this. Azure is number 2, because MS is number 1 in business software. Cloud is a very natural expansion for that market. They just had to build something that isn't horrible and the customers would have come crawling to MS.
You could just as easily make the argument that cloud is a very natural expansion for Google given their expertise in datacenters and cloud software infrastructure, but they are still behind. Satya absolutely deserves credit for Microsoft's success here.
I just listened to the Acquired podcast guys talk to Ballmer. Steve actually deserves a huge amount of the credit for Azure that Satya enjoys today.
- Created the windows server product
- Created the "rent a server" business line
- Identified the need for a VM kernel and hired the right people
- Oversaw MSFT's build out of web services (MSN, Xbox Live, Bing) which gave them the distributed systems and uptime know-how
- Picked Satya to take over Azure, and then to succeed him
No, you couldn't. The natural extension is related to customer relationships, familiarity, lock in (somewhat).
Google is not behind capability wise, they are in front of MSFT actually. The customer relationships matter a whole lot more.
Microsoft has become a lot more friendly to open source under Satya. VSCode, GitHub, and WSL happened during his tenure, and probably wouldn't have happened under Ballmer. Turning the ship from a focus on protecting platform lock-in to meeting developers where they are is a huge accomplishment IMO.
> Microsoft has become a lot more friendly to open source under Satya.
True, but that's just a few open source projects, albeit influential ones. There are so many other companies doing influential open source projects.
I don't disagree with anything you said, because turning a ship around is hard. But hand on heart, which big tech company is truly innovating for the future? Let's look at each company.
Apple - bets are on VR/AR. Apple Car is dead. So it is just Vision Pro
Amazon - No new bets. AWS is printing money, but nothing for the future.
Microsoft - No new bets. They fumbled their early lead in AI.
Google - Gemini, Waymo ..
I think Satya gets a lot more coverage than his peer at Google.
Waymo and DeepMind and the TPU program all predate Sundar as CEO.
IMO Google should have invested more in Waymo and scaled sooner. Instead they partnered with traditional automakers and rideshare companies, sought outside investment, and prioritized a prestige launch in SF over expanding as fast as possible in easier markets.
In other areas they utterly wasted huge initial investments in AR/VR and robotics, remain behind in cloud, and Google X has been a parade of boondoggles (excluding Waymo which, again, predates Sundar and even X itself).
You could also argue that they fumbled AI, literally inventing the transformer architecture but failing at building products. Gemini 2.5 Pro is good, but they started out many years ahead and lost their lead.
Apple - have you used a MacBook recently? Their ARM-based product line is a big step forward. Sure, it's not self-driving cars, but it's been the biggest jump in standard PCs for quite a while and has required innovation up and down the stack.
Microsoft - No new bets. Really? Their OpenAI deal and integrating that tech into core products?
Amazon - No new bets? It's still trying drone delivery, and it's also got Project Kuiper, moving beyond data centres to providing the network.
> a lot more friendly to open source under Satya. VSCode, GitHub, and WSL
This is all the first step of embrace, extend, extinguish.
Diversifying Microsoft away from the traditional cash cow of Windows and Office is the single most important strategy for Microsoft and he executed it well.
His genius is really just making good bets on people, and letting them do their thing.
People like Scott Guthrie, who was a key person behind .NET and went on to be the driving force behind Azure. Anyone who did any .NET work 10+ years ago would know the ScottGu blog and his red shirt.
Google similarly bet on Demis, and the results also show. For someone who got his start doing level design on Syndicate (still one of my all-time favourite games) he's come a long way.
> If you look objectively, what is Satya's accomplishments?
Managing to keep the MS Office grift going and even expand it with MS Teams is something
This is kind of bullshit. One could equally say Satya was set up for success by Ballmer, who stepped away graciously, taking all the blame so the new CEO could start unencumbered.
> who is Google's Satya Nadella? Demis?
100% it's Demis.
A Demis vs. Satya setup would be one for the ages.
Demis has the best story arc. The path from Bullfrog and Lionhead games to the tip of the spear in biological research. You can't make this up.
He also happens to be a really nice guy in person.
He might have delivered a lot of revenue growth, yeah, but Google culture is basically gone. Internally we're not very far from Amazon-style "performance management".
To upper management types that’s a feature not a bug.
He delivered revenue growth by enshittifying Goog's products. Gemini is catching up because Demis is a boss and TPUs are a real competitive advantage.
You either attribute both good and bad things to the CEO, or you don't. If enshittification is the CEO's fault, then so is Gemini's success.
Why? We've all seen organizations in which some things happen because of the CEO, and others happen in spite of them.
But you don’t just get to pick which is which willy nilly just to push your opinions
Right, of course, but I don't see any evidence from which to assume that they're picking "willy nilly."
Read back what you just wrote. It is literally "willy nilly".
"Somethings are because of CEO, and some things are in spite of CEO"
And it was attributed willy-nilly that enshittification was because of the CEO (how do we know? maybe it was the CFO, or the board) and Gemini's success because of Demis (how do we know? maybe it was the CEO, or the CFO, or Demis himself).
You're misunderstanding what he's saying. He's saying Google has started enshittifying products and Sundar gets the blame for that. Sundar is also the CEO, so he gets credit for Gemini. Google's playbook is enshittification though, and if Gemini ever gets a big enough moat, it will be enshittified. Even Gemini 2.5 Pro has gotten worse for me with the small updates; it's not as good as when it first launched. Google topped the benchmarks and then made it worse.
at the very least, enshittification is a company policy and gemini is a specific product.
I guess I don't understand why you so strongly believe that CuriouslyC's comment reflects an uninformed opinion without any basis in fact.
If I see somebody saying something on here, I tend to assume that they have a reason for believing it.
If your opinions differ from theirs, you could talk about what you believe, instead of incorrectly saying that a CEO can only be responsible for everything or nothing that a company does.
Not really. The pressure to move into AI is so vast that in reality the CEO had little say in whether to move into it or not, and they already had smart employees to make it a reality. That's vastly different from what happened with enshittification, which Gemini is part of; just recently people were complaining that the power button was hijacked to start Gemini on their Android phones.
Demis reports to Sundar. All of Demis's decisions would have been vetted by and either approved, rejected, or refined by Sundar. There's no way to actually distinguish how much of the value was from whom, unless you have inside info.
The Nobel Committee seemed fairly sure who was responsible for what around those parts.
> Last quarter BEFORE he was CEO in 2015, google made a quarterly profit of around 3B. Q1 2025 was 35B.
Google's revenue in 2014 was $75B and in 2024 it was $348B; that's 4.64 times growth in 10 years, or 3.1 times if corrected for inflation.
And during this time, Google failed to launch any significant new revenue source.
Tim Cook is the opposite of a hypeman.
I like that you are writing as a defense of Google and Sundar.
Their brand is almost cooked though. At least the legacy search part. Maybe they'll morph into AI center of the future, but "Google" has been washed away.
The world is much, much bigger than the HN bubble. Last year, we were all so convinced that Microsoft had it all figured out, and now look at them. A billion is a very, very large number, and sometimes you fail to appreciate how big that is.
Oh, I'm conveying opinions other than mine; tech people I work with, who are actually very far removed from the HN mindset, have been shitting on Google search all week.
Google ads are still everywhere, whether you google or not.
The question will be when and how the LLMs will be attacked with product placements.
Openly marked advertisements in premium models and integrated ads in free-tier ones?
I still hope for a mostly ad-free world, but in reality Google seems to be in a good position now for the transition towards AI (with ads).
> Maybe they'll morph into AI center of the future
Haven't you been watching the headlines here on HN? The volume of major high-quality Google AI releases has been almost shocking.
And, they've got the best data.
Who didn't? I meant in the future, if this becomes a source of long-term economic value (sorry, but video and image generation have no value; it's laughable, used for cheap needs, and most of the time people are very annoyed by it).
> The transition to AI started late but gemini is super competitive overall.
If by competitive you mean "we spent $75 billion and now have a middle-of-the-pack model somewhere between Anthropic and a Chinese startup," that's a generous way to put it.
Citation needed. Gemini 2.5 Pro is one of the best models there is right now, and it doesn't look like they're slowing down. There is an LLM response to basically every single Google search query, and it's built into billions of Android phones, etc. They're winning.
By competitive, I mean no. 1 in LM Arena overall, in webdev, in image gen, in grounding, etc. Plus leading the Chatbot Arena Elo. Flash is the most-used model on OpenRouter this month as well, and Gemma models are leading on-device stats. So yes, competitive.
Except coding, where it’s essentially middle of the pack. Which is the only thing that you can build objective benchmarks around. The fact that people on LM arena prefer the output has no relationship to how intelligent the model actually is.
Gemini 2.5 Pro is excellent. Top model in public benchmarks and soundly beat the alternatives (including all Claudes and that Chinese startup’s flagship) in my company’s internal benchmarks.
I’m no Google lover — in fact I’m usually a detractor due to the overall enshittification of their products — but denying that Gemini tops the pile right now is pure ignorance.
Nice wow 20% of the credit goes to you for thinking of this years ago. Kudos
> It's nice to see ideas that I had germinating for decades finally playing out
I'm sure you're a smart person, and probably had super novel ideas, but your reply comes across as super arrogant / pretentious. Most of us have ideas, even impressive ones (here's an example - let's use LLMs to solve world hunger & poverty and loneliness & fix capitalism), but it'd be odd to go and say "Finally! My ideas are finally getting the attention".
A charitable view is that they intended "ideas that I had germinating for decades" to be from their own perspective, and not necessarily spurred inside Google by their initiative. I think that what they stated prior to this conflated the two, so it may come across as bragging. I don't think they were trying to brag.
I don't find it rude or pretentious. Sometimes it's really hard to express yourself in an, hmm, acceptably neutral way when you've worked on truly cool stuff. It may look like bragging, but that's probably not the intention. I often face this myself, especially when talking to non-tech people - how the heck do I explain what I work on without giving a primer on computer science!? Often "whenever you visit any website, it eventually uses my code" is a good enough answer (I worked on the AWS EC2 hypervisor, and, well, whenever you visit any website, some dependency of it eventually hits AWS EC2).
100% but in this case they uh… didn’t work on it, it seems?
FWIW, I interpreted more as "This is something I wanted to see happen, and I'm glad to see it happening even if I'm not involved in it."
That's correct. I can't even really take credit for any of the really nice work, as much as I wish I could!
Could be either. Nevertheless, while tone is tricky in text, the writer is responsible for resolving ambiguity.
eliminating ambiguity is impossible. the reader should work to find the strongest interpretation of the writer's words
that’s a lot to expect of readers… good writing needs to give readers every opportunity to find the good in it.
It is a lot to expect of readers... It's also explicitly asked of us in this forum. https://news.ycombinator.com/newsguidelines.html. "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
fair point
it’s fine for a forum to try to have different expectations than the local cafe - that’s kind of like a host asking their guests to remove shoes before walking into their home. but it doesn’t really change a priori basic facts about good writing.
perhaps this is the appropriate forum to reference pg
https://paulgraham.com/writing44.html
https://paulgraham.com/essay.htm
It's also natural language, though; one can find as much ambiguity in there as they care to inject. It hasn't for a single moment come across as pretentious to me, for example.
Think of all the tiresome Twitter discussions that went like "I like bagels -> oh, so you hate croissants?".
From Marx to Zizek to Fukuyama [1], in 200 years of leftist thinking nobody has ever come close to saying "we can fix capitalism".
What makes you think that LLMs can do it?
[1] relapsed capitalist, at best, check the recent Doomscroll interview
Yeah it comes off as braggy, but it’s only natural to be proud of your foresight
Natural? Sure. Deserved? Not really, not unless we’re also forthcoming in our lack of foresight and the times we plainly got it wrong.
It's easy to forget how early some of these ideas were being pushed internally
Did you ride the Santa Cruz shuttle, by any chance? We might have had conversations about this a long while ago. It sounded so exciting then, and still does with AlphaGenome.
Googler here ---^
I have incredibly mixed feelings about Sundar. Where I can give him credit is really investing in AI early on: even if they were late to productize it, they were not late to invest in the infra and tooling to capitalize on it.
I also think people are giving maybe a little too much credit to Demis and not enough to Jeff Dean for the massive amount of AI progress they've made.
I found it disappointing that they ignored one of the biggest problems in the field, i.e. distinguishing between causal and non-causal variants among highly correlated DNA loci. In genetics jargon, this is called fine mapping. Perhaps this is something for the next version, but it is really important for designing effective drugs that target key regulatory regions.
One interesting example of such a problem and why it is important to solve it was recently published in Nature and has led to interesting drug candidates for modulating macrophage function in autoimmunity: https://www.nature.com/articles/s41586-024-07501-1
Does this get us closer? I'm pretty uninformed, but it seems that better functional predictions make it easier to pick out which variants actually matter versus the ones just along for the ride. Step 2 is probably integrating this with proper statistical fine-mapping methods?
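Purely as an illustration of what that integration might look like, here's a toy single-causal-variant fine-mapping sketch where a functional prediction is used as a prior over variants in a linkage block. The variant data, scores, and prior-weighting scheme are all made up; this is not AlphaGenome's pipeline or any published method's exact implementation, just the standard approximate-Bayes-factor-plus-informative-prior idea:

```python
import numpy as np

def wakefield_abf(z, se, prior_var=0.04):
    # Approximate Bayes factor in favour of association (Wakefield-style ABF).
    # z: GWAS z-score; se: standard error of the effect estimate.
    V = np.asarray(se, float) ** 2
    r = prior_var / (prior_var + V)
    return np.sqrt(1.0 - r) * np.exp((np.asarray(z, float) ** 2) * r / 2.0)

def fine_map_single_causal(z, se, functional_score):
    # Posterior inclusion probabilities assuming exactly one causal variant in
    # the block, with priors proportional to a functional score (e.g. a
    # predicted regulatory effect from a sequence model).
    bf = wakefield_abf(z, se)
    prior = np.asarray(functional_score, float)
    prior = prior / prior.sum()
    post = prior * bf
    return post / post.sum()

# Four tightly linked variants with near-identical GWAS signals: LD alone can't
# separate them, but a functional score shifts the posterior toward one of them.
z = [5.1, 5.0, 4.9, 1.0]
se = [0.1, 0.1, 0.1, 0.1]
func = [0.90, 0.10, 0.10, 0.05]  # hypothetical predicted effects on expression
print(fine_map_single_causal(z, se, func).round(3))
```

The point being: the functional prediction only enters as a prior; the statistical fine mapping still does the causal heavy lifting within the LD block.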
Yes, but it's not dramatically different from what is out there already.
There is a concerning gap between prediction and causality. In problems like this one, where lots of variables are highly correlated, prediction methods that only have an implicit notion of causality don't perform well.
Right now, SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome. These types of methods are quite close to Pearl's notion of causal graphs.
> SOTA seems to use huge population data to infer causality within each linkage block of interest in the genome.
This has existed for at least a decade, maybe two.
> There is a concerning gap between prediction and causality.
Which can be bridged with protein prediction (alphafold) and non-coding regulatory predictions (alphagenome) amongst all the other tools that exist.
What is it that does not exist that you "found it disappointing that they ignored"?
> This has existed for at least a decade, maybe two.
Methods have evolved a lot in a decade.
Note how AlphaGenome's prediction at 1 bp resolution for CAGE is poor: just Pearson r = 0.49. CAGE is very often used to pinpoint causal regulatory variants.
Just add startofficial intel.
Naturally, the (AI-generated?) hero image doesn't properly render the major and minor grooves. :-)
When I was restudying biology a few years ago, it was making me a little crazy trying to understand the structural geometry that gives rise to the major and minor grooves of DNA. I looked through several of the standard textbooks and relevant papers. I certainly didn't find any good diagrams or animations.
So out of my own frustration, I drew this. It's a cross-section of a single base pair, as if you are looking straight down the double helix.
Aka, picture a double strand of DNA as an earthworm. If one of the earthworm's segments is a base pair, and you cut the earthworm in half, turn it 90 degrees, and look into the body of the worm, you'd see this cross-sectional perspective.
Apologies for overly detailed explanation; it's for non-bio and non-chem people. :)
https://www.instagram.com/p/CWSH5qslm27/
Anyway, I think the way base pairs bond forces this major and minor groove structure observed in B-DNA.
It's not really just base pairs forcing groove structure. The repulsion of the highly charged phosphates, the specific chemical nature of the dihedral bonds making up the backbone and sugar/base bond, the propensity of the sugar to pucker, the pi-pi stacking of adjacent pairs, salt concentration, and water hydration all contribute.
My graduate thesis was basically simulating RNA and DNA duplexes in boxes of water for long periods of time (if you can call 10 nanoseconds "long") and RNA could get stuck for very long periods of time in the "wrong" (IE, not what we see in reality) conformation, due to phosphate/ 2' sugar hydroxyl interactions.
Jeffhwang is correct, and dekhn is thinking way too hard. If you have any asymmetric planar structure that stacks into a helix in the third dimension, there will be a minor groove and a major groove.
For anyone wondering: https://www.mun.ca/biology/scarr/MGA2_02-07.html
Maybe they were depicting RNA? (probably not)
No; what they drew doesn't look like real DNA or (duplex double stranded) RNA. Both have differently sized/spaced grooves (see https://www.researchgate.net/profile/Matthew-Dunn-11/publica...).
At least they got the handedness right.
When a human does it, it's style! When AI does it, you cry about your job.
And yet still manages to be 4MB over the wire.
That's only on high-resolution screens. On lower resolution screens it can go as low as 178,820 bytes. Amazing.
Maybe "Release" requires a bit more context, as it clearly means different things to different people:
> AlphaGenome will be available for non-commercial use via an online API at http://deepmind.google.com/science/alphagenome
So, essentially the paper is a sales pitch for a new Google service.
I bet the internal pitch is that the genome will help deliver better advertising: if you are at risk of colon cancer, they sell you "colon supplements". It's likely they will be able to infer a bit about your personality just from your genome: "these genes are correlated with liking dark humor, use them to promote our new movie".
Can't wait for people to use it for CRISPR and have it hallucinate some weird mutation.