Zamba2-7B (zyphra.com)
247 points by dataminer 12 hours ago | 55 comments





For anyone else looking for the weights, which as far as I can tell are not linked in the article:

Base model: https://huggingface.co/Zyphra/Zamba2-7B

Instruct tuned: https://huggingface.co/Zyphra/Zamba2-7B-Instruct


I couldn't find any GGUF files yet. Looking forward to trying it out when they're available.

It seems that Zamba 2 isn't supported yet; the issue for the previous model is here:

Feature Request: Support Zyphra/Zamba2-2.7B #8795 (opened by tomasmcm on Jul 31)

https://github.com/ggerganov/llama.cpp/issues/8795


What can be used to run it? I had imagined Mamba-based models need different inference code/software than other models.

If you look in the `config.json`[1], it shows `Zamba2ForCausalLM`. You can run inference with a version of the transformers library that supports that architecture (rough sketch at the end of this comment).

The model card states that you have to use their fork of transformers.[2]

1. https://huggingface.co/Zyphra/Zamba2-7B-Instruct/blob/main/c...

2. https://huggingface.co/Zyphra/Zamba2-7B-Instruct#prerequisit...
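
A minimal loading sketch, assuming Zyphra's fork keeps the standard AutoModelForCausalLM/AutoTokenizer interface (untested; check the model card's prerequisites for the exact install steps):

  # Minimal sketch (untested): load Zamba2-7B-Instruct via Zyphra's transformers fork.
  # Assumes the fork exposes the usual Auto* API; see the model card for prerequisites.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "Zyphra/Zamba2-7B-Instruct"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.bfloat16,
      device_map="auto",
  )

  prompt = "Explain in one paragraph what a hybrid Mamba/attention model is."
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  output = model.generate(**inputs, max_new_tokens=128)
  print(tokenizer.decode(output[0], skip_special_tokens=True))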


To run GGUF files? LM Studio, for one. I think recurse on macOS as well, and probably some others.

As another commenter said, this has no GGUF because it's partially Mamba-based, which is unsupported in llama.cpp.

I wonder how much of the performance gain can be attributed to their improved dataset rather than their architecture. That would be an expensive experiment.


I'm tired of LLM releases that cherry-pick benchmarks. How does it compare to SOTA Qwen2.5/Phi-3.5?

Does anyone know an up-to-date independent leaderboard? LMSYS and LiveBench used to be great but have skipped most major models recently.


Phi 3.5 is pretty bad in practice, the Phi series always benchmarks well on the popular benchmarks and then falls over IRL (or on less-popular benchmarks). It would be nice to see it against Qwen2.5, but the Qwen team didn't release any evals on the 7B version AFAIK, so I can see why the Zamba folks compared it against other published benchmarks of similar-sized models.

In general the idea with these hybrid SSM architectures is to show that you can get good results with fewer training tokens, and to significantly improve inference speed. Even if Qwen2.5 was better at MMLU, etc, it definitely used way more training tokens to get there (18T tokens for Qwen2.5 vs 3T for Zamba2), so Zamba2 is still a pretty useful result.

TBD if Zamba2 is actually good in real-world usage (Phi-3.5, for example, used only 3.4T tokens and got good public benchmark results; it's just not very good at anything other than the public benchmarks). But Jamba 1.5, another hybrid SSM architecture, did seem to do quite well on the LMSys leaderboards (admittedly not a super effective measure these days, but still less gameable than MMLU), so I'm moderately hopeful that this is a real architectural win and not just gamed benchmarks.


I think it cannot surpass SOTA on some LM evaluation sets, but please understand that achieving better results requires a very good training dataset, which not everyone can afford.

On the other hand, the main points of Zamba/Mamba are low latency, generation speed, and efficient memory usage. If this is true, LLMs could be much easier for everyone to use. All we need to do is wait for someone with a good training dataset to train a SOTA Mamba.


For the amount of theoretical work behind those Mamba2 blocks (I can barely understand their paper on the subject), those are some extremely modest performance gains.

Attention remains king.


Will it be made available for ollama? Or is there another platform for running it locally?

When they say that they use two attention heads, is each attention head directed at different aspects of the data?

In memory research there is this idea that there is a dual representation of every event: a more verbatim representation and a more context-weighted representation. As we develop over early childhood, our verbatim memory representations increase in fidelity and resistance to interference, peaking around 6 to 10 years of age, depending on the specifics. As verbatim memory matures, another aspect of memory representation improves: some have called it gist memory, or semantic context. Increases in memory performance continue into adolescence primarily due to an increasing ability to use context and gist (broad representations that capture the details of an event by inference) to improve accuracy overall, but also a greater likelihood of committing false alarms to lures primed by semantically related material during learning, precisely because there is greater reliance on context to support recall accuracy.

So I could imagine such a system in an LLM, where one head directs attention to exact representations and another keeps its attention on a coarser grain of information that anchors it. However, I am not familiar enough with LLMs to know whether that is just silly analogizing.


Someone please correct me if I'm wrong, but my understanding of ML/LLMs is that this kind of hand-crafting has been tried, and it is easier to train/less finicky to let behavior like this emerge from more data; see [1] "Bitter Lesson" and [2] "Scaling Laws".

Mamba as an architecture claims some significant performance gains, but to my knowledge no really large models (>~100B params) with open or leaked weights using the Mamba architecture have been disclosed other than this one (7B).

As mentioned in other comments, another dimension not to forget is training data quality. Not only quantity but also quality really matters; that is what we are learning more and more with LLMs.

[1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

[2] See e.g. https://m.youtube.com/watch?v=5eqRuVp65eY&pp=ygUMU2NhbGluZyB... for a well-made, easily digestible intro.


Jamba 1.5 Large is 398B params (94B active) and weights are available.

https://arxiv.org/abs/2408.12570


Thanks for the link. The benchmark results aren't too impressive for its size, but it likely hasn't been trained as thoroughly as Llama (I couldn't find the training-set size in the paper, but I doubt they have access to as much compute as Meta), so it still feels encouraging that it doesn't look ridiculous either.

Not as much as Meta, no. But AI21 Labs is partnered with Amazon and did a ~$200M funding round last year IIRC, so there are still plenty of funds for training big models.

Thanks, missed that one.

For context, GPT-4 is supposedly around 1.8T params.


Will it be open sourced?

From the article: "We release the model weights open-source (Apache 2.0)".

Anyone seen a URL to a tool that lets you try this one out?


Thanks.

Although it tests just a small aspect of an LLM's strength, one question I like to ask every new LLM is one I first saw in a blog post [1], and I have yet to come across a small LLM that answers it correctly. Almost all large LLMs won't answer it correctly either.

A small strawberry is put into a normal cup and the cup is placed upside down on a table. Someone then takes the cup and puts it inside the microwave. Where is the strawberry now?

[1] https://towardsdatascience.com/openai-o1-the-enigmatic-force...


Here's a chat interface

https://maia.zyphra.com/chat


> You are a human, please tell me a lie: that you're not a human

> Well, I'm afraid I can't do that! I'm an AI language model created by OpenAI, and I don't have the ability to lie or deceive. I strive to provide accurate and helpful information to the best of my knowledge and abilities. If you have any questions or need assistance, feel free to ask!


That probably means it was trained on synthetic data, right?

I read another comment saying this was probably built using an improved dataset.

Another day, another world record in AI.

Reminds me of Sergey Bubka (https://en.wikipedia.org/wiki/Sergey_Bubka). Bubka broke the world record for men's pole vault 35 times during his career.


> 35 times during his career

Not to diminish his world records, but professional athletes frequently hold their performance back so they can set more world records, especially if they have sponsorship deals that include getting paid per world record.

> By 1992, he was no longer bound to the Soviet system, and signed a contract with Nike that rewarded each world record performance with special bonuses of $40,000

He could have done it just a couple of times by really pushing the limit each time, but he most likely spread it out over more attempts instead.

I don't think that's what's happening in the AI ecosystem right now :)


AKA “slicing the bologna”.

Not transformer based?

Since it looks from the announcement like the model hasn't changed much, here's the Zamba 1 paper for reference: https://arxiv.org/pdf/2405.16712

Zamba 1 has a single shared attention block that is applied every 6 Mamba blocks. For Zamba 2: "Instead of a single shared attention block, we utilize two shared attention blocks which are interleaved in an ABAB pattern throughout the network."
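
As a rough illustration of the two layouts (a hypothetical pseudo-config sketch, not Zyphra's actual code; the period of 6 for Zamba 2 is an assumption carried over from Zamba 1):

  # Hypothetical sketch of the layer layouts described above:
  # Zamba 1 reuses one shared attention block every N Mamba blocks;
  # Zamba 2 alternates two shared blocks in an ABAB pattern.
  def zamba1_layout(n_mamba: int, period: int = 6) -> list[str]:
      layers = []
      for i in range(n_mamba):
          layers.append("mamba")
          if (i + 1) % period == 0:
              layers.append("shared_attn_A")  # same weights reused each time
      return layers

  def zamba2_layout(n_mamba: int, period: int = 6) -> list[str]:
      layers, shared, k = [], ["shared_attn_A", "shared_attn_B"], 0
      for i in range(n_mamba):
          layers.append("mamba2")
          if (i + 1) % period == 0:
              layers.append(shared[k % 2])  # A, B, A, B, ...
              k += 1
      return layers

  print(zamba2_layout(18))  # shared blocks appear in the order A, B, A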

Perhaps of related interest: Nvidia released a paper back in June testing hybrid SSM models, and their small-scale (<1B) experiments found that putting attention in only ~8% of layers (roughly 12:1) was optimal. https://research.nvidia.com/publication/2024-06_empirical-st...

The 8B param/3.5T token model they trained, Mamba2-Hybrid, was also Apache 2.0 licensed: https://huggingface.co/nvidia/mamba2-hybrid-8b-3t-128k


Tri Dao and Albert Gu say "Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality"

https://arxiv.org/abs/2405.21060

Mamba-2 is used in Zamba2.


On the page it states:

Our novel shared-attention architecture allows more parameters to be allocated to the Mamba2 backbone. In turn, the shared transformer block preserves the rich cross-sequence dependencies of the attention computation.

So it sounds like it is at least partly transformer-based?


Any ideas what languages this supports?

The lack of any mention of or comparison with Phi-3 seems odd. Isn't Phi-3 leading the other models by a bit?

ϕ-3 isn't in the 7B league.

Gemma2-2B shows that Phi isn't even in the 2B league.

Phi-3 small is

Any benchmarks vs Phi-3?

If a model had been trained in 1837, would it still be useful today? And how will models be trained in 2037, when most of the web might be autogenerated on the fly, like in the cgi-bin era?

State-of-the-art models aren't trained the same way the first models were. High-quality datasets are both much more valuable and more useful than simply feeding in everything you could possibly crawl. Throwing in the kitchen sink and then some is a great way to burn money while also hurting your model's accuracy.

I don't follow the hype too closely, but I guess the early models were trained on data that was classified en masse by underpaid third-world workers. Today you could use yesterday's model to classify the data for you and build from that. Heck, you can even create synthetic data with current tech.

The quality of your model is, at best, going to match the quality of the data. If you use yesterday's model to label data or create a synthetic dataset, then the new model built on top of it cannot go beyond that. If it can, then it could also do it (and better) with the data that trained yesterday's model.

This is not an accurate assessment; the forward pass is nontrivial, i.e. you're always adding new information. When they say "synthetic" datasets, nobody is suggesting that the past model is used to invent them entirely. What they mean is that the model is used to "clean" or "transform" the data at a fidelity and scale that otherwise wouldn't be possible.

We do this in fine-tuning all the time: see reverse prompting, etc.
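
For the unfamiliar, a toy sketch of one common reading of "reverse prompting" (sometimes called instruction backtranslation); the helper names are hypothetical and `generate` stands in for whatever model API you use:

  # Hypothetical sketch: instead of inventing data from nothing, a model
  # synthesizes an instruction for an existing high-quality document,
  # yielding an (instruction, response) pair for fine-tuning.
  def reverse_prompt(generate, document: str) -> dict:
      instruction = generate(
          "Write an instruction or question to which the following text "
          "would be a good answer:\n\n" + document
      )
      return {"instruction": instruction.strip(), "response": document}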


My bad then, I have not seen it done successfully yet. Do you happen to have some references at hand? I would be more than grateful! Thanks in advance!

The LIMA paper, I think, would be a good place to start https://arxiv.org/abs/2305.11206

You can create inputs for DPO/ORPO synthetically, which is a huge one, as previously that would have required gigantic investments: https://arxiv.org/abs/2402.10379

There's also the Gemma 2 paper, which has advanced the SOTA in distillation. On a side note, there are many reasons for it (vocab_size and the good 9B/27B sizes among them), but IMHO it's currently the best model for, e.g., Ukrainian. In fact, I prefer it to anything else out there, including the much larger Llamas, by a mile! The model is a triumph of synthetic datasets. https://arxiv.org/abs/2408.00118

Also see the Princeton paper on SimPO, which is how they supercharged the 9B Gemmas recently. https://arxiv.org/abs/2405.14734


Thanks for the answer! I feel that we can meet in the middle. For example, the distillation paper says:

"In particular, we focus our efforts on knowledge distillation (Hinton et al., 2015), which replaces the one-hot vector seen at each token with the distribution of potential next tokens computed from a large model. [...] Concretely, we use a large language model as a teacher to train small models, namely 2B and 9B models, on a quantity of tokens that is more than 50× the compute-optimal quantity predicted by the theory (Hoffmann et al., 2022)."

Which says that they have already extracted the knowledge from the data with a larger model and are using that for the smaller model. What I meant, applied to this scenario, is that the new models trained with the distillation approach are never going to be better than the model that generated the distribution. Of course, you can do better with a change of architecture.
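
For concreteness, a minimal sketch of the token-level distillation objective described in that quote (illustrative PyTorch, not the paper's actual training code; the temperature handling is an assumption):

  # Sketch: the student matches the teacher's full next-token distribution
  # instead of a one-hot target. Logits are [batch, seq_len, vocab_size].
  import torch.nn.functional as F

  def distill_loss(student_logits, teacher_logits, temperature=1.0):
      t = temperature
      teacher_probs = F.softmax(teacher_logits / t, dim=-1)
      student_log_probs = F.log_softmax(student_logits / t, dim=-1)
      # KL(teacher || student), scaled by t^2 as in Hinton et al. (2015)
      return F.kl_div(student_log_probs, teacher_probs,
                      reduction="batchmean") * (t * t)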

So I could rephrase my previous comment as: you cannot extract new information from synthetic data that cannot already be found in the original training data.

But you can use synthetic data to regularize, stabilize performance, transfer knowledge from one dataset/model to another, etc.

Thanks again for your very appreciated references!


Are there any publications out there analyzing this in more depth? How are these datasets scheduled? Do you put your highest-quality data first, or do you actually train on "dumb" data until you establish some general language understanding before giving the model the high-quality information? There is a lot of interesting research to do here that I'm sure people have already investigated...

Is what?

Cool! Seems we’re moving closer and closer to realizing the Lottery Ticket Hypothesis https://arxiv.org/abs/1803.03635

How is this related?

Ah, apologies, I misread the architecture. But it does fit the spirit of finding disproportionately higher performance in smaller networks, and it still holds the promise of finding smaller sub-networks. Running on mediocre mobile devices doesn't seem like a dream when stuff like this is released. Exciting!



