CohereLabs · Wayy Research

Aetheris Playground

A hybrid Mamba-MoE multilingual model — 800M parameters distilled from tiny-aya-global (3.35B) with 4.2x compression. Chat in any of 67 languages and it responds in kind.
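As a quick sanity check, the quoted 4.2x compression follows directly from the two parameter counts stated above (a minimal arithmetic sketch, using only the numbers in this page):

```python
# Sanity-check the compression ratio quoted in the banner.
teacher_params = 3.35e9  # tiny-aya-global, per the description above
student_params = 0.8e9   # Aetheris (800M parameters)

ratio = teacher_params / student_params
print(f"{ratio:.2f}x compression")  # 4.19x, rounded to 4.2x on the page
```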

- 4.2x compression
- 800M parameters
- 3.1x faster inference
- 67 languages
- 24 hybrid layers (SSM + MoE)
🟡 Early preview — Aetheris is running live inference from a Stage 3 SFT checkpoint (step 500/5000). Output will be mostly gibberish until training progresses further. Expect ~3 sec/token on CPU. Try short prompts!
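The "try short prompts" advice follows from the ~3 sec/token CPU figure above; a rough wall-clock estimate (the token counts below are illustrative, not limits of the demo):

```python
# Rough response-time estimate at ~3 s/token on CPU (figure from the banner).
SEC_PER_TOKEN = 3

for n_tokens in (16, 64, 256):
    total_s = n_tokens * SEC_PER_TOKEN
    print(f"{n_tokens:3d} tokens -> ~{total_s / 60:.1f} min")
```

Even a modest 64-token reply takes about three minutes, which is why short prompts (and short expected replies) make the preview far more usable.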

Chat with Aetheris in any of its supported languages — it detects the language of your message and responds in the same one. Try switching languages mid-conversation to see its multilingual capabilities.

