- 1.58-bit FLUX
  Paper • 2412.18653 • Published • 86
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 627
- BitNet a4.8: 4-bit Activations for 1-bit LLMs
  Paper • 2411.04965 • Published • 69
- BitNet: Scaling 1-bit Transformers for Large Language Models
  Paper • 2310.11453 • Published • 105
Seojune Lee (vantaa32)
AI & ML interests: None yet
Recent Activity
- liked a Space about 19 hours ago: wenhanacademia/ai-paper-finder
- authored a paper 3 months ago: QWHA: Quantization-Aware Walsh-Hadamard Adaptation for Parameter-Efficient Fine-Tuning on Large Language Models