embedl/Llama-3.2-3B-Instruct-FlashHead-W4A16

Tags: Safetensors · flash_head_llama · text-generation-inference · custom_code · compressed-tensors
License: embedl-models-community-licence-1.0
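The custom_code and compressed-tensors tags, together with the chat_template.jinja file listed below, suggest the checkpoint is meant to be loaded through the standard transformers auto classes with remote code enabled. A minimal loading sketch under that assumption; the prompt and generation settings are illustrative only, and the compressed-tensors package may also need to be installed for the W4A16 weights to load:

```python
# Minimal loading sketch (assumption: the repo's custom flash_head_llama code
# works with the standard transformers auto classes when trust_remote_code=True).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "embedl/Llama-3.2-3B-Instruct-FlashHead-W4A16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # needed for the custom_code (flash_head_llama) files
    device_map="auto",
)

# Illustrative chat prompt, formatted with the repo's chat template.
messages = [{"role": "user", "content": "Give a one-sentence summary of W4A16 quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```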
Branch: main · 3.25 GB · 2 contributors · 15 commits
Latest commit: 006a2e6 (verified) · swaze · "Upload 3 files" · 22 days ago
| File | Size | Last commit | Updated |
|---|---|---|---|
| assets/ | - | Upload folder using huggingface_hub | about 1 month ago |
| flash_head_assets/ | - | Upload folder using huggingface_hub | about 1 month ago |
| .gitattributes | 1.63 kB | Upload folder using huggingface_hub | about 1 month ago |
| README.md | 6.1 kB | Update README.md | 30 days ago |
| chat_template.jinja | 3.83 kB | Upload folder using huggingface_hub | about 1 month ago |
| config.json | 2.41 kB | Upload 3 files | 22 days ago |
| configuration_flash_head_llama.py | 73 Bytes | Upload 3 files | 22 days ago |
| generation_config.json | 184 Bytes | Upload folder using huggingface_hub | about 1 month ago |
| model.safetensors | 3.18 GB | Upload folder using huggingface_hub | about 1 month ago |
| modeling_flash_head_llama.py | 78 Bytes | Upload 3 files | 22 days ago |
| recipe.yaml | 474 Bytes | Upload folder using huggingface_hub | about 1 month ago |
| special_tokens_map.json | 296 Bytes | Upload folder using huggingface_hub | about 1 month ago |
| tokenizer.json | 17.2 MB | Upload folder using huggingface_hub | about 1 month ago |
| tokenizer_config.json | 50.5 kB | Upload folder using huggingface_hub | about 1 month ago |