Instructions to use Delta-Vector/Archaeo-32B-KTO with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Delta-Vector/Archaeo-32B-KTO with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Delta-Vector/Archaeo-32B-KTO")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Archaeo-32B-KTO")
model = AutoModelForCausalLM.from_pretrained("Delta-Vector/Archaeo-32B-KTO")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Delta-Vector/Archaeo-32B-KTO with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Delta-Vector/Archaeo-32B-KTO"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Archaeo-32B-KTO",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Delta-Vector/Archaeo-32B-KTO
```
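The curl request above sends a plain OpenAI-style JSON body. A minimal Python sketch that builds the same request body (the field names mirror the curl example; the endpoint URL assumes the vLLM server started in the previous step):

```python
import json

def chat_payload(model: str, user_content: str) -> str:
    # Build the OpenAI-compatible chat-completions request body,
    # matching the curl example above.
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return json.dumps(body)

payload = chat_payload("Delta-Vector/Archaeo-32B-KTO", "What is the capital of France?")
print(payload)
# POST this string to http://localhost:8000/v1/chat/completions
# with Content-Type: application/json (e.g. via requests or the openai client).
```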
- SGLang
How to use Delta-Vector/Archaeo-32B-KTO with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Delta-Vector/Archaeo-32B-KTO" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Archaeo-32B-KTO",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Delta-Vector/Archaeo-32B-KTO" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Archaeo-32B-KTO",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use Delta-Vector/Archaeo-32B-KTO with Docker Model Runner:
```shell
docker model run hf.co/Delta-Vector/Archaeo-32B-KTO
```
```
                        __~a~_
                       ~~;  ~_
         _            ~  ~_                _
        '_\;__._._._._._._]         ~_._._._._._.__;/_`
        '(/'/'/'/'|'|'|'| (          )|'|'|'|'\'\'\'\)'
        (/ / / /, | | | |(/          \) | | | ,\ \ \ \)
       (/ / / / / | | | ~(/          \) ~ | | \ \ \ \ \)
      (/ / / / /  ~ ~ ~   (/         \)   ~ ~  \ \ \ \ \)
      (/ / / /    ~      / (||)|      ~        \ \ \ \)
        ~ / /     ~     M  /||\M       ~        \ \ ~
          ~ ~             /||\            ~      ~
                         //||\\
                         //||\\
                         //||\\
                         '/||\'         "Archaeopteryx"
```
Support me on Ko-Fi: https://ko-fi.com/deltavector
Part of a series of merges made for roleplaying and creative writing, this model is an RL (KTO) train on top of Archaeo, a merge of Hamanasu-Magnum and Kunou. Trained with Axolotl on 8xH200s.
ChatML formatting

```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
Axolotl Configuration
```yaml
base_model: ./model

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
cut_cross_entropy: false

load_in_8bit: false
load_in_4bit: false
strict: false

rl: kto
kto_undesirable_weight: 1.0

datasets:
  - path: Delta-Vector/Tauri-Opus-Accepted-GPT-Rejected-Opus-Writing-Prompts
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-IFeval-Dans-Tulu-KTO
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-KTO-Instruct-Mix
    split: train
    type: chatml.argilla
  - path: Delta-Vector/Tauri-Purpura-Arkhaios-CC-KTO
    split: train
    type: chatml.argilla

dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ./archaeo-kto-v2
remove_unused_columns: false

#lora_mlp_kernel: true
#lora_qkv_kernel: true
#lora_o_kernel: true

adapter: lora
lora_model_dir:

sequence_len: 8192
pad_to_sequence_len: false

lora_r: 64
lora_alpha: 32
lora_dropout: 0.0
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj

wandb_project: Francois-V2
wandb_entity:
wandb_watch:
wandb_name: Archaeo-32b-KTO
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
optimizer: paged_ademamix_8bit
lr_scheduler: constant_with_warmup
learning_rate: 5e-6
max_grad_norm: 0.001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.0025
fsdp:
fsdp_config:
```
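The batch settings in the config imply an effective global batch size; a quick sketch of the arithmetic, assuming one data-parallel process per GPU on the 8xH200 node mentioned in the description (DeepSpeed ZeRO-3 shards states but keeps data parallelism):

```python
# Values taken directly from the Axolotl config above.
gradient_accumulation_steps = 4
micro_batch_size = 4
num_gpus = 8  # assumption: one process per H200

# Samples contributing to each optimizer step across all ranks.
effective_batch_size = gradient_accumulation_steps * micro_batch_size * num_gpus
print(effective_batch_size)  # 128
```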
Quants:
Credits
Thank you to: Kubernetes-bad, LucyKnada, Kalomaze, Alicat, Intervitens, Samantha Twinkman, Tav, Trappu & The rest of Anthracite