Commit 8a1a81a
ChatBIA API Server for Hugging Face Spaces
- FastAPI server with dual AI models (General + BSL)
- Auto-download models from Hugging Face Hub on startup
- Qwen2.5-3B-Instruct-Q4_K_M (General mode)
- ChatBIA-3B-v0.1-Q4_K_M (BSL accounting mode)
- Docker deployment configuration
- CORS support for Android app
- No model files in repo (downloaded from HF Hub)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
- .gitattributes +3 -0
- .gitignore +25 -0
- DEPLOY.md +116 -0
- Dockerfile +26 -0
- README.md +129 -0
- README_HF.md +53 -0
- app.py +200 -0
- main.py +193 -0
- requirements.txt +6 -0
.gitattributes
ADDED
@@ -0,0 +1,3 @@
*.gguf filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1,25 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
env/
ENV/

# Models (large files)
models/*.gguf

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Logs
*.log
DEPLOY.md
ADDED
@@ -0,0 +1,116 @@
# Hugging Face Spaces Deployment Guide

## Step 1: Create a Hugging Face Account

1. Go to https://huggingface.co/
2. Sign up (a GitHub account works)
3. A free account is enough!

## Step 2: Create a Space

1. Go to https://huggingface.co/new-space
2. Settings:
   - **Space name**: `chatbia-api` (or any name you like)
   - **License**: MIT
   - **Space SDK**: Docker
   - **Space hardware**: CPU basic (free)
   - **Visibility**: Public

## Step 3: Set Up Git

### Initialize Git locally

```bash
cd ChatBIA-Server

# Initialize Git
git init
git lfs install
git lfs track "*.gguf"

# Add the Hugging Face remote
git remote add origin https://huggingface.co/spaces/YOUR-USERNAME/chatbia-api

# Add the files
git add .
git commit -m "Initial commit"
```

## Step 4: Add the Model Files

**Important**: The model files must be uploaded with Git LFS!

```bash
# Copy the GGUF files from the models folder
cp ../ChatBIA-Windows/models/*.gguf .

# Add them via Git LFS
git add Qwen2.5-3B-Instruct-Q4_K_M.gguf
git add ChatBIA-3B-v0.1-Q4_K_M.gguf
# git lfs ls-files  # optional: verify both GGUFs are LFS-tracked

git commit -m "Add model files"
```

## Step 5: Push

```bash
# Set up your Hugging Face token (first time only)
# Create one at https://huggingface.co/settings/tokens

git push origin main
```
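
If git prompts for credentials during the push, the access token is what it expects as the password. A minimal sketch for caching the token locally, assuming the `huggingface_hub` package is installed (the `login()` call below is one option, not a required step):

```python
# Store the Hugging Face token so HTTPS pushes can reuse it.
from huggingface_hub import login

# Prompts for the token created at https://huggingface.co/settings/tokens;
# add_to_git_credential=True also registers it with git's credential store.
login(add_to_git_credential=True)
```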

**Note**: The model files are large, so the upload takes a while (~10-20 minutes).

## Step 6: Verify the Deployment

1. Check that the Space page shows the "Building" status
2. After the build finishes, check that it shows "Running"
3. Test the API:

```bash
curl https://YOUR-USERNAME-chatbia-api.hf.space/
```

## Step 7: Confirm the API URL

After deployment, the Space URL is:
```
https://YOUR-USERNAME-chatbia-api.hf.space
```

Use this URL in the Android app!

## Troubleshooting

### Build fails
- Check the error in the Logs tab
- It is usually a requirements.txt problem

### Models fail to load
- Check that Git LFS is set up correctly
- Check that the model files were uploaded completely

### Slow responses
- CPU basic is free but slow
- You can upgrade to a paid GPU (~$0.60/hour)

## Alternative: Test Locally First

```bash
# Local test with Docker
docker build -t chatbia-api .
docker run -p 7860:7860 chatbia-api

# Test
curl http://localhost:7860/
```

## Cost

- **CPU basic**: free ✅
- **CPU upgrade**: $0.03/hour
- **GPU T4**: $0.60/hour
- **GPU A10G**: $3.15/hour

Start with the free CPU!
Dockerfile
ADDED
@@ -0,0 +1,26 @@
FROM python:3.10-slim

WORKDIR /app

# Install system packages
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    git \
    && rm -rf /var/lib/apt/lists/*

# Install Python packages
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the app file
COPY app.py .

# Model files are not baked into the image;
# app.py downloads them from the Hugging Face Hub at startup

# Expose the port
EXPOSE 7860

# Run FastAPI
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md
ADDED
@@ -0,0 +1,129 @@
# ChatBIA Server

A 24/7 accounting AI FastAPI server.

## Local Testing

```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install the packages
pip install -r requirements.txt

# Download the models (put the GGUF files in the models folder)
mkdir models
# Qwen2.5-3B-Instruct-Q4_K_M.gguf
# ChatBIA-3B-v0.1-Q4_K_M.gguf

# Run the server
python main.py
```

Server address: http://localhost:8000

## API Usage

### 1. Health check
```bash
curl http://localhost:8000/
```

### 2. Chat
```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "안녕하세요",
    "mode": "bsl",
    "max_tokens": 1024,
    "temperature": 0.7
  }'
```

### 3. Check model status
```bash
curl http://localhost:8000/models
```

## Oracle Cloud Deployment

### 1. Create a VM
- Always Free: VM.Standard.E2.1.Micro (1 OCPU, 1 GB RAM)
- Or: VM.Standard.A1.Flex (4 OCPU, 24 GB RAM) - ARM

### 2. Configure the Firewall
```bash
# Ubuntu firewall
sudo ufw allow 8000/tcp
sudo ufw enable

# Oracle Cloud security list
# Ingress Rule: 0.0.0.0/0, TCP, Port 8000
```

### 3. Install the Server
```bash
# Copy the project
scp -r ChatBIA-Server ubuntu@<IP>:~/

# Connect over SSH
ssh ubuntu@<IP>

# Install Python
sudo apt update
sudo apt install python3.10 python3-pip -y

# Install the packages
cd ChatBIA-Server
pip3 install -r requirements.txt

# Upload the models (using scp)
# From the local machine:
scp models/*.gguf ubuntu@<IP>:~/ChatBIA-Server/models/
```

### 4. Create a systemd Service
```bash
sudo nano /etc/systemd/system/chatbia.service
```

```ini
[Unit]
Description=ChatBIA FastAPI Server
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/ChatBIA-Server
ExecStart=/usr/bin/python3 main.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

```bash
# Start the service
sudo systemctl daemon-reload
sudo systemctl enable chatbia
sudo systemctl start chatbia

# Check the status
sudo systemctl status chatbia
```

## Resource Usage

- CPU: ~80% (during inference)
- RAM: ~1.5 GB (with both models loaded)
- Disk: ~4 GB (model files)

## Security

- Adding API-key authentication is recommended (see the sketch below)
- Set up HTTPS (Let's Encrypt)
- Consider rate limiting
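
A minimal sketch of the API-key idea, assuming a FastAPI dependency and a `CHATBIA_API_KEY` environment variable (both hypothetical, not part of this commit):

```python
# Hypothetical API-key guard; clients would send an X-API-Key header.
import os
from fastapi import Depends, Header, HTTPException

async def require_api_key(x_api_key: str = Header(default="")):
    expected = os.environ.get("CHATBIA_API_KEY", "")
    if not expected or x_api_key != expected:
        raise HTTPException(status_code=401, detail="Invalid API key")

# Usage: @app.post("/chat", dependencies=[Depends(require_api_key)])
```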
README_HF.md
ADDED
@@ -0,0 +1,53 @@
---
title: ChatBIA API
emoji: 💼
colorFrom: purple
colorTo: pink
sdk: docker
pinned: false
license: mit
---

# ChatBIA API Server

A 24/7 accounting-specialist AI server.

## Features

- 🤖 Dual AI models (General + BSL)
- 💼 Specialized in accounting calculations
- RESTful API
- CORS support

## API Usage

### 1. Health check
```bash
curl https://YOUR-SPACE-NAME.hf.space/
```

### 2. Chat
```bash
curl -X POST https://YOUR-SPACE-NAME.hf.space/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "5천만원 설비를 5년간 감가상각 계산해줘",
    "mode": "bsl",
    "max_tokens": 1024,
    "temperature": 0.7
  }'
```
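
The same call from Python, as a sketch (assumes the `requests` package and your deployed Space URL):

```python
# Hypothetical client snippet; replace YOUR-SPACE-NAME with the real Space subdomain.
import requests

resp = requests.post(
    "https://YOUR-SPACE-NAME.hf.space/chat",
    json={"message": "안녕하세요", "mode": "bsl", "max_tokens": 1024, "temperature": 0.7},
    timeout=300,  # CPU inference on the free tier can take minutes
)
resp.raise_for_status()
print(resp.json()["response"])
```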

### 3. Model status
```bash
curl https://YOUR-SPACE-NAME.hf.space/models
```

## Models

- **General**: Qwen2.5-3B-Instruct-Q4_K_M
- **BSL**: ChatBIA-3B-v0.1-Q4_K_M

## License

MIT
app.py
ADDED
@@ -0,0 +1,200 @@
"""
ChatBIA Hugging Face Spaces API
24/7 accounting AI server
"""
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import Optional
import os
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

app = FastAPI(
    title="ChatBIA API",
    description="Accounting-specialist AI server (Hugging Face Spaces)",
    version="1.0.0"
)

# CORS settings
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Hugging Face model repositories
GENERAL_MODEL_REPO = "Qwen/Qwen2.5-3B-Instruct-GGUF"
GENERAL_MODEL_FILE = "qwen2.5-3b-instruct-q4_k_m.gguf"
BSL_MODEL_REPO = "Seounghyup/ChatBIA-3B-v0.1"
BSL_MODEL_FILE = "ChatBIA-3B-v0.1-Q4_K_M.gguf"

# Global model handles
general_model = None
bsl_model = None
general_model_path = None
bsl_model_path = None


class ChatRequest(BaseModel):
    message: str
    mode: str = "bsl"  # "general" or "bsl"
    max_tokens: int = 1024
    temperature: float = 0.7


class ChatResponse(BaseModel):
    response: str
    mode: str
    tokens: int


@app.on_event("startup")
async def load_models():
    """Download and load the models at server startup"""
    global general_model, bsl_model, general_model_path, bsl_model_path

    # Download the General model
    print(f"Downloading the general-mode model: {GENERAL_MODEL_REPO}/{GENERAL_MODEL_FILE}")
    try:
        general_model_path = hf_hub_download(
            repo_id=GENERAL_MODEL_REPO,
            filename=GENERAL_MODEL_FILE,
            repo_type="model"
        )
        print(f"✅ General-mode model downloaded: {general_model_path}")

        # Load the model
        general_model = Llama(
            model_path=general_model_path,
            n_ctx=2048,
            n_threads=2,  # Spaces CPU limit
            n_gpu_layers=0,
            verbose=False
        )
        print("✅ General-mode model loaded")
    except Exception as e:
        print(f"❌ Failed to load the general-mode model: {e}")

    # Download the BSL model
    print(f"Downloading the BSL-mode model: {BSL_MODEL_REPO}/{BSL_MODEL_FILE}")
    try:
        bsl_model_path = hf_hub_download(
            repo_id=BSL_MODEL_REPO,
            filename=BSL_MODEL_FILE,
            repo_type="model"
        )
        print(f"✅ BSL-mode model downloaded: {bsl_model_path}")

        # Load the model
        bsl_model = Llama(
            model_path=bsl_model_path,
            n_ctx=2048,
            n_threads=2,
            n_gpu_layers=0,
            verbose=False
        )
        print("✅ BSL-mode model loaded")
    except Exception as e:
        print(f"❌ Failed to load the BSL model: {e}")


def build_prompt(message: str, mode: str) -> str:
    """Build the prompt"""
    if mode == "bsl":
        return f"""<|im_start|>system
You are a professional accounting AI assistant. Respond naturally in Korean.

Important: Only generate BSL DSL code when the user explicitly requests calculations (e.g., "계산해줘", "코드 작성해줘", "BSL로 작성해줘"). For general questions or greetings, respond conversationally without code.<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
"""
    else:
        return f"""<|im_start|>system
You are a helpful AI assistant. Respond naturally in Korean.<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
"""


@app.get("/")
async def root():
    """Health check"""
    return {
        "status": "online",
        "service": "ChatBIA API",
        "version": "1.0.0",
        "platform": "Hugging Face Spaces",
        "models": {
            "general": general_model is not None,
            "bsl": bsl_model is not None
        }
    }


@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    """Chat endpoint"""
    # Select the model
    if request.mode == "general":
        model = general_model
        model_name = "General"
    else:
        model = bsl_model
        model_name = "BSL"

    # Error out if the model is not loaded
    if model is None:
        raise HTTPException(
            status_code=503,
            detail=f"The {model_name} model is not loaded."
        )

    try:
        # Build the prompt
        prompt = build_prompt(request.message, request.mode)

        # Run inference
        response = model(
            prompt,
            max_tokens=request.max_tokens,
            temperature=request.temperature,
            top_p=0.9,
            top_k=40,
            repeat_penalty=1.1,
            stop=["<|im_end|>", "###", "\n\n\n"]
        )

        text = response["choices"][0]["text"].strip()
        # Rough token count (whitespace split), not the model tokenizer's count
        tokens = len(response["choices"][0]["text"].split())

        return ChatResponse(
            response=text,
            mode=request.mode,
            tokens=tokens
        )

    except Exception as e:
        raise HTTPException(
            status_code=500,
            detail=f"Error while running the AI model: {str(e)}"
        )


@app.get("/models")
async def get_models():
    """List the available models"""
    return {
        "general": {
            "loaded": general_model is not None,
            "path": general_model_path
        },
        "bsl": {
            "loaded": bsl_model is not None,
            "path": bsl_model_path
        }
    }
main.py
ADDED
@@ -0,0 +1,193 @@
"""
ChatBIA FastAPI Server
24/7 accounting AI server
"""
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from typing import Optional, List
import os
from llama_cpp import Llama

app = FastAPI(
    title="ChatBIA API",
    description="Accounting-specialist AI server",
    version="1.0.0"
)

# CORS settings (allows access from the Android app and the web)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Model paths
MODEL_DIR = "models"
GENERAL_MODEL_PATH = os.path.join(MODEL_DIR, "Qwen2.5-3B-Instruct-Q4_K_M.gguf")
BSL_MODEL_PATH = os.path.join(MODEL_DIR, "ChatBIA-3B-v0.1-Q4_K_M.gguf")

# Global model handles
general_model = None
bsl_model = None


class ChatRequest(BaseModel):
    message: str
    mode: str = "bsl"  # "general" or "bsl"
    max_tokens: int = 1024
    temperature: float = 0.7


class ChatResponse(BaseModel):
    response: str
    mode: str
    tokens: int


@app.on_event("startup")
async def load_models():
    """Load the models at server startup"""
    global general_model, bsl_model

    os.makedirs(MODEL_DIR, exist_ok=True)

    # Load the General model
    if os.path.exists(GENERAL_MODEL_PATH):
        print(f"Loading the general-mode model: {GENERAL_MODEL_PATH}")
        try:
            general_model = Llama(
                model_path=GENERAL_MODEL_PATH,
                n_ctx=2048,
                n_threads=4,
                n_gpu_layers=0,  # Oracle Cloud runs on CPU
                verbose=False
            )
            print("✅ General-mode model loaded")
        except Exception as e:
            print(f"❌ Failed to load the general-mode model: {e}")

    # Load the BSL model
    if os.path.exists(BSL_MODEL_PATH):
        print(f"Loading the BSL-mode model: {BSL_MODEL_PATH}")
        try:
            bsl_model = Llama(
                model_path=BSL_MODEL_PATH,
                n_ctx=2048,
                n_threads=4,
                n_gpu_layers=0,
                verbose=False
            )
            print("✅ BSL-mode model loaded")
        except Exception as e:
            print(f"❌ Failed to load the BSL-mode model: {e}")


def build_prompt(message: str, mode: str) -> str:
    """Build the prompt"""
    if mode == "bsl":
        return f"""<|im_start|>system
You are a professional accounting AI assistant. Respond naturally in Korean.

Important: Only generate BSL DSL code when the user explicitly requests calculations (e.g., "계산해줘", "코드 작성해줘", "BSL로 작성해줘"). For general questions or greetings, respond conversationally without code.<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
"""
    else:
        return f"""<|im_start|>system
You are a helpful AI assistant. Respond naturally in Korean.<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
"""


@app.get("/")
async def root():
    """Health check"""
    return {
        "status": "online",
        "service": "ChatBIA API",
        "version": "1.0.0",
        "models": {
            "general": general_model is not None,
            "bsl": bsl_model is not None
        }
    }


@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    """Chat endpoint"""
    # Select the model
    if request.mode == "general":
        model = general_model
        model_name = "General"
    else:
        model = bsl_model
        model_name = "BSL"

    # Error out if the model is not loaded
    if model is None:
        raise HTTPException(
            status_code=503,
            detail=f"The {model_name} model is not loaded."
        )

    try:
        # Build the prompt
        prompt = build_prompt(request.message, request.mode)

        # Run inference
        response = model(
            prompt,
            max_tokens=request.max_tokens,
            temperature=request.temperature,
            top_p=0.9,
            top_k=40,
            repeat_penalty=1.1,
            stop=["<|im_end|>", "###", "\n\n\n"]
        )

        text = response["choices"][0]["text"].strip()
        # Rough token count (whitespace split), not the model tokenizer's count
        tokens = len(response["choices"][0]["text"].split())

        return ChatResponse(
            response=text,
            mode=request.mode,
            tokens=tokens
        )

    except Exception as e:
        raise HTTPException(
            status_code=500,
            detail=f"Error while running the AI model: {str(e)}"
        )


@app.get("/models")
async def get_models():
    """List the available models"""
    return {
        "general": {
            "loaded": general_model is not None,
            "path": GENERAL_MODEL_PATH if os.path.exists(GENERAL_MODEL_PATH) else None
        },
        "bsl": {
            "loaded": bsl_model is not None,
            "path": BSL_MODEL_PATH if os.path.exists(BSL_MODEL_PATH) else None
        }
    }


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(
        "main:app",
        host="0.0.0.0",
        port=8000,
        reload=False  # keep reload=False in production
    )
requirements.txt
ADDED
@@ -0,0 +1,6 @@
fastapi==0.115.0
uvicorn[standard]==0.30.6
pydantic==2.9.2
llama-cpp-python==0.2.90
python-multipart==0.0.9
huggingface_hub==0.25.1  # used by app.py for model downloads