Phil
phil111

AI & ML interests: None yet

Recent Activity
new activity 1 day ago in Zyphra/ZAYA1-8B: What on earth did you guys do
new activity 8 days ago in deepseek-ai/DeepSeek-V4-Flash: Too big to run locally.
new activity 9 days ago in deepseek-ai/DeepSeek-V4-Pro: 16 - 24B models with FP8 quantization

Organizations: None yet
What on earth did you guys do
🔥 6 · 16
#6 opened 4 days ago by amarosnithe

Too big to run locally.
🤯 9 · 19
#12 opened 19 days ago by Dampfinchen

16 - 24B models with FP8 quantization
4 · 6
#152 opened 17 days ago by Duonglv

Is it really better in real world task?
1 · 9
#50 opened 20 days ago by BornSaint

Why this release?
🤯 8 · 4
#3 opened 20 days ago by neoOpus

This LLM is a test maxer, not a general purpose AI model.
1 · 21
#17 opened 29 days ago by phil111

IQ4_XS vs. Q5_K_P
11
#15 opened about 1 month ago by Data-vanOrtus

I find these models interesting, but have a couple thoughts.
1
#1 opened 26 days ago by phil111

Your 260k dictionary is breaking Gemma 4's back.
8
#25 opened 29 days ago by phil111

Gemma 4 E4B will be as encyclopedically well-read as the 12b model?
3
#48 opened about 1 month ago by Regrin

Impressive skills for its size, but this doesn't belong on normal peoples' phones.
#10 opened about 1 month ago by phil111

Gemma 4, Compared to G3, Has Notable Improvements And Regressions
5
#10 opened about 1 month ago by phil111

Good Work. Please consider addressing factual hallucinations.
1 · 3
#6 opened about 1 month ago by phil111

Thanks. This is by far the best denial stripping I've ever seen.
3 · 2
#30 opened about 1 month ago by phil111

More reserved settings and standard Q4_K_M still works best for me.
10
#43 opened about 2 months ago by phil111

There's got to be a better way.
23
#6 opened about 2 months ago by phil111