Xah Talk Show 2025-04-27 Ep648 AI Bot LLM Tech, Deepseek and Tea

chatbot on herbal tea

xah talk show 2025-04-27 tea emacs gptel ollama deepseek 30f62
xah talk show 2025-04-27 herbal tea 317e4
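The demo uses gptel in emacs talking to a local ollama server running deepseek. For reference, here is a minimal Python sketch of sending the same kind of herbal tea question to ollama's HTTP API directly; port 11434 is ollama's default, and the model name deepseek-r1:8b is just an assumption, use whatever model you have pulled.

import json
import urllib.request

# ask a locally running ollama server (default port 11434) a question
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "deepseek-r1:8b",  # assumed model name; any pulled model works
        "prompt": "which herbal teas are traditionally used for relaxation?",
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])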

ai chatbot llm neural network tech

xah talk show 2025-04-27 ollama 3179d
xah talk show 2025-04-27 ollama command 31867
xah talk show 2025-04-27 llm quantized 318b6
xah talk show 2025-04-27 grok deepseek thinking 318fc
xah talk show 2025-04-27 deepseek r1 31729
https://youtu.be/0VLAoVGf_74

asking grok

  • what does quantized mean in llm (see the sketch after this list)
  • how to turn off thinking in deepseek
  • what is 332836308 * 746818467768253
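
On the first question, a rough illustration of what quantized means: store the model's weights at lower precision (for example 8-bit integers plus a scale factor) instead of 32-bit floats, trading a little accuracy for much less memory. A minimal Python sketch, assuming simple symmetric per-tensor int8 quantization; real LLM quantization schemes (4-bit GGUF variants etc) are more elaborate.

import numpy as np

# toy weight matrix in float32 (4 bytes per value)
weights = np.random.randn(4, 4).astype(np.float32)

# symmetric int8 quantization: one scale factor for the whole tensor
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)   # stored as 1 byte per value

# dequantize back to float for use; values are close but not exact
restored = q.astype(np.float32) * scale
print("max error:", np.abs(weights - restored).max())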

answer

xah talk show 2025-04-27 grok multiply 36545
print(332836308 * 746818467768253)
# 248568301558202328129924

# grok's answer to "what is 332836308 * 746818467768253"
# 248573315760413104614300474
# incorrect

332836308 * 746818467768253
(* 248568301558202328129924 *)

emacs vs vscode

xah talk show 2025-04-27 emacs vs vscode 314cf
xah talk show 2025-04-27 vscode 3146a

ollama, deepseek