ViewTube

2,111 results

Use Local LLMs Already! (56:31)
The Art Of The Terminal · 99,091 views · 2 months ago
LLM providers are not safe for you ‼️ There are 4 reasons to switch 100% local I'm sharing my experience using local GenAI ...

What is Ollama? Running Local LLMs Made Simple (7:14)
IBM Technology · 220,862 views · 11 months ago
Ready to become a certified watsonx AI Assistant Engineer? Register now and use code IBMTechYT20 for 20% off of your exam ...

THIS is the REAL DEAL 🤯 for local LLMs (11:03)
Alex Ziskind · 496,310 views · 6 months ago
This is the stack that gets me over 4000 tokens per second locally. Download Docker Desktop here: https://dockr.ly/4mOdGMO to ...

Run Local LLMs on Hardware from $50 to $50,000 - We Test and Compare! (15:05)
Dave's Garage · 369,606 views · 1 year ago
Dave tests llama3.1 and llama3.2 using Ollama on a Raspberry Pi, a Herk Orion Mini PC, a 3970X, an M2 Mac Pro, and a ...

The Ultimate Local AI Coding Guide For 2026 (36:03)
Zen van Riel · 210,068 views · 4 months ago
Get my FREE local AI projects: https://zenvanriel.com/open-source ⚡ Master AI and become a high-paid AI Engineer: ...

Your local LLM is 10x slower than it should be (11:02)
Alex Ziskind · 112,961 views · 1 month ago
Here's the one change that took mine from ~120 tok/s to 1200+ without a new GPU. TryHackMe just launched Cyber Security 101 ...

Can a Local LLM REALLY be your daily coder? Framework Desktop with GLM 4.5 Air and Qwen 3 Coder (17:43)
GosuCoder · 54,106 views · 6 months ago
With the arrival of my new Framework Desktop I decided to move to coding just with Local LLM's without touching any Claude, ...

host ALL your AI locally (24:20)
NetworkChuck · 2,446,189 views · 1 year ago
This video was originally sponsored by ITProTV. We've since launched NetworkChuck Academy, our own place to learn IT: ...

FREE Local LLMs on Apple Silicon | FAST! (15:09)
Alex Ziskind · 310,779 views · 1 year ago
Step by step setup guide for a totally local LLM with a ChatGPT-like UI, backend and frontend, and a Docker option.

I tested 17 Uncensored Local LLMs (6:01)
BlueSpork · 48,153 views · 1 month ago
An evaluation of 17 Q4 quantized uncensored local models, ranging from 8B to 32B, tested against seven different prompts and ...

Local AI just leveled up... Llama.cpp vs Ollama (14:41)
Alex Ziskind · 203,696 views · 3 months ago
Llama.cpp Web UI + GGUF Setup Walkthrough and Ollama comparisons. Check out ChatLLM: https://chatllm.abacus.ai/ltf My ...

OpenClaw with Local LLM (7:42)
Samuel Gregory · 38,796 views · 1 month ago
Fancy running Moltbot with local LLMs? This video guides you through setting it up with Ollama, covering model selection, context ...

If you don’t run AI locally you’re falling behind… (28:17)
David Ondrej · 189,638 views · 4 months ago
Learn about the best AI Business models here - https://www.youtube.com/watch?v=Ta5g-OxjPO4 Wanna start a business with AI ...

Windows Handles Local LLMs… Before Linux Destroys It (13:05)
Alex Ziskind · 84,686 views · 9 months ago
I ran the same models on Windows, WSL, and full Linux, and the winner wasn't even close.

Local LLMs Hardware - Apple vs Nvidia (33:04)
0xSero · 15,573 views · 1 month ago
A full breakdown: https://x.com/0xSero/status/1995816476276662431?s=20.

How to Run LLMs Locally - Full Guide (16:07)
Tech With Tim · 77,028 views · 2 months ago
Click this link https://boot.dev/?promo=TECHWITHTIM and use my code TECHWITHTIM to get 25% off your first payment for ...

The HARD Truth About Hosting Your Own LLMs (14:43)
Cole Medin · 58,119 views · 1 year ago
Hosting your own LLMs like Llama 3.1 requires INSANELY good hardware - often times making running your own LLMs ...

How to run uncensored AI locally | dolphin 3 LLM Ollama (2:59)
SuccessPursuitZone · 62,421 views · 3 months ago
ai #dolphin #uncensoredai #llm How to run uncensored AI locally | dolphin 3 LLM Ollama In this video, I show you how to ...