Project
TritonTube – Distributed Video Hosting System
UC San Diego · Go · Distributed Systems
TritonTube is a distributed video hosting platform that simulates how large-scale systems like YouTube manage and stream content across multiple nodes. Consistent hashing distributes video chunks evenly across storage servers, so when nodes are added or removed, only a small portion of the data needs to move.
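The placement idea above can be sketched as a minimal consistent-hash ring in Go. Node addresses, key names, and the FNV hash are illustrative assumptions, not TritonTube's actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring: nodes and keys are hashed
// onto the same circle, and a key belongs to the first node clockwise.
type Ring struct {
	hashes []uint32          // sorted node hashes
	nodes  map[uint32]string // node hash -> node address
}

func hashKey(s string) uint32 {
	h := fnv.New32a() // FNV is an assumption; any stable hash works
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(nodes []string) *Ring {
	r := &Ring{nodes: make(map[uint32]string)}
	for _, n := range nodes {
		h := hashKey(n)
		r.hashes = append(r.hashes, h)
		r.nodes[h] = n
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Locate returns the node responsible for a chunk key.
func (r *Ring) Locate(key string) string {
	h := hashKey(key)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	ring := NewRing([]string{"node-a:8080", "node-b:8080", "node-c:8080"})
	fmt.Println(ring.Locate("video123_chunk0"))
}
```

Because only keys that fall between a departing node and its neighbor change owners, adding or removing a node moves roughly 1/N of the data rather than reshuffling everything.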
Built as a solo project for CSE 124: Networked Services at UC San Diego. Due to academic integrity policy, the codebase isn’t public, but you can request temporary access for review.
Technologies & role
- Go, gRPC, ffmpeg
- SQLite for metadata storage
- Consistent hashing for data placement
- HTML/CSS templated UI
Role: Solo developer – designed the architecture, implemented the backend services and client, and built the UI.
Architecture
The system consists of a frontend for uploads and playback, a set of storage nodes exposed via gRPC, and a coordinator that manages node membership and consistent hashing. Video files are segmented with ffmpeg and stored as chunks across the cluster, which enables fault tolerance and efficient streaming.
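The segmentation step above can be sketched as an ffmpeg invocation built in Go. The paths, segment length, and exact flags are illustrative assumptions; TritonTube's real pipeline may use different options:

```go
package main

import (
	"fmt"
	"os/exec"
)

// segmentCmd builds an ffmpeg command that splits a video into
// MPEG-DASH segments plus a manifest, without re-encoding.
// Flag choices here are a sketch, not TritonTube's exact invocation.
func segmentCmd(input, manifest string, segSeconds int) *exec.Cmd {
	args := []string{
		"-i", input,
		"-c", "copy", // copy streams to avoid re-encoding
		"-f", "dash", // use ffmpeg's DASH muxer
		"-seg_duration", fmt.Sprint(segSeconds),
		manifest, // e.g. out/manifest.mpd
	}
	return exec.Command("ffmpeg", args...)
}

func main() {
	cmd := segmentCmd("upload.mp4", "out/manifest.mpd", 4)
	fmt.Println(cmd.Args) // inspect the command without running ffmpeg
}
```

The resulting segment files are what the coordinator then spreads across storage nodes via the consistent-hash ring.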
Key features
- Consistent hashing: ensures smooth data redistribution when scaling nodes up or down.
- MPEG-DASH style playback: ffmpeg-based segmentation for chunked video, improving perceived performance and allowing partial loading.
- Modular metadata layer: metadata is abstracted so the backend can be swapped between SQLite and other data stores.
- Dynamic scaling: nodes can be added or removed at runtime with minimal impact on availability.
- Intuitive UI: simple upload and playback interface so the distributed system details stay behind the scenes.
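The modular metadata layer above can be sketched as a small Go interface with an in-memory stand-in. The method names and the `memStore` type are hypothetical; the point is that a SQLite-backed implementation and any other store satisfy the same contract:

```go
package main

import "fmt"

// MetadataStore abstracts video metadata so the backing store can be
// swapped (e.g. SQLite vs. another database). Method names are a sketch.
type MetadataStore interface {
	SaveVideo(id string, chunks []string) error
	ListChunks(id string) ([]string, error)
}

// memStore is an in-memory implementation, handy for tests.
type memStore struct {
	videos map[string][]string
}

func newMemStore() *memStore {
	return &memStore{videos: map[string][]string{}}
}

func (m *memStore) SaveVideo(id string, chunks []string) error {
	m.videos[id] = chunks
	return nil
}

func (m *memStore) ListChunks(id string) ([]string, error) {
	c, ok := m.videos[id]
	if !ok {
		return nil, fmt.Errorf("video %q not found", id)
	}
	return c, nil
}

func main() {
	var store MetadataStore = newMemStore() // swap in a SQLite store here
	store.SaveVideo("video1", []string{"chunk0", "chunk1"})
	chunks, _ := store.ListChunks("video1")
	fmt.Println(chunks)
}
```

Coding the rest of the system against the interface rather than a concrete store is what makes the backend swappable.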
Demo
Uploading a video
Uploading is straightforward: select a file in the top-right of the UI and press “Upload.” The server segments the video and distributes chunks across nodes.
During playback, only the needed chunks are fetched, which reduces bandwidth and speeds up start time.
Responsive UI
The homepage and video player adapt to different screen sizes, keeping controls readable and easy to use.
Handling node failures
When a node is taken down, playback continues because data is replicated across multiple nodes. This simulates real-world hardware failures and shows how redundancy maintains availability.
What I learned
This project deepened my understanding of how large-scale video platforms manage data distribution and performance. I implemented a distributed video service using consistent hashing to keep storage balanced and fault-tolerant, and configured custom gRPC limits plus ffmpeg integration for real-world constraints on video size and throughput.
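The custom gRPC limits mentioned above boil down to raising gRPC's default 4 MB message cap so a whole video chunk fits in one message. This is a configuration sketch assuming `google.golang.org/grpc`; the 32 MiB value is illustrative, not TritonTube's actual setting:

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

const maxChunkBytes = 32 << 20 // 32 MiB; illustrative value

func main() {
	lis, err := net.Listen("tcp", ":0")
	if err != nil {
		log.Fatal(err)
	}
	// Server side: accept and send messages larger than the 4 MB default.
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(maxChunkBytes),
		grpc.MaxSendMsgSize(maxChunkBytes),
	)
	go srv.Serve(lis)
	defer srv.Stop()
	// A client would pair this with
	// grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxChunkBytes)).
}
```

Both sides of a connection need matching limits, since whichever end enforces the smaller cap will reject oversized chunks.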
From an architectural perspective, I designed a modular, service-oriented system that mirrors production-grade distributed infrastructures. Each component can operate independently while still composing into a coherent platform. I also developed a frontend using Go templates, which keeps the UI simple while wiring it directly into backend flows.
Request code access
If you would like temporary access to the TritonTube codebase for professional or review purposes, please fill out the form below: