# Documentation
Complete technical reference for Statamic Video Tools.
## Installation

Install via Composer:

```
composer require eminos/statamic-video-tools
```

Publish the config file:

```
php artisan vendor:publish --tag=statamic-video-tools-config
```
Configure FFmpeg paths — the simplest approach is via environment variables in `.env`:

```
FFMPEG_BINARIES=/absolute/path/to/ffmpeg
FFPROBE_BINARIES=/absolute/path/to/ffprobe
```
If you used `php artisan video-tools:download-ffmpeg` (the recommended approach), the binaries land at `storage/app/ffmpeg/ffmpeg` and `storage/app/ffmpeg/ffprobe`. Set your `.env` accordingly:

```
FFMPEG_BINARIES=/var/www/your-site/storage/app/ffmpeg/ffmpeg
FFPROBE_BINARIES=/var/www/your-site/storage/app/ffmpeg/ffprobe
```
If FFmpeg is installed system-wide (e.g. via apt install ffmpeg), the default values (ffmpeg and ffprobe) work without any .env change.
Publishing the laravel-ffmpeg config file is not required — the env vars above are sufficient. Only publish it if you need to tune advanced options:

```
php artisan vendor:publish --provider="ProtoneMedia\LaravelFFMpeg\Support\ServiceProvider"
```
CPU thread limit — By default laravel-ffmpeg passes -threads 12 to FFmpeg. On a dedicated encoding server this is fine, but if FFmpeg shares a machine with Nginx, PHP-FPM, and your database, uncapped encoding will starve everything else during a conversion job.
After publishing the config, set ffmpeg.threads in config/laravel-ffmpeg.php to leave headroom for the rest of the stack:
```php
// config/laravel-ffmpeg.php
'ffmpeg' => [
    'threads' => 4, // tune to your server — a common rule of thumb: half the available cores
    // ...
],
```
A value of 4 works well on most small-to-mid VPS instances (2–8 cores). On a dedicated encoding server you can raise it freely. To check how many cores your server has, run `nproc`.
## Configuration

All configuration lives in `config/video-tools.php`.
### How the Three Levels Work Together
The addon has three distinct levels of preset configuration. Understanding them prevents a lot of confusion:
| Level | Where | Controls |
|---|---|---|
| Global presets | `config/video-tools.php` → `presets` | The full catalogue of available encoding recipes. Nothing encodes from here alone. |
| Container presets | `config/video-tools.php` → `containers` | The encoding gate. Determines which presets actually run when a video is uploaded to that container. This is the only thing that triggers encoding. |
| Field presets | Statamic blueprint → Assets field settings | Display filter only. Controls which completed conversions the `{{ video }}` tag renders in the frontend. Has no effect on what gets encoded. |
The mental model: think of global presets as a menu of options, container config as the kitchen deciding what to cook, and field config as the waiter deciding what to bring to the table.
Practical examples:
- Hero section video (short, needs best quality): Container `hero_videos` → presets `[av1_1080p, mp4_1080p]`, `transcription: false`
- Conference talk (long-form, needs captions + adaptive streaming): Container `talks` → presets `[av1_1080p, mp4_1080p, hls_adaptive]`, `transcription: true`
- General uploads (team doesn't need all resolutions in the editor): Container `assets` → presets `[av1_720p, mp4_720p]`; field "Allowed Conversions": `[av1_720p]` ← only AV1 shown in tag output
A container that isn't listed in containers config is silently ignored — no jobs dispatched. A field with no "Allowed Conversions" selected shows all completed conversions.
### Containers

Map each Statamic asset container handle to the list of preset names that should run when a video is uploaded. Only listed containers trigger conversion jobs.

```php
'containers' => [
    'hero_videos' => [
        'presets' => ['av1_1080p', 'mp4_1080p'],
        'transcription' => false, // never transcribe — overrides global setting
    ],
    'talks' => [
        'presets' => ['av1_1080p', 'mp4_1080p'],
        'transcription' => true, // always transcribe — overrides global setting
    ],
    'assets' => [
        'presets' => ['av1_720p', 'mp4_720p'],
        // no 'transcription' key → inherits global transcription.enabled
    ],
],
```
Transcription resolution order: container transcription key (if present) → global transcription.enabled. This lets you enable transcription globally and opt specific containers out, or keep it globally off and opt specific containers in.
### Storage Filesystem

The Laravel filesystem disk where converted files are stored. Defaults to `public`.

```php
'storage_filesystem' => env('VIDEO_TOOLS_STORAGE_FILESYSTEM', 'public'),
```
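For example, to store conversions on a dedicated disk instead, set the env var (the `s3` disk name here is illustrative and must exist in your `config/filesystems.php`):

```
VIDEO_TOOLS_STORAGE_FILESYSTEM=s3
```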
Queue worker timeout: Video encoding jobs can run for several minutes. Make sure your queue worker timeout is high enough — at least `3600` (1 hour) is recommended:

```
php artisan queue:work --timeout=3600
```

With Horizon, set `timeout` in your `config/horizon.php` supervisor config:

```php
'supervisor-1' => [
    'timeout' => 3600,
    // ...
],
```

The default timeout (60s) will cause long encodes to fail and retry repeatedly.
Custom VP9 presets: If you add VP9 presets, use constant-quality mode with `-b:v 0 -crf 30 -cpu-used 4`. The `-cpu-used 4` flag is critical — without it VP9 encoding is extremely slow. Lower values (0–3) improve quality at the cost of speed; higher values (5–8) are faster but noticeably lower quality.
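A matching preset entry might look like this, following the options-string preset format used elsewhere in this config (the name and 1080p scale are illustrative; `-row-mt 1` enables libvpx's row-based multithreading):

```php
[
    'name' => 'vp9_1080p', // illustrative name — choose your own
    'extension' => 'webm',
    'options' => '-c:v libvpx-vp9 -b:v 0 -crf 30 -cpu-used 4 -row-mt 1 -vf scale=-2:1080 -c:a libopus -b:a 128k',
],
```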
### Detailed Logging

Enable verbose info/debug logging for conversion jobs. Error logs are always on.

```php
'enable_detailed_logging' => env('VIDEO_TOOLS_DETAILED_LOGGING', env('APP_ENV') !== 'production'),
```
### Poster

Controls automatic poster frame extraction. A single JPEG is generated per video (not per preset) and stored at full source resolution. Resizing happens on demand via Glide when the `{{ video }}` tag renders the poster attribute.

```php
'poster' => [
    'enabled' => true,
    'at' => 1,       // Frame position: float seconds (1.5) or timecode string ('00:00:01')
    'quality' => 90, // JPEG quality 1-100
],
```

Set `'enabled' => false` to disable poster generation entirely.
### CP Thumbnail

Controls video thumbnails in the Statamic Control Panel asset browser and asset editor.

```php
'cp_thumbnail' => [
    'enabled' => true,   // false = videos show the default "no preview" icon
    'animated' => false, // true = animated WebP loop instead of static poster JPEG
],
```
When animated is false (default), the static poster is served through Glide at Statamic's built-in cp_thumbnail_small size (400×400 contain).
When animated is true, an animated WebP is generated via FFmpeg — ~10 evenly-spaced frames, 400×400, cycling at ~2fps. Because FFmpeg handles the sizing and encoding directly, this works on any server regardless of your PHP image driver (GD, ImageMagick, or libvips).
### Transcription

Controls automatic AI transcription using whisper.cpp.

```php
'transcription' => [
    'enabled' => false,
    'binary' => env('WHISPER_BINARY', storage_path('app/whisper/whisper-cpp')),
    'model' => env('WHISPER_MODEL', storage_path('app/whisper/models/ggml-base.bin')),
    'language' => 'auto', // 'auto' = detect language, or ISO code e.g. 'en', 'sv', 'de'
    'formats' => ['vtt', 'srt', 'txt', 'transcript', 'json'],
    'timeout' => env('WHISPER_TIMEOUT', 3600), // max seconds whisper-cli may run (0 = unlimited)
    'prompt' => '', // brand names / proper nouns — string or PromptResolver class
    'prompt_fields' => ['alt'], // asset fields auto-prepended to prompt ([] to disable)
    'substitutions' => [], // post-processing corrections: 'Correct' => ['wrong1', 'wrong2']
    'extra_args' => [], // any additional whisper-cli flags (see below)
],
```

Set `'enabled' => true` to transcribe videos on upload. Use `php artisan video-tools:download-whisper` to download the binary and model. See AI Transcription for full details.
### Presets

Presets define the encoding settings for each conversion. Each preset requires a unique name. Multiple preset types are supported.

Output paths:

```
# Regular conversions
conversions/{original_filename}/{original_filename}_{preset_name}.{ext}

# HLS playlists and segments
conversions/{original_filename}/{preset_name}/{original_filename}_{preset_name}.m3u8
conversions/{original_filename}/{preset_name}/{original_filename}_{preset_name}_*.ts
```
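Concretely, an upload named `hero.mp4` run through an `mp4_1080p` preset (which declares the `mp4` extension) would produce:

```
conversions/hero/hero_mp4_1080p.mp4
```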
#### Options String Presets

Pass raw FFmpeg flags via the `options` key:

```php
[
    'name' => 'mp4_1080p',
    'extension' => 'mp4',
    'options' => '-c:v libx264 -preset medium -crf 23 -vf scale=-2:1080 -c:a aac -b:a 128k -movflags +faststart',
],
```
#### Handler Class Presets

For complex encoding logic, implement `PresetHandler` and reference it via `handler`:

```php
[
    'name' => 'custom_encode',
    'extension' => 'mp4',
    'handler' => App\VideoPresets\MyCustomPreset::class,
],
```

```php
use Eminos\StatamicVideoTools\Contracts\PresetHandler;
use ProtoneMedia\LaravelFFMpeg\Filesystem\Media;

class MyCustomPreset implements PresetHandler
{
    public function apply(Media $media, array $arguments = []): Media
    {
        return $media->addFilter(['-c:v', 'libx264', '-crf', '23']);
    }
}
```
#### HLS Adaptive Streaming

Use `'type' => 'hls'` with a `streams` array to generate an HLS playlist with multiple quality levels:

```php
[
    'name' => 'hls_adaptive',
    'type' => 'hls',
    'segment_duration' => 6,
    'streams' => [
        ['width' => 1920, 'height' => 1080, 'bitrate' => 3000, 'audio_bitrate' => 128],
        ['width' => 1280, 'height' => 720, 'bitrate' => 1500, 'audio_bitrate' => 128],
        ['width' => 854, 'height' => 480, 'bitrate' => 800, 'audio_bitrate' => 96],
    ],
    'conditions' => ['has_alpha = false'],
],
```
The .m3u8 playlist and all .ts segments are placed inside a subdirectory named after the preset (e.g. conversions/{basename}/hls_adaptive/). This means multiple HLS presets per video are fully supported — each gets its own isolated directory.
Note: For short background or hero videos, HLS is overkill. MP4 + WebM covers all browsers without any JavaScript. Use HLS for long-form content where adaptive bitrate switching benefits users on slower connections. In most cases one HLS preset is all you need — the bitrate ladder inside a single preset already handles quality adaptation automatically.
#### Transparent Video (Alpha Channel)

Transparent video — video with an alpha channel — requires two separate files because no single codec works in all browsers:

| Preset | Codec | Browser support |
|---|---|---|
| `alpha_mp4_hevc` | HEVC with alpha (`hvc1` tag) | Safari 13+, iOS 13+ |
| `alpha_webm_av1` | AV1 with alpha (`yuva420p`) | Chrome 70+, Firefox 67+, Edge 79+ |
Always use both presets together. Add both to your container's presets list:
```php
'containers' => [
    'hero_videos' => [
        'presets' => ['alpha_mp4_hevc', 'alpha_webm_av1'],
    ],
],
```
The {{ video }} tag and <x-video> Blade component automatically detect alpha presets. When any alpha_ conversion is completed, non-alpha sources are suppressed — only the alpha formats are rendered. This prevents the browser from playing an opaque fallback instead of the transparent version.
The has_alpha = true condition (set by default on both presets) ensures they only run on video files that actually contain an alpha channel. Uploading a regular opaque video skips both presets.
```php
[
    'name' => 'alpha_mp4_hevc',
    'extension' => 'mp4',
    'options' => '-c:v libx265 -crf 20 -pix_fmt yuva420p -tag:v hvc1 -movflags +faststart',
    'conditions' => ['has_alpha = true'],
],
[
    'name' => 'alpha_webm_av1',
    'extension' => 'webm',
    'options' => '-c:v libsvtav1 -preset 6 -crf 30 -b:v 0 -pix_fmt yuva420p -c:a libopus',
    'conditions' => ['has_alpha = true'],
],
```
Source asset requirements: the source file must have an alpha channel (e.g. exported from After Effects, Motion, or DaVinci Resolve as ProRes 4444 or PNG sequence). A `.mov` file doesn't automatically have alpha — the pixel format must include alpha (`yuva420p`, `rgba`, etc.).
### Conditions

Control whether a preset runs by checking asset properties. All conditions must pass. When they fail, the preset is marked skipped — not failed.

```php
'conditions' => [
    'height > 1080',     // Only for videos taller than 1080px
    'has_alpha = false', // Skip videos with an alpha channel
],
```
Supported properties:
| Property | Type | Description |
|---|---|---|
| `width` | integer | Video width in pixels |
| `height` | integer | Video height in pixels |
| `duration` | float | Duration in seconds |
| `size` | integer | File size in bytes |
| `codec` | string | Video codec name (e.g. `h264`, `vp9`) |
| `pix_fmt` | string | Pixel format (e.g. `yuv420p`, `yuva420p`) |
| `has_alpha` | boolean | Whether the video has an alpha channel |
Supported operators: `>`, `<`, `=`, `==`, `>=`, `<=`, `!=`, `in`

```php
// 'in' operator — value must be a JSON array
'conditions' => ['codec in ["h264","hevc"]'],
```
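To make the semantics concrete, here is an illustrative evaluator in JavaScript. This is a sketch of the behaviour described above, not the addon's actual (PHP) implementation:

```javascript
// Sketch of the documented condition semantics (not the addon's actual code).
// Each condition is "property operator value"; every condition must pass.
function conditionPasses(condition, props) {
  const [, prop, op, raw] = condition.match(/^(\w+)\s*(>=|<=|!=|==|=|>|<|in)\s*(.+)$/);
  let value;
  try { value = JSON.parse(raw); } catch { value = raw; } // numbers, booleans, JSON arrays
  const actual = props[prop];
  switch (op) {
    case '>':  return actual > value;
    case '<':  return actual < value;
    case '>=': return actual >= value;
    case '<=': return actual <= value;
    case '=':
    case '==': return actual === value;
    case '!=': return actual !== value;
    case 'in': return value.includes(actual); // value is a JSON array here
  }
}

// A preset runs only when every one of its conditions passes.
const presetRuns = (conditions, props) => conditions.every(c => conditionPasses(c, props));
```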
Custom condition handler:

```php
'condition_handler' => App\VideoConditions\MyCondition::class,
```

```php
use Eminos\StatamicVideoTools\Contracts\ConditionHandler;
use Statamic\Assets\Asset;

class MyCondition implements ConditionHandler
{
    public function passes(Asset $asset, array $preset): bool
    {
        return $asset->get('encode_video') === true;
    }
}
```
## Poster & Thumbnail Generation
A poster image (JPEG) is extracted from the video once per asset and stored at full source resolution. It serves two purposes:
- `{{ video }}` tag — used as the `poster` attribute on the `<video>` element, with optional on-demand Glide resizing
- CP asset browser — injected as the video thumbnail so assets display a real preview frame instead of a broken image
The poster is stored at:

```
conversions/{original_filename}/{original_filename}_poster.jpg
```

And recorded in asset metadata under the `video_poster` key.
Configuration is in the top-level poster block — see Poster under Configuration.
Animated WebP for the CP is a separate optional output controlled by cp_thumbnail.animated. When enabled, FFmpeg generates a looping animated WebP at 400×400 (Statamic's CP thumbnail size) and stores it under the video_cp_thumbnail key. This replaces the static poster in the asset browser only — the {{ video }} tag always uses the static JPEG poster.
## AI Transcription

### How It Works
Transcription uses whisper.cpp — a standalone C++ binary of OpenAI's Whisper model. No Python, no pip, no cloud API. Download the binary and a model file once with the artisan command, then enable transcription in config.
On upload, a GenerateTranscriptionJob queued job runs whisper.cpp against the source video file, generates all configured output formats in a single pass, and stores them alongside your conversion files.
Setup:
```
# Download the binary and default (base) model
php artisan video-tools:download-whisper

# Or choose a different model size
php artisan video-tools:download-whisper --model=small
php artisan video-tools:download-whisper --model=medium
```
Then set 'enabled' => true in config/video-tools.php under transcription.
Model sizes:
| Model | Size | Speed | Accuracy |
|---|---|---|---|
| `tiny` | ~75 MB | Fastest | Lower |
| `base` | ~150 MB | Fast | Good (default) |
| `small` | ~500 MB | Moderate | Better |
| `medium` | ~1.5 GB | Slow | Very good |
| `large-v2` | ~3.0 GB | Slowest | Best (older release) |
| `large-v3` | ~3.0 GB | Slowest | Best |
| `large-v3-turbo` | ~1.5 GB | Fast | Good (distilled) |
For most content base or small is the right balance. Use large-v3 for high-accuracy subtitles, multi-speaker content, or improved proper noun recognition. large-v3-turbo is a distilled version — faster but loses fine detail like music annotations and natural sentence boundaries.
Queue timeout: The `medium` and `large` models can take several minutes per video. If you see `MaxAttemptsExceededException` in your logs, set `REDIS_QUEUE_RETRY_AFTER=3600` in your `.env` to prevent Horizon from re-queuing jobs it thinks are stuck.
### Vocabulary Hints (Prompt)
Whisper can be given an initial prompt to improve recognition of brand names, product names, proper nouns, and technical vocabulary that the model might otherwise mishear.
Global prompt — applied to every transcription:
```php
// Plain string
'prompt' => 'Pelion, Statamic, kiwikiwi',

// Or a class for dynamic prompts (looks up entries, globals, etc.)
'prompt' => \App\VideoTranscriptionPrompt::class,
```
The class must implement \Eminos\StatamicVideoTools\Contracts\PromptResolver:
```php
use Eminos\StatamicVideoTools\Contracts\PromptResolver;
use Statamic\Assets\Asset;
use Statamic\Facades\GlobalSet;

class VideoTranscriptionPrompt implements PromptResolver
{
    public function resolve(Asset $asset): string
    {
        // Example: pull brand names from a globals set
        return GlobalSet::find('brand')?->inDefaultSite()?->get('keywords', '');
    }
}
```
Per-asset prompt via asset fields — the prompt_fields config automatically prepends asset field values to the prompt. The default is ['alt'], so if an asset has alt text set (e.g. "Pelion brand testimonial video"), that text is included automatically:
```php
'prompt_fields' => ['alt'],          // default — uses alt text
'prompt_fields' => ['alt', 'title'], // multiple fields
'prompt_fields' => [],               // disable auto-fields entirely
```
Per-field prompt — each Assets field in a blueprint can have its own prompt set in the CP blueprint editor. Open the field settings and look for Transcription Prompt under the Video Tools section. This overrides nothing — all sources are combined.
Final prompt order: asset field values → per-field blueprint prompt → global config prompt. Duplicates are removed and all parts are joined with `, `.
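The combination rule can be sketched like this (JavaScript for illustration only; the addon does this in PHP, and the exact dedupe granularity is an assumption here):

```javascript
// Sketch of the documented prompt-combination order: asset field values first,
// then the per-field blueprint prompt, then the global config prompt.
function buildPrompt(fieldValues, fieldPrompt, globalPrompt) {
  const parts = [...fieldValues, fieldPrompt, globalPrompt]
    .map(p => (p || '').trim())
    .filter(Boolean);
  // Duplicates removed, remaining parts joined with ", "
  return [...new Set(parts)].join(', ');
}
```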
### Post-Processing Substitutions
Whisper occasionally misspells brand names, proper nouns, and technical terms — especially with smaller models. The substitutions config applies find-and-replace corrections to all output files after whisper finishes, so the fix is model-independent and guaranteed.
```php
'substitutions' => [
    'Pelion' => ['Pellin', 'Pelean', 'Pellion'],
    'Statamic' => ['Statamik', 'Staticmic'],
],
```
Format: `'CorrectSpelling' => ['misspelling1', 'misspelling2', ...]`. Matching is case-insensitive and whole-word only — `'ion'` won't accidentally match inside `billion`. All five output formats (VTT, SRT, TXT, transcript, JSON) are corrected in the same pass.
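The matching behaviour can be illustrated with a small sketch (JavaScript for illustration; the addon itself does this in PHP):

```javascript
// Illustrative sketch of the substitution behaviour described above:
// case-insensitive, whole-word find-and-replace.
function applySubstitutions(text, substitutions) {
  for (const [correct, wrongs] of Object.entries(substitutions)) {
    for (const wrong of wrongs) {
      // \b word boundaries give whole-word matching; 'gi' = all occurrences, ignore case
      text = text.replace(new RegExp('\\b' + wrong + '\\b', 'gi'), correct);
    }
  }
  return text;
}
```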
### Extra Arguments
Pass any whisper-cli flag directly via extra_args. Each flag and its value must be a separate array entry — the array is merged into the command verbatim, so you have full control over the whisper process without waiting for the addon to expose individual config keys.
```php
'extra_args' => [
    '--suppress-nst',   // strip [MUSIC PLAYING], [APPLAUSE] etc.
    '--max-len', '42',  // max characters per subtitle segment
    '--split-on-word',  // break on word boundaries (use with --max-len)
    '--beam-size', '8', // larger beam search — slower, more accurate
],
```
Full flag reference: whisper.cpp CLI docs
Note: `language`, `prompt`, `formats`, and `substitutions` have dedicated config keys because they involve special handling (PromptResolver class support, format mapping, post-processing). Use `extra_args` for everything else.
### Output Formats
All formats are generated in a single whisper.cpp run at zero extra performance cost. Control which formats are saved via the formats array in config.
| Format | Extension | Description |
|---|---|---|
| `vtt` | `.vtt` | WebVTT subtitles — used for `<track>` in the video tag |
| `srt` | `.srt` | SubRip subtitles — for video editors and external players |
| `txt` | `.txt` | Plain text transcript with no timestamps |
| `transcript` | `.transcript.txt` | Formatted paragraphs with `[H:MM:SS]` markers (generated from JSON) |
| `json` | `.json` | Full word-level data with confidence scores |
Files are stored at:

```
conversions/{original_filename}/{original_filename}_transcription.{ext}
```
### Formatted Transcript
The transcript format is generated in PHP from the raw JSON output. Whisper segments are grouped into paragraphs (new paragraph after a pause of >1.5 seconds or every ~5 segments), with a timestamp marker at the start of each paragraph:
```
[0:00] Welcome to the show. Today we're talking about video encoding
and how it works in modern web development.

[0:42] The key thing to understand is that different codecs serve
different purposes depending on your use case.

[1:15] HLS adaptive streaming is particularly useful for long-form
content where viewers might be on varying connection speeds.
```
This plain text is available as transcript.formatted inside the {{ video }} pair tag (see below).
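The grouping rule can be sketched as follows (JavaScript for illustration; the addon does this in PHP, with the thresholds described above):

```javascript
// Sketch of the documented paragraph-grouping rule: start a new paragraph
// after a pause of more than 1.5 s, or after 5 segments.
function groupSegments(segments, maxGap = 1.5, maxPerParagraph = 5) {
  const paragraphs = [];
  let current = [];
  let prevEnd = null;
  for (const seg of segments) {
    const longPause = prevEnd !== null && seg.start - prevEnd > maxGap;
    if (current.length > 0 && (longPause || current.length >= maxPerParagraph)) {
      paragraphs.push(current);
      current = [];
    }
    current.push(seg);
    prevEnd = seg.end;
  }
  if (current.length > 0) paragraphs.push(current);
  return paragraphs;
}
```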
### HTML Transcript
transcript.html is a server-generated HTML version of the formatted transcript, ready to drop into a template and style with Tailwind child selectors — no client-side parsing needed.
Each paragraph becomes a <div> containing a <button> (the timestamp, with data-seconds for seeking) and a <span> (the text):
```html
<div data-seconds="0">
    <button type="button" data-seconds="0" data-time="0:00">0:00</button>
    <span>Welcome to the show. Today we're talking about video encoding...</span>
</div>
<div data-seconds="42">
    <button type="button" data-seconds="42" data-time="0:42">0:42</button>
    <span>The key thing to understand is that different codecs...</span>
</div>
```
Output it with {{ transcript.html | raw }} (Antlers) or {!! $transcript['html'] !!} (Blade), then add a single delegated click listener to wire up the seek behaviour:
```javascript
const video = document.querySelector('video');

document.getElementById('transcript').addEventListener('click', function (e) {
  const btn = e.target.closest('[data-seconds]');
  if (btn) {
    video.currentTime = parseFloat(btn.dataset.seconds);
    video.play();
  }
});
```
Style the entire transcript from the container using Tailwind child selectors:
```html
<div id="transcript" class="[&>div]:flex [&>div]:gap-3 [&>div]:mb-4
            [&_button]:font-mono [&_button]:text-sm [&_button]:text-white/60
            [&_span]:text-white/85">
    {{ transcript.html | raw }}
</div>
```
## The {{ video }} Tag
Renders a <video> element with optimally ordered <source> tags from an asset's conversions.
### Basic Usage

```
{{ video :asset="hero_video" autoplay muted loop playsinline class="w-full" }}
```
Output:
```html
<video autoplay muted loop playsinline class="w-full" poster="/conversions/hero/hero_poster.jpg">
    <source src="/conversions/hero/hero_av1_1080p.webm" type="video/webm">
    <source src="/conversions/hero/hero_mp4_1080p.mp4" type="video/mp4">
</video>
```
### Short Syntax

Reference an asset field directly:

```
{{ video:hero_video autoplay muted loop }}
```
### Parameters

| Parameter | Description |
|---|---|
| `asset` | Required. The video asset — an Asset object or an asset ID string. |
| `field` | Optional. Handle of the Assets field in the current entry's blueprint. Enables per-field output filtering. |
| `hls` | When present, outputs only the HLS `<source>` (type `application/x-mpegURL`). Falls back to all sources if no HLS conversion exists. |
| `attr` | Space-separated list of boolean HTML attributes to add — e.g. `attr="controls autoplay muted"`. Useful when attribute names are dynamic or come from a variable. |
| `poster:width` | Resize poster width via Glide (pixels) |
| `poster:height` | Resize poster height via Glide (pixels) |
| `poster:quality` | Poster JPEG quality for Glide output (1-100) |
| `poster:fit` | Glide fit mode: `contain`, `max`, `fill`, `stretch`, `crop` |
| `track:kind` | `<track>` kind attribute (default: `subtitles`) |
| `track:default` | Add `default` attribute to the generated `<track>` element (default: false) |
| any others | Passed as HTML attributes on the `<video>` element. Boolean attributes (`autoplay`, `muted`, `loop`, `playsinline`, etc.) are rendered without a value. |
Poster resizing example:
{{ video:hero_video poster:width="1280" poster:quality="80" autoplay muted }}
### Pair Tag Mode

When a closing tag is added, `{{ video }}` switches to pair tag mode: no `<video>` element is rendered automatically — instead, all video data is injected into the inner template context so you build the markup yourself.

```
{{ video :asset="hero_video" }}
    <video controls poster="{{ poster_url }}">
        {{ sources }}
            <source src="{{ url }}" type="{{ type }}">
        {{ /sources }}
    </video>

    {{ if transcript.html }}
        <div id="transcript" class="[&>div]:flex [&>div]:gap-3 [&>div]:mb-4 [&_button]:font-mono [&_span]:text-white/85">
            {{ transcript.html | raw }}
        </div>
    {{ /if }}
{{ /video }}
```
### Variables Available Inside the Pair Tag

| Variable | Description |
|---|---|
| `sources` | Array of `{ url, type }` — loop to render `<source>` elements |
| `poster_url` | Poster image URL (respects `poster:width` / `poster:quality` params) |
| `transcript.language` | Detected language code (e.g. `en`, `sv`) |
| `transcript.html` | Server-generated HTML transcript — ready to render with Tailwind child selectors |
| `transcript.formatted` | Plain text transcript with `[H:MM:SS]` paragraph markers |
| `transcript.formatted_url` | URL to the plain text formatted transcript file |
| `transcript.vtt` | WebVTT file content |
| `transcript.vtt_url` | WebVTT file URL |
| `transcript.srt` | SRT file content |
| `transcript.srt_url` | SRT file URL |
| `transcript.text` | Plain text transcript (no timestamps) |
| `transcript.text_url` | Plain text URL |
| `transcript.json_url` | JSON URL (word-level data with confidence scores) |
Variables are only set for formats that were generated. When no completed transcription exists, transcript is an empty array.
### Source Ordering
Sources are ordered so browsers pick the most efficient codec they support:
- AV1 / WebM — Chrome 70+, Firefox 67+, Edge 79+, Safari 17+
- VP9 / WebM — Chrome 25+, Firefox 28+, Edge 14+ (if you add custom VP9 presets)
- HEVC / MP4 — Safari / iOS (used automatically for alpha presets)
- H.264 / MP4 — universal fallback, all browsers
- HLS playlist — always last; requires the `hls` param on the tag
### Per-Field Output Filtering

When the `field` parameter is set, the tag reads the Assets field's blueprint config to filter sources:

```
{{ video :asset="hero_video" field="hero_video" autoplay muted }}
```
In the blueprint editor, each Assets field gains two extra settings added by this addon:
- **Allowed Conversions** — multi-select of preset names; only selected presets are included in `<source>` output (empty = all). HLS presets are intentionally excluded from this list — use the `hls` param on the tag to get HLS output.
- **Transcription Prompt** — per-field override for the Whisper transcription prompt. Overrides the global prompt in `config/video-tools.php` for assets managed by this field.
These settings control tag output and transcription hints only — they do not affect which conversions run on upload.
### HLS Playback
Safari plays HLS natively. Chrome, Firefox, and Edge on desktop require a JavaScript library. The simplest option is hls.js:
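A minimal wiring following hls.js's standard usage pattern; treat it as a sketch, as the CDN URL, element id, and playlist path are placeholders for your own setup (the path follows the output layout described under Presets):

```html
<video id="player" controls></video>

<script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
<script>
  const video = document.getElementById('player');
  const src = '/conversions/hero/hls_adaptive/hero_hls_adaptive.m3u8'; // your playlist URL

  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = src; // Safari / iOS: native HLS, no library needed
  } else if (Hls.isSupported()) {
    const hls = new Hls();
    hls.loadSource(src);
    hls.attachMedia(video);
  }
</script>
```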
Players like Plyr, Video.js, and Fluid Player also support HLS out of the box.
HLS + subtitles: If you use hls.js alongside VTT transcriptions, be aware that hls.js resets the `<video>` element's text track mode during MediaSource setup — `<track>` elements added to the static HTML may be ignored. Inject them programmatically inside the `Hls.Events.MANIFEST_PARSED` callback instead. Refer to the hls.js docs for details.
### Fallback Behaviour

- If no conversions exist yet, falls back to a `<source>` pointing to the original asset URL.
- If all conversions are filtered out by field config, falls back to the original URL.
- If the tag fails for any reason, returns an empty string — never throws.
### Customising the Template

The self-closing `{{ video }}` tag renders via `resources/views/video.antlers.html`. Publish it to override:

```
php artisan vendor:publish --tag=statamic-video-tools-views
```
The published file lands at resources/views/vendor/statamic-video-tools/video.antlers.html and takes precedence over the addon default. The file contains a full variable reference in the header comment.
## The `<x-video>` Blade Component
A Blade component that produces identical output to the {{ video }} tag. Useful in Blade templates, Livewire components, or anywhere you prefer PHP-side rendering.
### Basic Usage

```blade
<x-video :asset="$heroVideo" controls autoplay muted loop playsinline />
```
Boolean attributes (controls, autoplay, muted, loop, playsinline, disablepictureinpicture, disableremoteplayback) are rendered as bare HTML boolean attributes — not controls="controls".
Any attribute not listed above is passed through to the <video> element as-is:
```blade
<x-video :asset="$heroVideo" controls class="w-full rounded-xl" data-player="true" />
```
### Component Props

These are named component props, not HTML attributes — pass them with `:` binding or plain values:

| Prop | Type | Description |
|---|---|---|
| `asset` | `Asset` \| `string` | The video asset or asset ID string |
| `hls` | bool | Output only the HLS `<source>` (same behaviour as the `hls` param on the Antlers tag) |
| `poster-width` | int | Resize poster width via Glide (pixels) |
| `poster-height` | int | Resize poster height via Glide (pixels) |
| `poster-quality` | int | Poster JPEG quality via Glide (1–100) |
| `poster-fit` | string | Glide fit mode: `contain`, `max`, `fill`, `stretch`, `crop` |
| `track-kind` | string | `<track>` kind attribute (default: `subtitles`) |
| `track-default` | bool | Add `default` attribute to the generated `<track>` (default: false) |
```blade
<x-video :asset="$video" :poster-width="1280" :poster-quality="80" controls />
```
### Transcription Track

When a VTT transcription exists on the asset, a `<track>` element is automatically included:

```html
<video controls>
    <source src="/conversions/..." type="video/webm">
    <source src="/conversions/..." type="video/mp4">
    <track kind="subtitles" src="/conversions/..._transcription.vtt" srclang="en" label="English">
</video>
```
### Customising the Component Name

The component is registered as `<x-video>` by default. Change the name in `config/video-tools.php`:

```php
'blade_component' => 'my-video', // <x-my-video :asset="..." />
'blade_component' => false,      // disable registration entirely
```
### Publishing the Views

To customise the rendered HTML, publish both view templates:

```
php artisan vendor:publish --tag=statamic-video-tools-views
```
This copies two files into your project, where they take precedence over the addon defaults:
| Published path | Used by |
|---|---|
| `resources/views/vendor/statamic-video-tools/video.antlers.html` | `{{ video }}` self-closing Antlers tag |
| `resources/views/vendor/statamic-video-tools/components/video.blade.php` | `<x-video>` Blade component |
Both files contain a full variable reference in the header comment. Edit either (or both) to customise the HTML output — add CSS classes, wrap in a container, change the <track> attributes, etc.
### Using in Antlers

Blade components are fully accessible in Antlers templates via the `<x-...>` syntax:

```blade
<x-video :asset="hero_video" controls autoplay muted />
```
## CP Fieldtype

Add the `video_tools_status` fieldtype to an asset blueprint to see live conversion status in the Control Panel.
Setup:
- Go to Content > Assets > [Your Container] > Edit Blueprint
- Add a field with type Video Tools Status
- Save
### Asset Editor Status Panel
Shown when opening an asset. Displays a status card for each configured preset:
- Status badges: Completed (green), Processing/Pending/Queued (blue), Failed (red), Skipped (yellow), Not Started (gray), File Missing (orange)
- Completed presets: file size, codec, resolution, duration, download button, copy-URL button
- HLS presets: stream count, quality level badges (e.g. `1080p · 3000k`), copy-m3u8-URL button
- Skipped presets: the exact failed condition is shown (e.g. "Skipped because of condition: `height > 1080`")
- Re-run button per preset to manually re-queue a single conversion
- Re-run All button to re-dispatch every preset for the asset
- Live polling — while any preset is in a non-terminal state (pending, processing), the panel polls automatically and updates statuses in real time without a page refresh
### Asset Listing Column
Add the fieldtype as a column in the assets browser for an at-a-glance overview:
- A single colored icon represents the overall state: green checkmark (all done), blue spinner (in progress), red warning (any failed), gray dots (not started)
- Click the icon to open a popover with per-preset status details
### Transcription Panel

When `transcription.enabled` is true, a separate Transcription panel appears below the conversions panel:
- Status badge: Not Generated / Processing / Completed / Failed
- Completed: detected language, one row per generated format (VTT, SRT, TXT, Transcript, JSON) each with a download button and copy-URL button
- Failed: error message from the last attempt
- Generate / Re-generate button to trigger or re-trigger transcription for the asset
## Artisan Commands

### Download Whisper

Download the whisper.cpp binary and a Whisper model file:

```
php artisan video-tools:download-whisper
```
On Linux, this compiles whisper.cpp from source to ensure you always get the latest version. Requires cmake, g++, and git:
```
apt-get install cmake build-essential git
```
On macOS and Windows, a pre-built binary is downloaded from the official ggml-org/whisper.cpp GitHub releases.
Options:
| Option | Description |
|---|---|
| `--model=base` | Model size: `tiny`, `base`, `small`, `medium`, `large-v2`, `large-v3`, `large-v3-turbo` (default: `base`) |
| `--force` | Re-compile/re-download even if the binary/model already exist |
| `--prebuilt` | Linux only: skip compilation and use a pre-built binary instead (no build tools required) |
After downloading, the paths are pre-configured in the default config. To use custom paths, set WHISPER_BINARY and WHISPER_MODEL in your .env.
Download FFmpeg
php artisan video-tools:download-ffmpeg
Downloads static FFmpeg and FFprobe binaries. Options:
| Option | Description |
|---|---|
| `--force` | Force re-download even if binaries already exist |
| `--path=/custom/dir` | Download to a custom directory instead of the default `storage/app/ffmpeg/` |
Process Videos
Re-process existing video assets:
php artisan video-tools:process
Options:
| Option | Description |
|---|---|
| `--container=assets` | Limit to a specific container handle |
| `--preset=mp4_1080p` | Limit to a specific preset name |
| `--force` | Re-dispatch all jobs regardless of existing conversion status |
| `--dry-run` | Preview what would be dispatched without actually dispatching |
Events
Video Tools fires two Laravel events you can listen to in your application's EventServiceProvider (or via Event::listen() in a service provider).
VideoConversionCompleted
Fired when a single preset finishes encoding successfully. Not fired for failed or skipped presets.
Eminos\StatamicVideoTools\Events\VideoConversionCompleted
| Property | Type | Description |
|---|---|---|
| `$asset` | `Statamic\Assets\Asset` | The asset that was encoded |
| `$presetName` | `string` | The preset name, e.g. `av1_1080p` |
| `$conversionData` | `array` | The full conversion metadata written to the asset (url, codec, dimensions, etc.) |
VideoAssetProcessed
Fired once when all tracked jobs for an asset have reached a terminal state — every configured conversion preset (completed, failed, or skipped), plus the poster job (if enabled), plus the transcription job (if enabled). The animated CP thumbnail job is intentionally excluded.
This is the right event to hook into for static cache invalidation.
Eminos\StatamicVideoTools\Events\VideoAssetProcessed
| Property | Type | Description |
|---|---|---|
| `$asset` | `Statamic\Assets\Asset` | The fully processed asset |
The event is detected via metadata inspection rather than an external counter, so it fires correctly for initial uploads, CP-triggered reruns, and artisan batch reprocessing — with no extra configuration.
Listening to Events
Register listeners in your App\Providers\AppServiceProvider (or a dedicated EventServiceProvider):
```php
use Eminos\StatamicVideoTools\Events\VideoAssetProcessed;
use Eminos\StatamicVideoTools\Events\VideoConversionCompleted;
use Illuminate\Support\Facades\Event;

public function boot(): void
{
    // React when a single preset finishes
    Event::listen(VideoConversionCompleted::class, function ($event) {
        logger("Preset {$event->presetName} done for {$event->asset->path()}");
    });

    // React when all processing is complete
    Event::listen(VideoAssetProcessed::class, function ($event) {
        // ...
    });
}
```
Static Cache Integration
If you use Statamic's static cache, hook into VideoAssetProcessed to invalidate pages that embed the processed video. The simplest approach is to flush all cached URLs that contain the asset — typically the entries that reference it via an Assets field.
Flushing the full static cache on completion:
```php
use Eminos\StatamicVideoTools\Events\VideoAssetProcessed;
use Illuminate\Support\Facades\Event;
use Statamic\StaticCaching\StaticCacheManager;

Event::listen(VideoAssetProcessed::class, function ($event) {
    app(StaticCacheManager::class)->flush();
});
```
Invalidating only the pages that reference the asset (more surgical):
```php
use Eminos\StatamicVideoTools\Events\VideoAssetProcessed;
use Illuminate\Support\Facades\Event;
use Statamic\Facades\Entry;
use Statamic\StaticCaching\StaticCacheManager;

Event::listen(VideoAssetProcessed::class, function ($event) {
    $cache = app(StaticCacheManager::class);

    // Find entries that reference this asset and invalidate their URLs
    Entry::all()->each(function ($entry) use ($cache, $event) {
        $content = json_encode($entry->data()->toArray());

        if (str_contains($content, $event->asset->path())) {
            $cache->invalidateUrl($entry->absoluteUrl());
        }
    });
});
```
Note: The surgical approach requires querying all entries on every `VideoAssetProcessed` event. For sites with many entries, the full flush is simpler and fast enough if encoding jobs are infrequent.
Asset Metadata Structure
Conversion results are stored on the asset under the video_conversions key. Poster and CP thumbnail are stored as separate top-level keys.
```php
// Per-preset conversion data (video_conversions)
'mp4_1080p' => [
    'status' => 'completed', // pending | processing | completed | failed | skipped
    'filename' => 'hero_mp4_1080p.mp4',
    'path' => 'conversions/hero/hero_mp4_1080p.mp4',
    'disk' => 'public',
    'url' => 'https://example.com/conversions/hero/hero_mp4_1080p.mp4',
    'size' => 4823142,
    'codec' => 'h264',
    'width' => 1920,
    'height' => 1080,
    'duration' => '12.5',
    'type' => 'video/mp4',
],

// HLS conversion
'hls_adaptive' => [
    'status' => 'completed',
    'type' => 'hls',
    'path' => 'conversions/hero/hls_adaptive/hero_hls_adaptive.m3u8',
    'disk' => 'public',
    'url' => 'https://example.com/conversions/hero/hls_adaptive/hero_hls_adaptive.m3u8',
    'streams' => [...],
    'stream_count' => 3,
    'segment_duration' => 6,
],

// Poster frame (video_poster)
'video_poster' => [
    'path' => 'conversions/hero/hero_poster.jpg',
    'disk' => 'public',
    'url' => 'https://example.com/conversions/hero/hero_poster.jpg',
],

// Animated CP thumbnail — only present when cp_thumbnail.animated = true (video_cp_thumbnail)
'video_cp_thumbnail' => [
    'path' => 'conversions/hero/hero_cp_thumb.webp',
    'disk' => 'public',
    'url' => 'https://example.com/conversions/hero/hero_cp_thumb.webp',
],

// Transcription — only present when transcription.enabled = true and the job completed (video_transcription)
'video_transcription' => [
    'status' => 'completed', // or 'failed'
    'language' => 'en', // detected language code
    'vtt' => [
        'path' => 'conversions/hero/hero_transcription.vtt',
        'disk' => 'public',
        'url' => 'https://example.com/conversions/hero/hero_transcription.vtt',
    ],
    'srt' => [
        'path' => 'conversions/hero/hero_transcription.srt',
        'disk' => 'public',
        'url' => 'https://example.com/conversions/hero/hero_transcription.srt',
    ],
    'txt' => [
        'path' => 'conversions/hero/hero_transcription.txt',
        'disk' => 'public',
        'url' => 'https://example.com/conversions/hero/hero_transcription.txt',
    ],
    'transcript' => [
        'path' => 'conversions/hero/hero_transcription.transcript.txt',
        'disk' => 'public',
        'url' => 'https://example.com/conversions/hero/hero_transcription.transcript.txt',
    ],
    'json' => [
        'path' => 'conversions/hero/hero_transcription.json',
        'disk' => 'public',
        'url' => 'https://example.com/conversions/hero/hero_transcription.json',
    ],
    // Only keys for formats listed in config('video-tools.transcription.formats')
],
```
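In application code, a common need is picking the first usable conversion URL from this metadata. The helper below is a minimal sketch (the function name and the preference-order parameter are hypothetical, not part of the addon's API); it walks a list of preset names and returns the first completed conversion's URL:

```php
// Hypothetical helper: return the URL of the first completed conversion
// among the preferred presets, or null if none is ready. The array shape
// mirrors the video_conversions structure documented above.
function best_conversion_url(array $conversions, array $preferred): ?string
{
    foreach ($preferred as $preset) {
        $data = $conversions[$preset] ?? null;
        if ($data !== null && ($data['status'] ?? null) === 'completed') {
            return $data['url'] ?? null;
        }
    }
    return null;
}

// Sample metadata mirroring the documented structure
$conversions = [
    'av1_1080p' => ['status' => 'failed'],
    'mp4_1080p' => [
        'status' => 'completed',
        'url'    => 'https://example.com/conversions/hero/hero_mp4_1080p.mp4',
    ],
];

echo best_conversion_url($conversions, ['av1_1080p', 'mp4_1080p']);
// prints https://example.com/conversions/hero/hero_mp4_1080p.mp4
```

In a real site you would feed it `$asset->get('video_conversions') ?? []` and order the presets from most to least preferred codec.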
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `VIDEO_TOOLS_STORAGE_FILESYSTEM` | `public` | Filesystem disk for converted files |
| `VIDEO_TOOLS_DETAILED_LOGGING` | `true` in non-production | Enable verbose info/debug logs |
| `WHISPER_BINARY` | `storage/app/whisper/whisper-cpp` | Path to the whisper.cpp binary |
| `WHISPER_MODEL` | `storage/app/whisper/models/ggml-base.bin` | Path to the Whisper model file |
| `WHISPER_TIMEOUT` | `3600` | Max seconds a whisper-cli process may run before being killed (`0` = unlimited) |
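Putting the transcription-related variables together, a minimal `.env` fragment for a self-hosted setup might look like this (the absolute paths assume the download commands' default locations under a hypothetical site root):

```shell
VIDEO_TOOLS_STORAGE_FILESYSTEM=public
WHISPER_BINARY=/var/www/your-site/storage/app/whisper/whisper-cpp
WHISPER_MODEL=/var/www/your-site/storage/app/whisper/models/ggml-base.bin
WHISPER_TIMEOUT=3600
```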