A real-time CPU-GPU hybrid, multi-threaded video processing pipeline provides live mixing of video signals and their output with minimal delay. All processing happens in real time without additional caching, which ensures smooth, frame-accurate rewinding and responsiveness to user actions.
In addition, Screenberry supports content at various frame rates and adaptively plays it back to match the system frame rate.
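Adapting content of one frame rate to a different system frame rate typically means choosing, for each output refresh, which source frame to show. The sketch below illustrates the general idea with nearest-frame selection; the function name and approach are illustrative assumptions, not Screenberry's actual implementation (which may also blend or retime frames).

```python
# Hypothetical sketch: mapping a source frame rate onto a fixed system
# (output) frame rate by picking the nearest source frame for each
# output tick. Names are illustrative, not Screenberry's API.

def source_frame_for_output(output_frame: int,
                            source_fps: float,
                            output_fps: float) -> int:
    """Return the index of the source frame to show on a given output frame."""
    t = output_frame / output_fps      # wall-clock time of this output tick
    return round(t * source_fps)       # nearest source frame at that time

# Playing 24 fps content on a 60 fps system repeats source frames
# in a regular cadence rather than dropping or stuttering.
cadence = [source_frame_for_output(i, 24.0, 60.0) for i in range(6)]
```

A real player would additionally anchor the mapping to a shared clock (or timecode) so that playback stays in sync after pauses and seeks.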
The 3D Scene feature and support for camera tracking protocols (Mo-Sys and FreeD) allow Screenberry to transform the content on LED screens in real time depending on the camera's position and angle. This makes Screenberry suitable for xR TV studios and film sets.
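FreeD, one of the tracking protocols mentioned above, delivers camera pose as small fixed-layout UDP packets. As a rough illustration of what such a tracking feed contains, here is a sketch of decoding a FreeD "D1" packet; the field layout and scale factors follow the commonly published FreeD description and should be verified against your tracking vendor's documentation — this is not Screenberry code.

```python
# Illustrative FreeD D1 packet decoder (assumed layout: 29 bytes,
# big-endian 24-bit fields; angles scaled by 1/32768 degree,
# positions by 1/64 mm). Not Screenberry's implementation.

def _s24(b: bytes) -> int:
    """Decode a big-endian signed 24-bit integer."""
    v = int.from_bytes(b, "big")
    return v - (1 << 24) if v & 0x800000 else v

def parse_freed_d1(pkt: bytes) -> dict:
    """Parse a 29-byte FreeD 'D1' camera position/orientation packet."""
    if len(pkt) != 29 or pkt[0] != 0xD1:
        raise ValueError("not a FreeD D1 packet")
    # All 29 bytes (including the checksum byte) should sum to 0x40 mod 256.
    if sum(pkt) & 0xFF != 0x40:
        raise ValueError("bad checksum")
    return {
        "camera_id": pkt[1],
        "pan_deg":  _s24(pkt[2:5])  / 32768.0,
        "tilt_deg": _s24(pkt[5:8])  / 32768.0,
        "roll_deg": _s24(pkt[8:11]) / 32768.0,
        "x_mm": _s24(pkt[11:14]) / 64.0,
        "y_mm": _s24(pkt[14:17]) / 64.0,
        "z_mm": _s24(pkt[17:20]) / 64.0,
        "zoom":  int.from_bytes(pkt[20:23], "big"),
        "focus": int.from_bytes(pkt[23:26], "big"),
    }
```

A media server receives such packets at the tracker's update rate and uses the pan/tilt/roll and position values to reproject the LED-wall content for the current camera view.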
Screenberry also integrates with Unreal Engine for rendering complex, realistic 3D graphics and lighting.
Screenberry natively supports a wide range of capture devices via DirectShow, as well as Vision, DeckLink, Deltacast, and Magewell cards. Depending on the capture card, GPU-Direct can be used to output the captured signal with minimal delay. Video capture supports signals up to 8K via DP 1.4, HDMI 2.1, or 4×12G-SDI.
Streaming protocols such as NDI, RTSP, and RTMP are supported. The built-in web browser lets you display and interact with web pages and composite them with video.
Explore all features
From streaming and capture to real-time graphics and extended reality
Real-time, flexible video signal mixing
Screenberry offers different ways to play content: playlists, timelines, and matrices, or a flexible combination of these approaches. Easy, flexible work with layers, timecode, and the alpha channel lets you prepare broadcasts of any complexity. Screenberry can also receive external data from lighting or audio devices (DMX, Art-Net, MIDI, OSC), record it to dedicated Data tracks, and replay those tracks later. The Screenberry user interface can be flexibly customized to control a specific show.
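Of the external-data protocols listed above, Art-Net is a good example of how simple such show-control data is on the wire: DMX channel levels travel in small UDP packets. The sketch below builds an ArtDMX packet following the published Art-Net frame layout; the function name is an illustrative assumption, not part of Screenberry's API.

```python
# Illustrative ArtDMX (OpDmx, opcode 0x5000) packet builder, following
# the published Art-Net frame layout. Not Screenberry code.

def build_artdmx(universe: int, dmx: bytes, sequence: int = 0) -> bytes:
    """Build an ArtDMX packet carrying up to 512 DMX channel levels."""
    if not 1 <= len(dmx) <= 512:
        raise ValueError("DMX payload must be 1..512 bytes")
    return (b"Art-Net\x00"                    # fixed 8-byte packet ID
            + (0x5000).to_bytes(2, "little")  # OpCode: OpDmx (little-endian)
            + (14).to_bytes(2, "big")         # protocol revision 14
            + bytes([sequence, 0])            # Sequence, Physical
            + universe.to_bytes(2, "little")  # 15-bit port-address (SubUni + Net)
            + len(dmx).to_bytes(2, "big")     # payload length (big-endian)
            + dmx)
```

Such a packet would normally be sent over UDP to port 6454; a media server recording this stream into a Data track simply timestamps each packet's channel values so they can be replayed in sync with the show.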
Explore what you can create with Screenberry: Case Studies
Let's find the right solution for your project: Get in touch