Some techniques for drawing text in GLSL

Rendering TG Character Test.fs in the ISF Editor utility.

While GLSL is not a particularly great language for rendering text strings, many shader developers have found it a fun challenge to do so anyway. Over the years we’ve run into several interesting blog posts and examples from people showing off different techniques that they’ve come up with. In this write-up we are going to look at three ‘common’ approaches to drawing text in GLSL, along with how the concepts can be adapted when making ISF shaders:

  • Using a bitmap image of characters as a lookup.

  • Encoding the bytes from a bitmap font in an array as part of the shader.

  • Individually drawing each character in GLSL.

As with most coding problems, even within these general techniques there are several ways the objective can be solved, and no doubt other clever developers out there will continue to come up with approaches that leave our mouths agape.


Rendering text in GLSL using fonts encoded in bitmap images

The first technique we will look at is very well covered in the blog post Text Rendering by Jon Baker, which describes how a monospaced font can be encoded into a bitmap image that is read from as part of the shader: “we're using a bitmapped monospaced font, where every font glyph is 6 pixels wide and 8 pixels tall.”

Within the shader itself each “cell” can determine its own local pixel coordinates and then look up the corresponding pixel from the provided bitmap image for the desired ASCII character to display.
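
To make the cell math concrete, here is a small sketch of the per-pixel lookup a fragment shader performs, written in Python for clarity rather than taken from the shader itself. The 6x8 glyph size matches the post; the 16-glyph-wide atlas layout is an assumption for illustration.

```python
# Sketch of the per-fragment lookup math, in Python for clarity.
# Glyphs are 6x8 pixels; the 16-glyph-wide atlas layout is assumed.
GLYPH_W, GLYPH_H = 6, 8
ATLAS_COLS = 16

def glyph_uv(frag_x, frag_y, char_code):
    """Map an output pixel to the atlas pixel for the chosen character."""
    # local coordinates within this cell
    local_x = frag_x % GLYPH_W
    local_y = frag_y % GLYPH_H
    # where the glyph lives in the atlas grid
    col = char_code % ATLAS_COLS
    row = char_code // ATLAS_COLS
    return (col * GLYPH_W + local_x, row * GLYPH_H + local_y)
```

In GLSL the same arithmetic runs per fragment, with the resulting coordinate used to sample the imported font image.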

ISF allows us to include images and reference them in the JSON blob, which you can find in the top portion of the JB Monospace Character Test.fs example.

/*{
    "CATEGORIES": [
        "Text"
    ],
    "CREDIT": "Jon Baker, adapted by VIDVOX",
    "IMPORTED": {
        "fontImage": {
            "PATH": "jbakermonospaced.png"
        }
    },
     "DESCRIPTION": "Test for drawing characters from a bitmap font stored as an image.",
    "INPUTS": [
 ...
    ],
    "ISFVSN": "2"
}
*/

The bitmap image containing the font.

This basic test shader provides us with a slider (float) with a range of 0 to 255 for selecting which character we want to repeat in each cell. It also includes a toggle button (boolean) for enabling the debug mode for visualizing the local pixel coordinates for each cell. When loaded into a host app like VDMX, it looks like this:

Rendering JB Monospace Character Test.fs in VDMX6 with the debugOverlay enabled to show the local pixel coordinates for each cell.

Building on the test shader we can create something more complex, such as the JB Monospace Random Characters.fs example which renders an image full of random characters each with a random color. For this ISF the debug option has been extended to show either the local coordinates for each cell or the coordinates of each cell within the whole grid.

Rendering JB Monospace Random Characters.fs in VDMX. Includes sliders for adjusting the random seeds of the characters independently from the hues.


Encoding the bytes from a bitmap font in an array as part of the shader

The character A encoded in 16 bytes, stored as a uvec4 (4 uints of 4 bytes each), from Texture-less Text Rendering.

Another approach that is well documented involves taking the data from a bitmap font file and encoding the byte values directly into the shader as an array. Here we are going to mainly look at two blog posts:

  • Tim Gfrerer’s wonderful post Texture-less Text Rendering uses the PSF1 bitmap font format and gives an in-depth explanation of his implementation as a GLSL shader.

  • Jon Baker has made a beautiful adaptation of Code page 437, an extended ASCII table which shipped with the IBM PC, written as a GLSL shader. In the blog post Siren: Masks Planes he details the process of how he came up with and implemented the idea.

The Texture-less Text Rendering approach described by Tim has four steps:

  1. Get the bitmap data from the font file.

  2. Embed the byte data as an array in the shader file.

  3. Use another array to hold the character code values for a word.

  4. Look up the character data for each index in the word array.
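
The heart of step 4 is unpacking one bit per pixel from the encoded glyph. As a hedged illustration in Python rather than GLSL: assuming the glyph's 16 bytes are packed little-endian into 4 uints (the exact packing order in Tim's shader may differ), a per-pixel test looks like this:

```python
# Decode one pixel of a PSF1-style glyph: 8 pixels wide, 16 rows tall,
# one byte per row, 16 bytes total packed into 4 uints of 4 bytes each.
# The little-endian packing (row 0 in the low byte) is an assumption.
def glyph_pixel(packed, x, y):
    """packed: list of 4 uints; returns True if glyph pixel (x, y) is set."""
    row_byte = (packed[y // 4] >> ((y % 4) * 8)) & 0xFF
    return bool(row_byte & (0x80 >> x))  # most significant bit = leftmost pixel
```

The shader version performs the same shifts and masks on the uvec4 holding the glyph.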

Using the ISF Editor to translate syntax between variations of GLSL.

While ISF allows for a custom vertex shader, to keep things simple for future remixing we put everything in the fragment shader for the TG Character Test.fs adaptation. Starting from debug_text.frag in the GitHub repository referenced in the blog post, a number of minor syntax changes were needed to adapt it for ISF. Fortunately they were all fairly easy to handle by hand using the error messages in the ISF Editor utility.

As with the basic test shader using the bitmap lookup, before getting into Tim’s approach for displaying full words and phrases, it is useful to start with an example that simply generates a display for every possible character, along with the ability to test displaying a specific character. This makes it easy to verify that each part of the shader is working as expected and to validate the drawing.

With this working, we created an extra constant to hold the full phrase for display and swapped it in for our debug code, giving a full implementation of Tim’s technique in TG Message Test.fs. This trick of holding a phrase as character codes in an array can also be used along with the bitmap image approach above.
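
Sketched in Python (the names here are illustrative, not from the shader), the phrase-as-array trick amounts to:

```python
# Hold the message as character codes (in GLSL this would be a const int
# array); each cell column indexes into it to pick which glyph to draw.
PHRASE = [ord(c) for c in "HELLO"]

def code_for_cell(cell_x):
    """Character code for a given cell column, or None past the phrase end."""
    return PHRASE[cell_x] if 0 <= cell_x < len(PHRASE) else None
```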

Code page 437 font.

Jon Baker uses a similar approach to adapt the Code page 437 font into a single fragment shader. The blog post diving into how it was developed includes example code shared on ShaderToy, and converting this to ISF is a fun exercise. The big detail is that when using this in VDMX6, ‘char’ is a reserved name in Metal, so we have to rename that variable. Fortunately the error log points this out for us when we try to load it. The adapted shader, JB Code Page 437 Character Test.fs, is set up to simply repeat the same character in each cell.

JB Code Page 437 Character Test.fs loaded in VDMX.


Drawing individual characters with GLSL code

Perhaps the most tedious approach that we will look at in this discussion is using code to render out each specific character that is needed. Two examples of this are included in the standard set of ISF shaders:

  • Digital Clock.fs

  • ASCII Art.fs

In Digital Clock.fs, only the digits 0-9 and a colon are needed for display, so while this can be time consuming to write, there are only a handful of cases to deal with. The functions found in this example can be very useful in situations where only numbers are appropriate to be rendered.
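
One common way such a digit function can be organized (a hedged sketch, not the actual Digital Clock.fs code) is a classic seven-segment lookup table, with one bit per segment:

```python
# Seven-segment digit lookup, as an illustration of per-digit case handling.
# Segments: 0=top, 1=top-right, 2=bottom-right, 3=bottom,
#           4=bottom-left, 5=top-left, 6=middle
SEGMENTS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def segment_on(digit, segment):
    """True if the given segment is lit for this digit."""
    return bool(SEGMENTS[digit] & (1 << segment))
```

Each fragment then only needs to test whether it falls inside the rectangle of an enabled segment.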

Function for drawing a digit in Digital Clock.fs.

Rendering Digital Clock.fs in VDMX with the Bad TV effect added.

In ASCII Art.fs, the code for each rendered character was created by movAX13h using a custom tool to generate the code snippets for the supported characters: “: * o & 8 @ #”. While this is not useful for general-purpose text writing, it works great in this use case, where only a few characters are needed to match up with different brightness levels.
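
The selection logic behind this can be sketched in Python (the even threshold spacing here is an assumption; the shader's own ramp may differ):

```python
# Map a cell's brightness to a character: denser glyphs for brighter cells.
RAMP = " :*o&8@#"  # darkest to brightest (space for near-black cells)

def char_for_luma(luma):
    """luma in [0, 1] -> character from the ramp."""
    idx = min(int(luma * len(RAMP)), len(RAMP) - 1)
    return RAMP[idx]
```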

Another example of this approach in action can be found in the Text with Truchets Demo on ShaderToy.


Why convert these to ISF?

ISF (the Interactive Shader Format) is a useful standard for creating ‘write once’ shaders that can be used across different host applications as generators and filters. It allows for playing with and remixing GLSL based compositions without the extra effort of having to build an environment for rendering shaders and all the related code to get a basic pipeline running.


Other approaches…

These are of course not the only ways to go about drawing text in GLSL; for example, Jazz Mickle has a great post describing their bitmap font renderer for Unity that is worth reading. If you have seen any others that we should check out, please send us an email with a link!

Visualizing and adjusting color levels with the VDMX Scopes plugin

The Scopes plugin in VDMX6.1 provides a set of powerful real-time tools for visualizing the color data of your video streams. These scopes can be incredibly useful when used alongside color adjustment effects like Color Controls or LGG, helping you fine-tune brightness, hue, and balance for live visuals or studio work.

You can quickly explore the plugin using the “Scopes Demo” template found in the Templates menu, or follow along with the video tutorial to build your own customized setup from scratch.

Main Features of the Scopes Plugin

The Scopes plugin interface provides:

  • FPS Menu – Adjust the update rate for preview refreshes.

  • Video Source – Choose which video source to analyze (layers, cameras, or other video taps).

  • Display Mode – View all scopes at once or focus on a single visualization.


VDMX6 Scopes Plugin Vectorscope Waveform Point Scope

Visualization Types

The plugin includes three distinct scope modes for analyzing video:

  • Waveform Scope

    • Displays brightness and color distribution across the frame.

    • Can be switched between Waveform and Parade mode.


  • Point Scope

    • Draws color information as points with adjustable point size.

    • Can be toggled between YCbCr and HSV color spaces.


  • Vector Scope

    • Visualizes color hue and saturation as lines.

    • Supports both YCbCr and HSV display options.
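
As a rough sketch of what the vectorscope is showing (using the standard BT.601 YCbCr conversion; the plugin's exact math and selectable color spaces may differ), each pixel's chroma becomes a point, with hue as angle and saturation as distance from center:

```python
# Where a pixel lands on a YCbCr vectorscope: the (Cb, Cr) chroma offsets.
def vectorscope_point(r, g, b):
    """r, g, b in [0, 1] -> (cb, cr) offsets in roughly [-0.5, 0.5]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma
    cb = (b - y) * 0.564   # 0.5 / (1 - 0.114)
    cr = (r - y) * 0.713   # 0.5 / (1 - 0.299)
    return cb, cr
```

Neutral grays land at the center; saturated colors push outward toward their hue's corner of the scope.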


Additional Controls in the Inspector

The Scopes plugin also offers fine-grained control through its Inspector panel:

  • Publish Scopes as Video Taps – Each scope visualization can be published as a separate video source, making it available to layers and video receivers throughout your VDMX project.

  • Graticules – Optionally overlay reference lines on published taps.

  • Colorized or Black & White Modes – Choose how the scopes are displayed based on your workflow needs.

  • Enable Toggles – Individually turn scopes on or off to optimize system performance when needed.


Practical Applications

  • Fine-tune your color corrections live by monitoring scopes while adjusting Color Controls FX or LGG settings.

  • Send scopes out to an external display for technical monitoring or visual aesthetics.

  • Create creative feedback loops by compositing scopes into your main output.


Try It Yourself

Explore the Scopes Demo template included in VDMX6.1, or create your own setup based on your project’s needs. Whether you’re perfecting a broadcast feed, building dynamic art installations, or tuning visuals for live performance, the Scopes plugin offers a whole new level of control and creative possibility.

Download the ISF Video Pattern Test Generator here.

More Information:

Download the latest version of VDMX and start experimenting today at vidvox.net.

If you create something cool using the Scopes plugin, be sure to tag us [@VIDVOX]—we’d love to see what you’re building.

Tracking faces, bodies, and hands with VDMX

The Video Tracking plugin provides an interface for detecting faces, bodies, and hands and using their locations as data-source values and masks that can be used to control virtually any part of VDMX.

In this tutorial we will look at using the face and hand tracking options in the Video Tracking plugin to create a fun example of controlling the parameters of a simple ISF generator.

Download ISF Shader and Project Files here.



The main panel for the Video Tracking plugin contains a pop-up menu for selecting which video feed to analyze and a status indicator for the analysis.

For situations where multiple bodies and / or faces are detected, the provided 'prev', 'next', and 'rand' buttons can be used to switch which one is currently being tracked. When previewing the body and face tracking video streams, clicking on a bounding box region can also be used to change the active tracking.

The tracking options are as follows:

  • Human Tracking: Detects human bodies and publishes the position as data-sources. This includes the center position, width, height, and the x/y coordinates of each corner of the box bounding detected people. A boolean 'Detected' data-source turns on / off when bodies are found.

  • Face Tracking: Detects faces and publishes the position as data-sources. This includes the center position, width, height, and the x/y coordinates of each corner of the box bounding each detected face. A boolean 'Detected' data-source turns on / off when faces are found.

  • Hand Detect: Detects one or more hands and publishes the center position as data-sources. Includes options for specifying the maximum number of tracked hands and the chirality type (All, Left, Right, or Pairs). A boolean 'Detected' data-source turns on / off when hands are found.

  • Human Mask: Generates a mask image that can be used e.g. with the Layer Mask effect to remove the background from images. Includes a quality setting (accurate, balanced, or fast). Also see the 'Remove Background' effect.

  • Attention Saliency: Generates a mask image using an Attention Saliency algorithm.

  • Object Saliency: Generates a mask image using an Object Saliency algorithm.

Where available the 'Publish Preview' option can be enabled for each tracking mode. This will create a video feed with overlays showing bounding boxes and other visualizations related to the analyzed images.

The "Minimum Latency" option in the inspector can be enabled to reduce latency (masks will be tighter) at the cost of longer processing time (the framerate may be lower, depending on the overall system load). This option is off by default.

Hand tracking in VDMX6

If you have a LeapMotion or UltraLeap controller, you can try the GECO application to share OSC data with VDMX. You can find it here: https://uwyn.com/geco/. We used the first generation LeapMotion controller in this tutorial.

For the iPad, you can use Shoots Pro for sharing over NDI, or Elgato Epoccam.

Introduction to OCR & QR code capture in VDMX

VDMX 6.1 introduces a powerful new OCR (Optical Character Recognition) plugin that allows you to scan text and QR codes from live video input. These scanned results can be published as data-sources and used to trigger UI elements, update text layers, or automate actions in your VDMX setup.

Getting Started

To try it out, load the OCR Example template from the Templates menu. This preset is included with the latest version of VDMX and demonstrates how to use both OCR and QR scanning.

Before using the OCR plugin, make sure:

• You’re running VDMX version 6.1 or later

• Quartz Composer is enabled under VDMX > Preferences > Rendering

What You Can Do

Live Text Scanning: Point your webcam at printed or handwritten text and see it appear in real time.

QR Code Scanning: Detect and display QR content directly in your project.

Data Routing: Use the UI Inspector to map scanned strings to text layers, pop-up menus, or other elements.

Clock Syncing: Trigger OCR or QR scans automatically on every beat using the Clock plugin.

This makes it easy to create interactive visuals using real-world inputs—great for installations, performances, or creative automation.


Trigger Media Clips in Real Time Using OCR and QR Codes in VDMX

But why stop there? These outputs can be used to control UI elements like pop-up menus and, in turn, trigger clips in the Media Bin.

In this other tutorial, we’ll walk through how to set up a system that lets you hold up color-coded QR labels or text to control clip playback — ideal for powering interactive installations, printed cue cards, or playful VJ sets.

Getting Started

1. Open the “OCR Example” template from the Templates menu.

2. In the Workspace Inspector (Cmd+1), add a Control Surface plugin.

3. Create a Pop-Up Button and label its items (e.g., Red, Green, Blue).

4. In the UI Inspector (Cmd+2), set the pop-up to be controlled by the OCR text string:

• Navigation > Select by string

• Data-Source > OCR Text

Link OCR to Media Playback

Once your pop-up button is receiving the OCR string:

1. Go to the Media Bin Controls tab.

2. Set Trigger by Index and choose the pop-up button as the source.

3. When the pop-up changes value (based on OCR input), the corresponding clip will be triggered.

Try It Live

Switch the OCR video source to a FaceTime or external camera, then hold up QR codes or printed text. As the plugin reads values like “red,” “green,” or “blue,” it updates the pop-up and triggers the matching clip.

You can also sync scanning with the Clock plugin to automatically scan at regular intervals, creating hands-free interaction loops.

Tips & Tricks

• Case matters – OCR text strings must match your pop-up labels exactly.

• You can also scan handwritten words, printed stickers, or even project QR codes from your VDMX interface using:

  • QR Code Generator Source (creates QR codes as layer sources)

  • QR Code Overlay FX (renders QR code overlays on top of any layer)


Try It Yourself

The OCR Example template is available in the latest build of VDMX. If you’re experimenting with it in your work, tag us—we’d love to see how you’re using this feature.

If you’re building an interactive installation or performance using OCR, we’d love to see it. Tag us or share your project with [@VIDVOX].

Using Color Transfer FX & Segmented Color Transfer

Using the New Color Transfer FX in VDMX6

The Color Transfer FX is one of the new additions in VDMX6, providing a powerful way to match the color and brightness levels of a video stream to a reference image or layer. Whether you’re blending visuals from different sources or aiming for a cohesive color palette across layers, this effect makes it easy to achieve polished results without LUTs or external tools.

Similar to using LUTs to stylize an image, the new Color Transfer FX in VDMX6 can be used to alter the colors of a layer by using any available video stream as a reference.

This video will walk through the basics of using both the Color Transfer and Segmented Color Transfer FX.

Quick Start

• Open the Simple Mixer template to get a two-layer setup.

• Load image or video assets into the media bin.

• Apply the Color Transfer FX to one layer, and select another as the reference source.

• Use the chroma and luma sliders to control how much color and brightness are shifted.

For macOS 14+ users:

• Try the Segmented Color Transfer FX to independently adjust foreground and background tones using separate references.

VDMX6 Color Transfer FX instantly adjusts color from one reference to another.

Color Transfer FX

Tips

• For a uniform look, you can apply this FX to your Main Canvas FX and use hidden layers to regulate your project’s colors.

• Apply other FX (like Color Controls) to the reference layer to influence the result.

• You can try stacking layers and blend modes to get a unique look.

Share Your Work

If you’re using the new Color Transfer or Segmented Color Transfer FX in your own projects or live performances, we’d love to see what you’re making. Tag us (@VIDVOX) or drop us a message to share your work.

Exploring the new Blur Faces and Face Overlay FX in VDMX

Welcome to this tutorial, where we dive into face-specific effects in VDMX powered by Apple’s Vision SDK and CoreML. We’ll explore how to blur faces, create face overlays, and experiment with pixelation to build dynamic, real-time visuals.


1. Blur Faces

Function: Automatically detects and blurs faces.

Customization:

Adjust blur intensity and radius.

Crossfade to isolate the face or invert the mask.

Example: Perfect for anonymizing faces or adding a dreamy, surreal aesthetic to your visuals.

2. Face Overlay

Function: Duplicates and stacks faces onto other layers.

Usage:

Combine with live input or pre-recorded footage.

Adjust size, position, and blend modes for unique results.

3. Pixelate Faces (Beta)

Function: Pixelates detected faces.

Note: This effect is experimental and may glitch with multiple faces.

Potential Use: Add a retro, 8-bit aesthetic or obscure identities in a stylized way.

4. Creative Stacking and Modular Effects

Layer Count: Add unlimited layers to compound effects.

Experiment: Stack effects, tweak settings, and discover unique combinations.

Wrap-Up

VDMX’s modular approach lets you craft complex visual experiences, perfect for performances, installations, or experimental art. If you create something cool, tag the team on social media (Instagram / YouTube) or share your work in the forums!

Happy experimenting! 🚀

Eurorack & Live Coding Guest Tutorial with Sarah GHP!

For this guest tutorial we are joined by Sarah GHP for a deep dive behind the scenes look at her setup connecting a variety of different video worlds using window capture, Syphon, and digital to analog conversion. You can also read more about her creative process, how she got into feedback loops, and more in the Interview with Sarah GHP! post on our blog.

Watch through the video here:

And follow along below for additional notes and photos of the rig and how everything fits together!

How and why to connect your VJ app output to an analog synth

In my practice — whether performing visuals live or creating footage for an edited video — I pull together a number of variously processed layers, which I want to overlay and manipulate improvisationally.

Some things computers are great for, like making complex graphics or applying effects that are more accessible digitally, in terms of both device footprint and complexity. For other aspects — tactile improvisation, working with signals from modular musicians, video timbre — an analog synth is the better choice.

My performance chain aims to make both accessible at the same time in one system.

Within this setup, VDMX plays a keystone role: adding effects, routing signals throughout the system, making previously recorded footage available for remixing, and even recording footage. It can also help fill in for modules that one has not yet been able to buy, which is the focus of another tutorial on this site.

Here I walk through how my setup works as inspiration for one of your own.

List of gear

Computer & Software

I use an Apple M1 MacBook or sometimes an M2 MacBook. I livecode SVGs using La Habra, a Clojurescript + Electron framework I wrote, and sometimes use Signal Culture's applications, especially Framebuffer and Interstream. And of course, VDMX.

Video Signal Transformation

To transform the video signal from the HDMI that exits the laptop into an analog format accepted by the Eurorack setup, I use two BlackMagic boxes: HDMI to SDI and SDI to Analog, which can output composite (everything over one wire) or component (Y, Pb, Pr). Sometimes here and there I see a converter that will do HDMI to composite directly, but having two converters can be useful for flexibility. The biggest downside is that the two are powered separately, so I end up needing a six-plug strip.

It is also possible to skip all of these and point an analog camera at a monitor to get the video format you want, but in that case, you need a separate monitor.

Eurorack

This is my case on Modular Grid. The top row is the video row and the most important. In this example, I am focused on using the LZX TBC2 and LZX Memory Palace, plus the Batumi II as an LFO.

The LZX TBC2 can work as a mixer and a gradient generator, but in this setup it is mostly converting analog video signal to the 1V RGB standard used by LZX. It can be replaced with a Syntonie Entree. Likewise, the video manipulation modules can be replaced with any you specifically like to use.

Output, Monitoring & Recording

Finally, there is the output, the monitor, and the recorder. The monitor is a cheap back-up monitor for a car (for example only). Usually the power supply needs to be sourced separately, and I recommend the Blackmagic versions, especially if you travel, because they are robust and come with interchangeable plugs.

When performing without recording, the main output can be sent through any inexpensive Composite to HDMI converter. The one I use was a gift that I think came from Amazon. Some venues used to accept composite or S-Video directly, but these days more and more projectors only take HDMI or are only wired for HDMI, even if technically the projector accepts other signals.

When recording, I format the signal back into SDI through a Blackmagic Analog to SDI converter and then send it to a Blackmagic HyperDeck Studio HD Mini. This records on one of two SD cards and can send out HDMI to a projector.

Getting the hardware set up

The purpose of the hardware setup is to convert video signals from one format to another. (More detail about how this works and various setups can be found in an earlier post I made.)

Don’t forget the cables!

The general flow here is computer > HDMI to SDI > SDI to Analog > TBC2 > Memory Palace > various outputs.

Setting up the software

Software flowchart

Those are the wires outside the computer. Inside the computer, there is a set of more implicit wires, all pulled together by VDMX.

My visuals begin with La Habra, which I live code in Clojurescript in Atom. (Even though it is dead as a project, Atom hasn't broken yet, and I wrote a number of custom code-expansion macros for La Habra, so I still use it.)

These are displayed as an Electron app.

The Electron app is the input to Layer 1 in VDMX.

In the most minimal setup, I add the Movie Recorder to capture the improvisation and I use the Fullscreen setup and Preview windows to monitor and control the output to the synth. I have the Movie Recorder set to save files to the media bin so that if I do not want to record the entire performance, I can also use the Movie Recorder to save elements from earlier in the set to be layered into the set later.

One perk of this setup, of course, is that I can apply VDMX effects to the visuals before they go into the synth or even in more minimal setups, directly into the projector.

Sometimes it is fun to use more extreme, overall effects like the VHS Glitch, Dilate, Displace, or Toon, to give a kind of texture that pure live-coded visuals cannot really provide. I used to struggle a bit with how adding these kinds of changes with just a few button clicks sat within live code as a practice, since it values creating live. But then I remembered that live code musicians use synth sounds and samples all the time, so I stopped worrying!

Beyond making things more fun with big effects, I use VDMX to coordinate input and output among Signal Culture applications, along with more practical effects that augment the capabilities of either the analog synth or another app.

So, for example, here Layer 1 takes in the raw La Habra visuals from Electron, pipes this out of Syphon into the Signal Culture Framebuffer, and then brings in the transformed visuals on Layer 3.

I also usually have the same La Habra visuals in Layer 5, so that if I apply effects to Layer 1 before passing it onward, Layer 5 can work as a bypass for clean live-coded work should I want it. This same effect can be achieved with an external mixer, but using VDMX means one less box to carry. It also gives access to many blend modes, including wipes, which are not available in cheaper mixers.

Use the UI Inspector to assign keyboard or MIDI shortcuts to the Hide / Show button for each layer.

I pair the number keys with layer Show/Hide buttons to make it easy to toggle the view when I am playing.

In this setup, I am more likely to use effects that combine well with systems that work on luma keying, like the Motion Mask, or use VDMX to add in more planar motion with the Side Scroller and Flip. Very noisy effects, such as the VHS Glitch, are also quite enjoyable when passed into other applications because they usually cause things to misbehave in interesting ways, but even a simple delay combined with layers and weird blend modes can augment a base animation.

At this point, astute readers may wonder: why make feedback using a VDMX feedback effect, a Signal Culture app, multiple VDMX layers plus a delay, AND an analog synth like the Memory Palace? The answer is simple: each kind of feedback looks different, feels different, and reacts differently. By layering and contrasting feedback types, we are able to see the grains of various machines in relationship to one another, and for me that is endlessly interesting. (Sometimes I bring in short films from other synths that cannot be part of the live setup as well, and that is usually what goes in Layers 2 and 4.)

Layers 2 & 4 in VDMX

Where and how effects are applied of course also affects how they can be tweaked. When I define effects in VDMX that benefit from a changing signal, especially the Side Scroller and Flip, I use the inbuilt LFO setup. I usually have one slow and one fast LFO, and define a few custom waveforms to use in addition to sine and cosine waves.

Final setup in VDMX

The choice between computer-generated signal and analog signal is mostly decided by where the effect I am modulating lives. For effects that are available both on the synth and in the computer, the biggest difference is that waveforms from the synthesizer are easier to modulate with other signals, but harder to make precise, than computer-based signals.

Setting up the synth

Now that we have set up the software to layer live computer based images and all the converter boxes to get that video into the Eurorack, the last step is setting up that case.

Synth flowchart

Mostly I work with the LZX Memory Palace, which is a frame store and effects module. It can do quite a lot: it has two primary modes, one based around feedback and one based around writing to a paint buffer, and can work with an external signal, internal masks, or a combination of both. In this case, I am working with external signal in feedback mode.

To get signal into the Memory Palace, it needs to be converted from the composite signal coming out of the Blackmagic SDI to Analog box into 1V RGB signals. For this, I use the LZX TBC2. It also works as a mixer and a gradient generator, but here I use it to convert signals. On the back, it distributes sync to the Memory Palace.

Memory Palace + Batumi

And this is where the last bit of the performance magic happens. The Memory Palace offers color adjustment functions, spatial adjustment functions, and feedback control functions, including thresholds for which brightnesses are keyed out and what is represented in the feedback, as well as the number of frames repeated in the feedback and key softness. To dynamically change these values, LZX provides inbuilt functions; for instance, the button at the bottom of the Y-axis shift slider triggers a Y scroll, and the slider then controls the speed of the scroll. However, the shape of the wave is unchangeable.

That is where the CV inputs above come in. Here I have waves from the Batumi patched into the X position, and I can use the attenuator knobs above to let the signal through.

Once everything is humming away, the Memory Palace output needs to go into a monitor and whatever the main output is. In theory, the two composite outputs on the front of the Memory Palace can be used, but one is loose, so I use one and then send the RGB 1V outputs into the Syntonie VU007B. (A splitter cable or a mult would also work, but I already had the VU007B.)

One output goes into the monitor, a cheap back-up camera monitor. The other goes into the projector directly, or into a Blackmagic Analog to SDI box and then into the HyperDeck for recording, before being passed via HDMI to the projector.

While I use one big feedback module, LZX and Syntonie, as well as some smaller producers, make video modules that are smaller and do fewer things alone. These tend to be signal generators and signal combinators and, following the software to synth section of this tutorial, you can use any of them.

What It All Looks Like Together

Now that we've connected everything up, let's see what it looks like performed live!


Enjoyed this guest tutorial from Sarah GHP? Next up you can check out the Interview with Sarah GHP! post on our blog to see even more of her work!

Hiding the orange privacy dot on external displays (Official Apple method)

Set the Privacy Indicators toggle to off to hide the orange / green dot on external displays.

Good news everyone!

As almost every visual artist using macOS knows, in recent years Apple has added an orange / green dot that appears in the menubar whenever an application is using the microphone and / or camera for capture. While this privacy feature is generally a fantastically useful tool for people to track which apps may be recording sound and video, it is extremely annoying for anyone trying to perform live visuals. Although several workarounds have been published since then, none of them came officially from Apple, and they always carried additional security risks that were not worth the trouble.

Fortunately as of macOS 14.4 there is now a method provided by Apple for hiding the privacy dot on external displays! The instructions are fairly straightforward and can be found in this Apple support note: https://support.apple.com/en-gb/118449

After you’ve rebooted in Recovery Mode and entered the ‘system-override suppress-sw-camera-indication-on-external-displays=on’ command in the Terminal, the new option available in System Settings under Privacy & Security for the microphone / camera will allow you to turn the privacy dot on and off for external displays on demand. This makes it possible to quickly re-enable or disable the privacy feature temporarily as needed.

Note that this will only remove the privacy dot on external displays - this technique will not work on your main monitor.

Using VDMX as a Step Sequencer and LFO for Euroracks

One of the most fun aspects of using Eurorack setups is the ability to quickly reroute control data and sound between different modules. Conversely, one of the most limiting parts is having to physically swap different modules in and out of your rack to change the kinds of control data and sound coming and going from your system. In this tutorial we will look at how the Step Sequencer and LFO plugins in VDMX can be used alongside Eurorack setups to provide a versatile approach to generating CV values.

As Eurorack modules are often a significant investment of money, it can also sometimes be useful to use software tools like VDMX to simulate their abilities and determine if they are a good fit for your needs before purchasing.

Overview

This tutorial is broken into three main parts:

  1. Setting up our Eurorack to convert MIDI to CV.

  2. Setting up VDMX to send MIDI to the Eurorack.

  3. Configuring step sequencer and LFOs in VDMX to control parameters on our Eurorack.



Setting Up A Eurorack To Receive MIDI to CV

Univer Inter MIDI to CV and Tiptop Audio Buchla 258t Eurorack modules.

For this initial demonstration of doing MIDI to CV we are using the Noise Engineering Univer Inter along with a Buchla & Tiptop Audio 258t Dual Oscillator module to generate tones.

The Univer Inter has 8 CV out ports along with a USB port which can be directly connected to a computer for receiving incoming MIDI. Within applications like Audio MIDI Setup and VDMX it appears as a standard MIDI output device option. It can also be configured to use a custom MIDI mapping as needed and can be daisy chained with a second module for another 8 outputs.

A variety of different modules are available for taking MIDI data in one form or another and converting it to CV. As always with Eurorack setups it is prudent to spend some time looking at all of the module options and picking the best for your specific needs.


Setting Up VDMX To Send MIDI Output

Most user interface controls in VDMX such as sliders and buttons can be configured to directly send their current value as MIDI output using the “Send” tab of the “UI Inspector” window. When configuring VDMX to drive external devices such as a Eurorack it is often useful to add a “Control Surface” plugin with a customized set of UI elements that represent each of our individual CV outputs.

Steps:

  1. Use the “Plugins” tab of the “Workspace Inspector” to add a “Control Surface“ plugin to the project.

  2. Use the sub-inspector to add one or more UI elements (sliders, buttons, pop-up menus, etc.) to the control surface interface.

  3. Click on each UI element in the Control Surface main window to inspect it. Use the “Send“ tab of the “UI Inspector” to configure the MIDI mapping and output device.
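Conceptually, what a UI element does when sending MIDI is map its normalized value (0.0–1.0) onto the 7-bit 0–127 range of a MIDI Control Change message. The sketch below is illustrative only; the function name is hypothetical and not part of any VDMX API:

```python
def normalized_to_cc(value, channel, controller):
    """Map a normalized 0.0-1.0 value onto a 3-byte MIDI Control Change message.

    `channel` is 0-15, `controller` is 0-127. This is a conceptual model,
    not VDMX's actual implementation.
    """
    # Clamp to the valid range, then scale to 7-bit MIDI resolution.
    clamped = max(0.0, min(1.0, value))
    cc_value = round(clamped * 127)
    status = 0xB0 | (channel & 0x0F)  # 0xB0 = Control Change status byte
    return bytes([status, controller & 0x7F, cc_value])

# A slider at 50% on MIDI channel 1 (index 0), sent as CC #1:
msg = normalized_to_cc(0.5, 0, 1)
```

This is also why CV resolution through a MIDI-to-CV module is coarser than a native CV source: a standard CC only has 128 steps.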


Configuring Step Sequencer and LFOs in VDMX To Control Eurorack Parameters

Now that our Eurorack is receiving MIDI from VDMX and converting it to CV we can begin to set up our Step Sequencer and LFO plugins to drive individual parameters of our synthesizer.

A VDMX setup with a two track step sequencer, an LFO, a clock plugin, and a control surface configured to send MIDI output.

Steps:

Right-click on sliders and buttons to assign data sources.

  1. Use the “Plugins” tab of the “Workspace Inspector” to add a “Step Sequencer“ plugin and an “LFO” plugin to the project.

  2. Use the sub-inspector to customize Step Sequencer / LFO configurations as needed.

  3. Right-click on output UI elements in the Control Surface, or use the UI Inspector, to route generated control data to our MIDI outputs.

  4. Patch the MIDI module CV output to synthesizer input parameters.

  5. Use the “Clock” plugin to adjust the overall BPM.

Once we’ve created our parameter routings on the Eurorack, we can optionally further customize our Control Surface with appropriate labels and display ranges, or leave them as generic 0-1 values that are commonly re-patched on the fly.
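The two plugins can be thought of as functions of time: a step sequencer holds a list of values and advances one step per beat of the clock, while an LFO evaluates a waveform continuously. A minimal model of those behaviors (a conceptual sketch, not VDMX's actual implementation) might look like:

```python
import math

def step_sequencer(steps, beat):
    """Return the value of the active step for a given beat count."""
    return steps[int(beat) % len(steps)]

def sine_lfo(beat, period=4.0):
    """A sine LFO normalized to 0.0-1.0, repeating every `period` beats."""
    phase = (beat % period) / period
    return 0.5 + 0.5 * math.sin(2 * math.pi * phase)

# Sample both generators at beat 2 of a 4-beat cycle:
seq_value = step_sequencer([0.0, 0.25, 0.5, 0.75], 2)  # -> 0.5
lfo_value = sine_lfo(2)  # halfway through the sine cycle
```

Both generators produce normalized 0-1 values, which is exactly what gets scaled to MIDI and then to CV at the module.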


Mastering Projector Rigging: Elevate Your Visual Installations with Pro Techniques

So, you've got a projector? Now, let's take your visual installations to the next level! While there's no shortage of inexpensive projector mounts online, some fall short for custom setups and quick turnarounds. Enter GRIP hardware – a tried-and-true solution from the film world that's been revolutionized with 3D printing.

Mount Anywhere! Don’t forget the safety cable!

Image from ‘supercell by slowdanger’ taken at the Flea Theater in New York City, January 2024.

In the film/video realm, GRIP hardware is the unsung hero of lighting and rigging, trusted on film sets, TV shows, and in theaters. This gear, designed to support hefty lighting rigs for extended periods, can be the perfect match for your projector mounting needs.

Lots of metal bits for any type of installation. It only gets strange when explaining to TSA!

Adaptable and Accessible: The ⅝" Baby Pin Connection

3” Baby Pin Wall plate attached with a custom made 3D printed mount for Optoma Projectors that works with Impact and Manfrotto plates.

At the core of this hardware is the ⅝" Baby Pin connector – a versatile link that works wonders for small to medium-sized projectors. If you're dealing with a large event projector, chances are it comes with its own secure cage or mounting structure for a safer installation.

The black coupler is a “Double Female Adapter” made by Kupo. The rest is a mix of Manfrotto Avenger and Impact GRIP hardware.

This type of connection opens the door to hundreds, if not thousands, of mounting possibilities. Trusted manufacturers include Impact Lighting (budget-friendly), Manfrotto Avenger Series, Matthews Lighting, and Kupo Lighting.

Left: Projector + wood mount with super clamp on a swivel head. Right: Projector with magic arm + wood mount and receiver plate.

The Rigging Essentials: Cardellini Clamps, Jaw Vice Clamps, and More

The Cardellini Clamp, affectionately known as a Mathelini in theatrical circles (or alternatively as a Jaw Vice Clamp online), offers an impressive 6"+ range, making it an indispensable tool for securing projectors in diverse scenarios. When paired with an Impact Baby Pin Swivel Head Mount, complete with a sleek black Kupo connector, and anchored by a Manfrotto Avenger ⅝" baby pin receiver, this setup guarantees both stability and flexibility.

While different brands may come with varying price tags, their performance is generally comparable. However, it's worth noting that complications can arise when mixing brands, as they may have slightly different pin "lock" heights (the indentation at the top of the baby pin). Although it's possible to mix and match, optimal connectivity is often achieved when sticking to the same brand for all components in your hardware ensemble.

This hardware is strong!

Seriously, this Manfrotto Super Clamp survived years outside and even made it through a hurricane! (video) (It was only holding a camera, but still).

Once tightened in place, these components stand firm, even under the weight of heavy projectors. However, for added security, never forget the importance of a safety cable – a 1/16" aircraft cable that ensures your setup stays put, and it is relatively inexpensive to make your own safety cables after purchasing steel cable cutters and a swaging tool.

Expanding Your Toolkit: Accessories for Seamless Installations

Beyond the essentials, assembling a well-rounded toolkit is paramount for a flawless installation. Consider expanding your arsenal with beam clamps, spring clamps, additional IEC cables, HDMI over Ethernet adapters, and HDMI cables under 50'. It's crucial to remain mindful of HDMI cable limitations – once you exceed 50 feet, exploring signal boosters or HDMI over Ethernet solutions becomes imperative.

Here's a pro-tip: I highly recommend using an IEC cable tap for added convenience. This allows you to power a media player (this one offers seamless looping with .mkv files; use a hidden file cleaner like BlueHarvest to remove hidden “.trash” files from the USB drive or SD card before looping a folder of files), Raspberry Pi, or HDMI over RJ45 adapter with a single cord, streamlining your setup.

Speaking of HDMI over RJ45 adapters (not HDMI over Ethernet!), I found mine for less than $20 USD, featuring both HDMI input and output on the transmitter (TX). While it seems they're currently sold out, there's no need to break the bank; spending $50 or more on this type of adapter is unnecessary. Instead, consider investing in a quality shielded CAT6 cable or making one yourself. A shielded cable helps minimize noise, making it especially beneficial for longer runs, particularly when running alongside power cables. This cost-effective approach ensures optimal performance without compromising your budget. (Note: HDMI over Ethernet means you can send the signal over a network, switch, router, etc. HDMI over RJ45 or HDMI over Cat5e requires a “homerun” cable that runs directly from the transmitter (TX) to the receiver (RX). The protocol used by one manufacturer may differ from others, so you can’t mix and match TX and RX units from different brands.)

Beam Clamps, pipe clamp, super clamps, baby pin adapters, yoke mount, grip head, adjustable magic arms, and a swivel head mount baby pin plate enhance adaptability, offering creative solutions for various mounting scenarios.

Conclusion: Your Projector Rigging Journey

As you embark on your projector rigging journey, the right accessories make all the difference. This comprehensive guide ensures you're well-equipped for any installation, whether it's for escape rooms, VJing, projection mapping, or visual effects. Elevate your visual installations with the perfect blend of industry-proven hardware and cutting-edge solutions – because your projector deserves nothing less!

This article was written by ProjectileObjects.  You can learn more about them at http://projectileobjects.com/ or follow them on Instagram @ProjectileObjects 

Selecting the Ideal MIDI Controller for Visual Performances

If you find yourself asking, "Which MIDI controller suits me best?" or if you're in search of a new addition to your existing controller lineup, you're in the right place. In this article, we'll explore various MIDI controller options, weigh their pros and cons, and provide insights to help you make an informed decision on the perfect MIDI controller for your visual performances.


Hercules P32 DJ VDMX Template


To keep up with the times, we are releasing another VDMX template for the Hercules P32 DJ MIDI controller.

With its 32 pads, 19 knobs and three sliders, the P32 DJ is a jam-packed MIDI controller for its size. A little smaller than a 16” MacBook Pro, the P32 has soft pads with a higher quality feel than some entry level MIDI controllers.

For this template, the native 2-channel DJ style layout is perfect for a VDMX 2-channel video mixer layout.

Additionally, the P32 DJ has a built-in audio interface that supports stereo RCA out and a 1/4” TRS port for headphone monitoring, allowing you to DJ and VJ from the same device.

To install templates in VDMX go to: ‘Your Drive’ > Users > ‘username’ > Library > Application Support > VDMX > templates

Download here: Template File

We’ve included templates, project files, and reference images. You will need an active VDMX license to open the project files.

More about the template:

This is a template for the Hercules P32 DJ MIDI controller.

This layout functions as a two channel video mixer.

The two 4 x 4 soft pad grids are linked to the media bin for each layer, Left and Right.

For this template to work, make sure the PADs are set to SAMPLER (not Slicer, Loop or Hot Cue). With the default MIDI mappings for this controller, and the pads set to Sampler, you will be able to use all functionality of this template.

From here you can add on and use the Slicer, Loop, and Hot Cue, as well as the shift key functions, to make a more robust layout for your own VDMX projects.

The layout is split down the center of the controller; the left side functions similarly to the right. You can add additional presets and pages as you desire.

Other common buttons:

- Shift = Fast cut between layers

- Sync = Fade between layers

- Cue starts the track over.

- Pause/Play pauses and plays the track.

- Cross fader fades between videos.

- Left and Right vertical sliders fade opacity and audio.

- Headphone button mutes track audio.

- Layer FX are enabled by the button under each rotary encoder; each encoder then adjusts a parameter within that FX.

- Top left and right corners of the controller: the Loop/Tempo and Active/Reset buttons scroll through the media bin pages. Pressing down on this endless rotary encoder will trigger a random video from the media bin.

- To the left and right, the Filter/Move endless rotary encoders will scroll through FX presets for each layer. Be aware that this will reset the FX each time you move to the next preset. Pressing down on this knob will jump to an empty FX-off preset.

- Record button starts recording a video of the master output.

- Slip button captures an image of the master output.

- Load A and B eject the media on each side.

- Browse/Main endless rotary encoder switches between main output FX presets. Pressing down resets to an empty preset.

- High, Mid, Low rotary encoders are currently not mapped to any MIDI controls, but could be mapped to main output FX or an action of your choice.

Akai APC40 MK II 2-Channel VJ Mixer template for VDMX

Templates are a great way to get started with VDMX and with this template you can take an out of the box APC40 MKII and jump right in!

VDMX APC40 MK II Layout Template

A few things to note about the APC40 MK II before we get started.

The APC40 MK II has three internal MIDI mapping modes.

  • Generic Mode (Default)

  • Ableton Live Mode

  • Alternate Ableton Live Mode

To use this template correctly, you’ll need your APC40 MK II to be set to the default “stock” Generic Mode. More information about these modes can be found here (PDF) Bottom of Page 10.


When you first turn on the controller, it will default to the correct button mapping. To reset the template to all defaults, it is recommended that you hit this button when you start the template to eject all clips and set everything to its default.

This button ejects all media, clears all the FX and syncs the LFO view to the LFO slider. (Warning: You’ll lose FX in Layer A and B if you don’t save them as a new FX chain).

Not all buttons are RGB. When clips are ready to be triggered in your media bin, the 40 RGB button grid will light up blue, then yellow when the clip is selected. You can customize these colors yourself in the media bin options:

Image found on page 10, Akai communication protocol manual.

There are two versions of this template. A blank version without FX and a starter version with one layer of FX presets.

Default setup.

This template is structured to be a 2-channel video mixer. Both video layers A and B flow to a Master output (Projector, TV, etc.). The cross fader blends between both layers, and each layer has its own FX chain presets.

The Master output FX are turned on and off by the top 8 rotary knobs. The first vertical slider on the right side of the controller labeled “MASTER” controls the master opacity. If it is all the way down, your screen output will be black. You can change this later to preference or disable it entirely.

Selecting clips for both layers A and B:

Both layers use the same 40 RGB button grid to trigger clips. To switch between Layer A and B when selecting clips, use the first two buttons on the top right side of the grid under the label “SCENE LAUNCH”. They will light up when selected. The top button sets the destination to Layer A, the bottom to Layer B. The two buttons beneath that (Green) are page up / page down buttons for moving through your media bin. They are also linked to your Audio Analysis Filter 3 and will flicker based on your computer’s mic peaking. Beneath that (Yellow) is a random clip trigger.

To trigger the next clip in the media bin or move up and down the media bin, redirect your eyes to the “BANK SELECT” 4-button arrow keys.

The rest of the buttons should be self-explanatory based on the image above, or you can read through the “User Notes” built into the template, which explain all of this and more.


Template Tip!

If you’re adding new FX to your A and B layer FX chains, make sure to save them as a preset by clicking the + in the top of the FX window. This will save your FX chain and you can assign it to a new FX preset button. You can always disable the FX layers MIDI triggers in your project until you build out the template more to your liking!


Here’s a brief overview video of this template: