Halfway through, the entire screen would always be grey no matter the source, which probably makes it a bit less interesting (and more strobe-like).
There are probably things that could be done in other color representations, like shifting the hue in HSV for example, but it would probably not feel as beat-related (probably similar to colorize now, but with more colors showing at the same time).
Inverting brightness could be an option, but the hue in dark regions is often noisy/unknown, so that would probably end up as a lot of colored noise.
Either way, I suppose Shadertoy is a good place to start experimenting with these.
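The HSV hue-shift idea can be sketched in a few lines; here is a minimal Python version using the standard colorsys module (a Shadertoy prototype would do the same per pixel in GLSL — the function name is just illustrative):

```python
import colorsys

def shift_hue(r, g, b, amount):
    """Shift the hue of an RGB color (components in 0..1) by `amount`,
    a fraction of a full hue rotation; the hue wraps around at 1.0."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + amount) % 1.0, s, v)

# Shifting pure red by a third of the hue circle gives (approximately) pure green.
print(shift_hue(1.0, 0.0, 0.0, 1.0 / 3.0))
```

Animating `amount` with the beat phase would give the "colorize with more colors at once" feel described above.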
Posted Sun 09 Jun 24 @ 11:11 am
I suppose it will. Do shaders now interact with the image drawn behind them?
I don't know the right terms, but can they take a live feed as an input? I don't think they did before.
Oh, and I have a thing that uses colorize in a beat kind of way: on, wait, off, shift colour, wait, on.
Posted Sun 09 Jun 24 @ 11:58 am
I would love to see either one of these options listed below.
Option A: Add a 'Window Capture' option to the single-source video options. I would like to use a 3rd-party visualizer like Luminant instead of the shaders that are available when the song being played is audio-only.
-OR-
Option B: The ability to send track info like title, artist, etc., but ALSO the file type, to a websocket server. Using Streamer.bot I can set up a websocket server and change the source in OBS with a script that checks the file type (mp3, mp4) or a boolean such as has_video = true, switching the video source in OBS from the video output window of VDJ to the output window of Luminant or any other visualizer.
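For Option B, the sending side is light. The message the poster describes could look like this Python sketch (the field names and the extension-to-video rule are illustrative assumptions, not an actual VDJ or Streamer.bot schema):

```python
import json
from pathlib import Path

# Assumption: these extensions imply the track carries video.
VIDEO_EXTS = {".mp4", ".mkv", ".mov", ".avi"}

def track_payload(title, artist, filename):
    """Build the JSON message a websocket server could route on:
    track info plus a filetype string and a has_video flag."""
    ext = Path(filename).suffix.lower()
    return json.dumps({
        "title": title,
        "artist": artist,
        "filetype": ext.lstrip("."),
        "has_video": ext in VIDEO_EXTS,
    })

print(track_payload("Some Track", "Some Artist", "some_track.mp3"))
```

A Streamer.bot script on the receiving end would then flip the OBS source whenever `has_video` changes.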
Posted Mon 15 Jul 24 @ 7:26 am
I could make a long list of what I miss on the video side of the 64-bit version; Outline, VideoBars, and Negative are a few that I used a lot before :(
Posted Tue 16 Jul 24 @ 12:59 pm
And then it turns to silence, haha :)
Posted Sun 28 Jul 24 @ 4:44 pm
This is a random idea I just had, if you are broadcasting or recording video with a webcam…
Currently you can switch cameras on a time basis. I’m not sure if it’s possible, but I imagine it would be simple enough to switch between video output and webcam based on time too, but that’s not the feature request…
What I’m imagining is using AI computer vision to auto switch cameras or perhaps activate a particular camera, if, and only if, I am looking directly at the camera and holding a microphone near my mouth.
Then, when I look away and move the mic away from my mouth, the camera deactivates or switches back to another view. A bit of a gimmick, but it would be kind of cool.
Maybe there could be physical gestures for other controls? When I put my hands up in the air, trigger the air horn sample, perhaps?!
Posted Mon 29 Jul 24 @ 8:03 pm
Something slightly more mundane, but probably more useful than the last one...
I have started taking a webcam to my gigs so I can record the dance floor reaction to songs, posting clips to social media, etc.
I'd like the ability to split recordings (video in my case) by time, for example every hour. Recording my 5-hour gig at 1080p using the H265 codec, my whole system started to slow down after approx. 2.5 hours and I got short gaps in the recorded audio. I stopped recording briefly, changed to H264, and all was fine for the rest of the gig.
For the first 2 hours, the H265 recording was perfect, so I suspect that if the recorded file was seamlessly split into 1-hour chunks, I wouldn't have had an issue.
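The rotation logic for such a feature is simple; here is a toy Python sketch (the naming scheme is hypothetical, not anything VDJ does today). As a stop-gap, ffmpeg's segment muxer can also split an already-recorded file into hour-long chunks without re-encoding (`-f segment -segment_time 3600 -c copy`).

```python
def chunk_filename(base, start_epoch, now_epoch, chunk_seconds=3600):
    """Return the output file for the current recording chunk, rotating
    to a new name every `chunk_seconds`. A recorder would close the old
    file and open the new one whenever the returned name changes."""
    index = int((now_epoch - start_epoch) // chunk_seconds)
    return f"{base}_part{index:03d}.mp4"

print(chunk_filename("gig", 0, 100))    # still in the first hour
print(chunk_filename("gig", 0, 3700))   # rotated into the second hour
```

Each chunk is closed and finalized on rotation, so a codec or system problem hours in would only cost the current chunk.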
Posted Tue 30 Jul 24 @ 1:16 pm
I'd like to see multiple video outputs (from multiple players) that I can send to separate external screens or Syphon/NDI outputs and mix further down the line.
Posted Thu 10 Oct 24 @ 11:13 pm
My wish for Version 2025:
Update or sync playlists on CDJ USB Sticks.
Then I can get rid of Rekordbox completely.
Posted Fri 11 Oct 24 @ 5:12 am
I would like to ask for played videos to stop at the last frame without setting any POIs.
Posted Wed 05 Mar 25 @ 2:01 pm
+1 for pausing on final frame.
Posted Fri 07 Mar 25 @ 12:25 am
I figured out that I can use my iPhone and Camo Studio to output to the "Camera" source. I'd love to see some sort of Twitch or TikTok stream integration for this.
Posted Fri 30 May 25 @ 12:13 am
Also, maybe offer a 9:16 aspect ratio for video recording DJ sets, instead of the usual 16:9.
Posted Tue 17 Jun 25 @ 7:01 pm
Hi team,
As a daily user of VirtualDJ in video "playout mode", I’d love to suggest a feature that could make life significantly easier for those of us programming long-format video sessions in venues.
Currently, the AUTOMIX and SIDELIST panes offer helpful ways to manage upcoming content. However, for 12-hour sessions—common in venue-based setups—it’s the final two hours (typically 12am to 2am) that are the most critical. These are often the busiest, most valuable hours for venues, and they require extra attention to programming.
The challenge is that throughout the day, every added video floats forward or back in the queue depending on what comes before it. This makes it difficult to prepare and lock in a precise 2-5 hour closing segment ahead of time.
Suggested Feature: A new browser pane called "Playout"
This would be a third option alongside AUTOMIX and SIDELIST.
It would include 5-minute time markers on the right side of the interface to help visually plan the timeline video by video, locking important stages in place and leaving us only to fill in the gaps.
Once a video is slotted at, say, 01:55 for closing, you could right-click and select “Anchor”. This would lock that video in place at its designated time.
From there, you could work backward through the timeline, building a structured sequence toward the earlier evening—allowing users to finalise their night's programming earlier in the day (and be home by 3pm...if not already at home...in the pool).
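The anchoring idea reduces to a small scheduling rule: anchored videos keep their absolute times, and fillers only go into the gaps between them. A toy Python sketch of filling one such gap (greedy, in playlist order; a hypothetical illustration, not a proposed implementation):

```python
def fill_gap(gap_seconds, filler_lengths):
    """Pick fillers (in playlist order) that still fit in the gap before
    an anchored video. Anchored items never move, so whatever doesn't
    fit is returned as leftover slack rather than pushing the anchor."""
    chosen, remaining = [], gap_seconds
    for length in filler_lengths:
        if length <= remaining:
            chosen.append(length)
            remaining -= length
    return chosen, remaining

# A one-hour gap: the 2000 s clip no longer fits once 1800 s is placed,
# so it is skipped and the 1700 s clip is taken instead.
print(fill_gap(3600, [1800, 2000, 1700]))
```

Working backward from the 01:55 anchor would just apply the same rule to each gap in turn.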
This would be a game-changer for those of us who use VirtualDJ not as a live mixing tool, but as a curated playout system for venues that rely heavily on strong, consistent visual and musical programming.
It would even threaten other expensive dedicated software on the market.
Thank you for continuing to evolve VirtualDJ. It's already a powerful tool, undoubtedly the best, and this feature would make it even more useful for professional video users.
Many thanks,
M
Posted 2 days ago @ 5:27 pm