Before I dive down the rabbit hole of delay, could I quickly check my logic with people who know?
I’m thinking I have to delay everything to match my slowest source - I have 3 cameras and 4 mics:
1. Sony Z280 carrying audio channels 1 and 2 (rear of room, 8m SDI to Wirecast)
2. GH5s carrying audio channel 3 (rear of room, 5m HDMI to Wirecast)
3. GH5 carrying audio channel 4 (front of room, Bolt 500 to receiver at Wirecast)
I set up an iPad stopwatch with all 3 cameras looking at it and their outputs on Wirecast in the foreground (took some arranging!), then took a picture, which showed:
1. Sony hits 270 ms behind ‘now’
2. GH5s hits 350 ms behind ‘now’ - 80 ms, or 2 frames, later
3. GH5 hits 380 ms behind ‘now’ - 30 ms, or almost 1 frame, behind the GH5s and almost 3 frames behind the main audio on the Sony.
That 110 ms is worth the best part of 3 frames at 25fps - so I can’t currently run audio from the Sony and the GH5s together, as you hear some phasing. That’s what I need to fix. Camera 3 (audio channel 4) needs to come into line too, as it sees lips from the front of the room and I cut to it while on the Sony audio.
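To sanity-check the arithmetic, here’s the offset calculation in a few lines of Python (the latency numbers are the ones from my stopwatch photo; "frames" just means 40 ms slots at 25fps):

```python
FRAME_MS = 1000 / 25  # one frame = 40 ms at 25fps

# Measured latency behind 'now' for each source, in ms (from the stopwatch photo)
latency_ms = {"Sony Z280": 270, "GH5s": 350, "GH5": 380}

slowest = max(latency_ms.values())  # 380 ms - the slow GH5 sets the pace
for cam, ms in latency_ms.items():
    gap = slowest - ms
    print(f"{cam}: {gap} ms behind the slowest = {gap / FRAME_MS:.2f} frames")
```

This confirms the Sony is 110 ms (2.75 frames) ahead of the slow GH5 and the GH5s is 30 ms (0.75 frames) ahead of it.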
so I’m thinking this is my plan:
1. delay Sony video and audio by 110ms - which will then match the slowest GH5
2. Delay the GH5s video and audio by 30ms - again matching the GH5
3. When using an external mixer, delay its audio by up to 380ms (I think there is less than 1ms of delay in the Sony UTX wireless range)
4. Don’t touch the slow GH5
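Put another way, the rule behind the plan is just: delay each source by (slowest latency − its own latency), and delay the live mixer audio by the full slowest latency. A quick sketch of that (same measurements as above; I’m treating the wireless mic latency as negligible):

```python
SLOWEST_MS = 380  # the slow GH5 is the reference everything else waits for

# Measured source latencies in ms
measured_ms = {
    "Sony Z280 (video+audio)": 270,
    "GH5s (video+audio)": 350,
    "GH5": 380,
}

# Delay to dial in for each source so they all land at 380 ms
for source, ms in measured_ms.items():
    print(f"Delay {source} by {SLOWEST_MS - ms} ms")

# External mixer audio arrives essentially live, so it gets the full delay
print(f"Delay external mixer audio by {SLOWEST_MS} ms")
```

Which gives the 110 ms / 30 ms / 0 ms / 380 ms figures in steps 1-4.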
Happy days? Or am I missing something? I will, of course, be testing it and learning!
Also, is there a more accurate way to determine the differences in latency between the cameras than an iPad stopwatch?