360° video is making the transition from trendy to widespread. Eventually, this immersive technology may become an industry standard. Our tutorials exist to help you learn the tricks of the trade for creating your own. In our first blog post of the series, How to Make a 360 Degree/VR Video: Part 1, we discussed the pre-planning and production aspects of creating our 360° video A Big Encounter. This post details the stitching and post-production side of the process.
The first thing to keep in mind in your post-production workflow is how to handle the sheer volume of data from all of the GoPro footage. For each take, we had 6 different video files (one per camera on our 6-camera rig). We didn't want to import all the files using AutoPano Giga because you can't live-view footage in the software. Instead, we organized the files in a folder hierarchy as follows: MOVIES>SCENE(1-2)>TAKE(1-6). After the import, we renamed the default GoPro titles Cam1 through Cam6.
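That sorting-and-renaming step can be scripted if you'd rather not do it by hand. Here's a minimal sketch under our folder convention; the GoPro source filenames and the `organize_take` helper are placeholders, not part of any Kolor tool:

```python
import shutil
from pathlib import Path

def organize_take(source_files, movies_root, scene, take):
    """Copy one take's six GoPro clips into MOVIES/SCENE<n>/TAKE<n>/,
    renaming them Cam1..Cam6 in sorted (shooting) order."""
    take_dir = Path(movies_root) / f"SCENE{scene}" / f"TAKE{take}"
    take_dir.mkdir(parents=True, exist_ok=True)
    renamed = []
    for i, src in enumerate(sorted(source_files), start=1):
        # Keep the original extension (.MP4 for GoPro footage).
        dest = take_dir / f"Cam{i}{Path(src).suffix}"
        shutil.copy2(src, dest)
        renamed.append(dest)
    return renamed
```

Running this once per take leaves you with exactly the Cam1-Cam6 files AutoPano Video wants, and the originals stay untouched as a backup.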
With AutoPano Video, the drag-and-drop process is pretty great. You can take all 6 of the files labeled Cam1-Cam6 and drop them into the Input Video dialog as soon as you open the program. Then work through the process starting with 'Synchro' and ending with 'Render'. We didn't have a moving monopod, and our dialogue was never far from the GoPro rig, so without worrying about where the playhead sat on the timeline, we clicked 'Use Audio for Synchronization' and it worked every time. Once the software reports "Accurate Synchronization Found," click Apply.
Next up is the stitch. I went into this believing that if Google and GoPro were behind a technology, the stitching software would be relatively simple, and it gives that illusion: all you do is click a position in your video, select the camera you were using, and click 'Stitch'. This gives you a Realtime Preview of all 6 videos together. Remember that this is a rough stitch based on pulling the overlapping frames together; it's nowhere near flawless. Consider this during your shoot: you really want to make sure your characters and interactions aren't landing in between stitch lines. This happened in both of our scenes, but was especially pronounced in Scene 2.
One great thing about the Realtime Preview is that AutoPano Video lets you level the horizon, so you get a pretty good sense of how your whole scene is going to look. The perspective is splayed open like a filleted fish, so it isn't a true representation of how your footage will look in a VR headset, but it's helpful to see the whole scene while editing. Word to the wise: the stitching process is infinitely easier if your horizon line is relatively flat. When the horizon is close to what you want your audience to see, click Apply, then click on another frame to check the continuity. Your horizon line can be further finessed in AutoPano Giga.
While still in AutoPano Video, click Edit on the bottom right of your "Realtime preview" and you'll get to see the frame you're working on in AutoPano Giga, where you'll be able to get a better handle on the stitch.
When you get into APG, click Edit on the top left of your image; then you can use the picture icon at the top to switch between working on the Pano or a Picture. Pano helps you set your horizon and position all of your cameras, while Picture lets you move each individual camera to align the images. If you zoom in while in Picture mode, you can visually match the overlapping layers of video, which is a lot easier for me than working with the control points from Kolor's algorithm. You're essentially changing where each camera overlaps its neighbors on each frame. I assumed that if I got this right on one frame it would carry through to all the others; unfortunately, since your scene has moving parts, you'll have to readjust where your cameras need to overlap frame by frame.
While you're working on this, you can click save and go back into AutoPano Video and see how you're doing and go through the rest of your timeline. When you find a frame you need to re-edit, click the blade on the lower right of the screen and slice the part of your timeline that will change.
The second editing tool we used in AutoPano Giga is 'Markers for Keeping Objects'. This is a tool that I think could use more refining from the folks at Kolor. Currently, you place markers on objects you want to keep in the frame, and the software automatically decides where the cameras stitch together. We had filmed some moments where those intersections landed right on key points of the action, so I would rather have manually masked where I wanted each camera than rely on the software's markers. It took a lot of Resets to get this right.
Following the edits, we used the limited coloring tools in Kolor's AutoPano Video. Click the color icon at the top, check all the checkboxes, and hope the software is seeing what you are. Be sure to use the Exposure Compensation at the bottom if you have some over- or under-exposed areas.
Once the edits were complete, we went in for the render. The following settings are optimized for YouTube 360:
- Width: 3780
- Height: 1890
- Output Type: mp4
- Preset: H.264 4K UHDTV
- FPS: 25 (PAL)
- Video Bitrate: 24000
- Audio Bitrate: 160
- Aspect Ratio: Keep in Pixel Size
- Audio Source: Default
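As a quick sanity check on those numbers: YouTube 360 expects equirectangular footage with a 2:1 aspect ratio, and 3780×1890 is exactly 2:1. A small sketch of that check (the dict and function names are just illustrative, not any tool's API):

```python
# The render settings from the list above, as plain data.
RENDER_SETTINGS = {
    "width": 3780,
    "height": 1890,
    "output_type": "mp4",
    "preset": "H.264 4K UHDTV",
    "fps": 25,              # PAL
    "video_bitrate": 24000,
    "audio_bitrate": 160,
}

def is_equirectangular_2to1(settings):
    """YouTube 360 wants equirectangular frames: width exactly twice height."""
    return settings["width"] == 2 * settings["height"]
```

If you ever change the resolution for a different platform, keeping that 2:1 ratio is the one constraint you don't want to break.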
I took this file into FCPX for some more color correcting and the final export. I dropped the file into the timeline so I could use FCPX's color and audio tools. In Final Cut, just follow your usual linear editing rules and treat this as any other file. Unfortunately, most of the VR world is using Premiere, so we don't have as many plugins or tools available that know exactly what you're working on. When I finished in FCPX, I exported in the Apple Devices format in H.264 and then changed the file extension from .m4v to .mp4.
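That extension swap works because FCPX's Apple Devices export is already an MP4-family container; only the name changes. You can do it in Finder, or script it; a tiny sketch (the filename is hypothetical):

```python
from pathlib import Path

def m4v_to_mp4(path):
    """Rename an FCPX 'Apple Devices' export from .m4v to .mp4.
    The container data is untouched; only the extension changes."""
    src = Path(path)
    dest = src.with_suffix(".mp4")
    src.rename(dest)
    return dest
```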
At this point I popped the video into a free software that Kolor has for viewing called Kolor Eyes. Kolor Eyes allows you to watch your video in the 360° perspective without having to upload it onto YouTube.
If you're happy with what you've done, pop it into the 360 Video Metadata tool, free software you can get from Google, to get your video YouTube 360° ready:
- Click Open
- Find your file
- Click Spherical
- Click Save As:
- Name your file
Now get ready to upload. The great thing is that the metadata necessary to get your file live on YouTube 360° is already injected, so you don't have to do anything special when uploading a 360° video. After you upload, it takes about an hour for YouTube to process the 360° version.
Back in the day when I was learning the linear edit process for films, there was a wealth of information, techniques and tutorials out there. This isn't the case when it comes to 360° video, and that led to a lot of confusion, troubleshooting and experimentation. At the end of the day, it's super satisfying to create something in an era of unknowns; it's a great feeling knowing that what we're experiencing is the early age of the ultimate new medium.
Please feel free to tweet at me or comment below with your best practices or questions.