Reality Capture Alignment Settings, Tips, & Fixes

A few things that might help your data align

Azad Balabanian
8 min read · Jun 6, 2021

Intro

Image alignment is fundamentally the most important step to get right in a photogrammetry scan, as it determines the quality of every subsequent process. Alignment in photogrammetry processing software can be a bit tricky sometimes, even a bit nondeterministic.

Here are a few tips I’ve come across over years of working with Reality Capture for getting image datasets to align properly.

RAW Image Processing

If you captured the dataset in RAW format (available with DSLRs/drones), you should take advantage of the ability to adjust processing settings to reveal details in the images that weren’t visible before.

By doing so, you might reveal details that previously did not have enough contrast with their surrounding pixels to be detected as features.

Using a RAW photo processing software (Lightroom, DxO PhotoLab, etc.), process the entire dataset with these settings:

  • Reduce the Highlights value
  • Increase the Shadows value
  • Adjust Exposure if necessary

Typically I don’t reduce the highlights all the way to -100, but this photo was a bit overexposed

Export the full image dataset with the same adjustments into a folder called “_geometry”, which you’ll use for the Alignment and Reconstruction steps in Reality Capture.

You can export a 2nd set of processed images that are “color corrected” to use for Texturing. I typically bring the highlights and shadows back toward their original positions, adjust the white balance to my liking, and export them to a folder called “_texture”.
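If you’d rather script this step than click through a RAW editor, here’s a minimal Python sketch using rawpy (a LibRaw wrapper) and imageio. The folder names follow the convention above, but the exp_shift value is only a crude stand-in for proper highlight/shadow sliders, so treat it as a starting point rather than the exact workflow described here:

```python
# Minimal sketch: batch-develop a RAW dataset into the two folders described
# above. Assumes rawpy (a LibRaw wrapper) and imageio are installed; exp_shift
# is only a rough stand-in for real highlight/shadow sliders.
from pathlib import Path

import imageio.v3 as iio
import rawpy

SRC = Path("raw")             # hypothetical folder of RAW files
GEOMETRY = Path("_geometry")  # detail-revealing develop for alignment
TEXTURE = Path("_texture")    # natural-looking develop for texturing
GEOMETRY.mkdir(exist_ok=True)
TEXTURE.mkdir(exist_ok=True)

for raw_path in sorted(SRC.glob("*.DNG")):  # adjust pattern to your camera
    with rawpy.imread(str(raw_path)) as raw:
        # Brighten to lift shadow detail for feature detection; exp_shift is
        # a linear exposure multiplier, no_auto_bright keeps it predictable.
        geo = raw.postprocess(use_camera_wb=True, no_auto_bright=True,
                              exp_shift=1.5, output_bps=8)
    with rawpy.imread(str(raw_path)) as raw:
        # Neutral develop for the texture layer.
        tex = raw.postprocess(use_camera_wb=True, output_bps=8)
    iio.imwrite(GEOMETRY / f"{raw_path.stem}.jpg", geo)
    iio.imwrite(TEXTURE / f"{raw_path.stem}.jpg", tex)
```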

Reality Capture has a great feature that lets you use different processed versions of the images for different steps (called Image Layers). This is great because you can use this processed dataset for the alignment and reconstruction steps (aka, the geometry layer), but use a secondary color corrected dataset for texturing (texture layer).

Check out this section of my video tutorial discussing the Image Layers feature in more detail.
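One practical gotcha: the geometry and texture exports need to contain exactly the same shots for the layers to line up. Here’s a quick sanity-check sketch; the folder names come from the convention above, and matching by file stem is my own assumption, so check it against how you actually assign the layers:

```python
# Sanity check: every shot should exist in both layers, matched by file stem.
from pathlib import Path

geometry = {p.stem for p in Path("_geometry").glob("*.jpg")}
texture = {p.stem for p in Path("_texture").glob("*.jpg")}

for stem in sorted(geometry - texture):
    print("No texture version for:", stem)
for stem in sorted(texture - geometry):
    print("No geometry version for:", stem)
if geometry == texture:
    print(f"OK: {len(geometry)} matched image pairs.")
```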

Before / After
Before / After zoomed in. Notice the dirt marks are much more visible.

Bonus: if you’re using DxO PhotoLab (which I use almost exclusively), there’s a setting called Microcontrast that is extremely useful for revealing features that don’t have enough contrast.

DxO PhotoLab Before / After results

Reality Capture Alignment Settings

Sometimes forcing more features to be detected and using a lower image downscale factor can fix your alignment. Use these settings (and if you re-run alignments across many datasets, see the scripting sketch after the list):

  • Max Features per image: 40,000 (default 20,000)
  • Image Overlap: Low (RC uses the full image to extract features)
  • Image Downscale: 2 (I normally use 3)
  • Force component rematch: Yes
  • Preselector features: 20,000 (default 10,000)
  • Detector sensitivity: High or Ultra (if the photos were shot at ISO 100). For noisier photos (when ISO is much higher), use Medium
  • Last resort: Merge georeference components: Yes (only if your photos have GPS tags in them, i.e., from a drone). Be aware that this can introduce bad misalignments.
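Side note for people who re-run alignment a lot: RealityCapture also has a command-line mode, so once you’ve dialed in the settings above in the GUI, you can batch-align datasets headlessly. A hedged sketch driving it from Python; the dataset paths are placeholders, and it sticks to basic documented commands rather than trying to set the alignment parameters from the CLI:

```python
# Hedged sketch: batch-align datasets via RealityCapture's command-line mode.
# Assumes RealityCapture.exe is on PATH; uses only the basic documented
# commands (-addFolder, -align, -save, -quit). Alignment settings are assumed
# to have been dialed in through the GUI beforehand.
import subprocess
from pathlib import Path

DATASETS = [Path(r"D:\scans\church"), Path(r"D:\scans\statue")]  # placeholders

for folder in DATASETS:
    project = folder / f"{folder.name}.rcproj"
    subprocess.run(
        [
            "RealityCapture.exe",
            "-addFolder", str(folder / "_geometry"),
            "-align",
            "-save", str(project),
            "-quit",
        ],
        check=True,  # fail loudly if RealityCapture exits with an error
    )
    print(f"Aligned {folder.name} -> {project}")
```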

Isolating and Aligning Failed Connections

Sometimes, you’re confident that all the images of a dataset should align given the fact that they have plenty of overlap but for whatever reason, they don’t.

Here’s an example of a data that should have all the images align, but has resulted in into two different alignment components (let’s ignore the 3rd component with only 4 photos).

Component 1 (yellow X’s highlight where the Component 2 photos should be)
Component 2 (yellow X’s highlight where the Component 1 photos should be)

The trick to fixing this is to align only the images around the failed connection first, lock their alignment, and then align the rest of the photos. You’re forcing Reality Capture to try to align the weak connection rather than the strong connections within the components.

I first selected all the images in the component (hotkey: CTRL+A) and disabled them (hotkey: CTRL+R). This can also be done by toggling Enable Alignment to Disable in the Selected Inputs box.

Then, using the Camera Lasso (found in Alignment tab > Selection > Camera Lasso), I selected a few of the images around the failed connection and re-enabled them (hotkey: CTRL+R).

Enabled images for Component 1
Enabled images for Component 2

Then, after aligning just the enabled cameras, the failed connection is solved!

Yay!

Once the failed connection is solved, you can re-enable all the disabled cameras and align the full dataset once more.

The result should include the solved connection with all the images from both components aligned. Success!

All images from Component 1 and 2 aligned together

Note: if the connection fails again, you can lock the alignment of the solved connection first before aligning the full dataset. Select the connection’s images and, in the Selected Inputs box, toggle Lock pose for continue to Yes.

Locking the alignment for a set of images

That way, you can guarantee that the connection will not fail when aligning the full image dataset.

Less is More: Use 1/2 of the Total Images Captured

This one might seem a bit weird, but sometimes the less data you use, the better your results can be.

Why might this be? The more photos you have, the more chances there are for a misalignment, which shows up in both the mesh reconstruction and the sharpness of the texture.

When should you use less data than the full dataset? Whenever you have a lot of overlap between your photos with tons of redundancy. For example, if you’re doing videogrammetry, you should be using every n-th frame (one out of every 15 or 30 frames, depending on the video framerate); see the sketch below. Similarly, if the dataset was captured using a photo-interval method on a drone, you might also have more data than you need, so try aligning just half of the photos first and see how the results look before using everything.
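For the videogrammetry case, frame extraction is easy to script. A minimal OpenCV sketch; the stride of 15 is just an example value to tune against your framerate and camera speed:

```python
# Minimal sketch: keep every n-th frame of a video for videogrammetry.
# Uses OpenCV (pip install opencv-python); filenames are placeholders.
from pathlib import Path

import cv2

VIDEO = "walkthrough.mp4"   # hypothetical input video
OUT = Path("frames")
OUT.mkdir(exist_ok=True)
STRIDE = 15                 # keep one frame out of every 15

cap = cv2.VideoCapture(VIDEO)
index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video (or read error)
    if index % STRIDE == 0:
        cv2.imwrite(str(OUT / f"frame_{saved:05d}.jpg"), frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} of {index} frames.")
```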

Pro tip: to select half (or a third) of the photos in a folder, display the photos in the thumbnail tile view, shrink the Explorer window until it displays the thumbnails in 2 columns, then click and drag a box over the 2nd column all the way down.

Congratulations, you have now selected every other image!
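The same subsampling can also be scripted, which avoids the window-resizing trick entirely. A small sketch that copies every n-th image into a subset folder (folder names are placeholders):

```python
# Sketch: copy every n-th image into a subset folder instead of lasso-
# selecting thumbnails by hand. Folder names are placeholders.
import shutil
from pathlib import Path

SRC = Path("_geometry")
DST = SRC.parent / "_geometry_half"
DST.mkdir(exist_ok=True)
EVERY = 2  # 2 keeps half the dataset, 3 keeps a third, etc.

images = sorted(SRC.glob("*.jpg"))
for img in images[::EVERY]:
    shutil.copy2(img, DST / img.name)

print(f"Copied {len(images[::EVERY])} of {len(images)} images to {DST}")
```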

Misalignments — Finding and Dealing with Problematic Images

There are certain scenarios where your dataset fully aligns, but when you finish the mesh reconstruction, you start to notice sections of the mesh that are totally misaligned.

This happens when the alignment algorithm finds a way to align all the images that is statistically plausible but wrong in reality.

Here’s a good example of this scenario using the scan of the Monastery of Archangel Thassos by Dimitrios Vogiatzis:

Sunken parts of the mesh as a result of misaligned images

The way to fix this is to first determine which images are misaligned by using the Inspect tool (hotkey: I).

The Inspect tool shows you the strength of the connections between the aligned photos which provides an easy way to notice any outliers.

In this view, you can see how the highlighted photos are not connected to the rest of the dataset (well, they are, but only through a weak connection, most likely due to GPS alone).

Weakly connected images highlighted

Upon closer inspection of the contents of the photos, I noticed that they have very little overlap with each other (the flight planning software hadn’t accounted for elevation change) and were therefore detracting from the total strength of the component.

The best course of action here was to disable those photos and re-align the rest of the dataset. I wasn’t too worried about missing areas in the scanned location, because the area covered by the problematic images was also covered by another set of images from a different angle (45°). An alternative would have been to align the problematic photos separately with their neighboring photos, lock their alignment, and merge them with the rest of the dataset to see if the overall alignment improves; however, given that they didn’t have proper overlap with their neighbors, it wouldn’t have produced good results.

Aligned dataset without the problematic photos

Here’s the mesh reconstruction result. Success!

No more sunken parts of the ceiling!

Manually Assigning Control Points

Control Points should be the last-resort option for aligning a dataset, the reason being that if placed imprecisely, they can introduce a large amount of misalignment into the model itself.

However, with experience, using control points can be a very reliable way of aligning areas that should be connected.

There are plenty of good tutorials on how to effectively use them, so I won’t go into detail here. Here’s one from Reality Capture:

More to come

Over time, as I come across photogrammetry datasets that have issues with alignment, I will use them to add more tips & tricks here.

Good alignment is the root of every clean photogrammetry scan, so it’s worth the effort to make sure you do things right!

If you have your own tips & tricks (or have feedback/corrections about this post), feel free to contact me with some examples!

Contact / Follow me

Best place to keep up with me and my work is through Twitter. More info about my work can be found on my website as well.


Azad Balabanian

Photogrammetry @realities.io, Host of the ResearchVR Podcast @researchvrcast