
My Geo-Tagging Workflow

February 28, 2019  |    0 comments  |  Apps GPS Logs Workflow

In my last post, wherein I outlined how I added a link to each photo’s location on Google Maps, I mentioned that I thought I’d done a writeup on how I geo-tag photos, but couldn’t find it. After extensive searching of the archives, I can confirm that I don’t seem to actually have ever written about my process. Today, I’ll rectify that oversight.

Let’s take a look at the tools I use, then I’ll walk you through how it all comes together.

First, we need a way to record a GPS log. Typically, these take the form of a .gpx file, a type of XML file that can contain a geotrack: at its most basic, a series of GPS fixes in latitude and longitude, each with a timestamp, which allows software to reconstruct a route. It may also (but doesn't necessarily) contain speed and heading information. A .gpx file can also hold a series of waypoints, which are GPS fixes that aren't linked in time; no track can be generated from these, since there's no way of knowing the order in which the fixes were gathered (their order in the file isn't necessarily the order they were recorded). The timestamps matter because we'll need them later to match the time a photo was taken to a spot on our geotrack.
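To make the format concrete, here's a sketch in Python of a minimal one-segment .gpx track and how the per-point timestamps can be read back out (the coordinates and times are invented purely for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal single-segment track; coordinates and times are made up.
GPX = """<?xml version="1.0"?>
<gpx version="1.1" creator="example"
     xmlns="http://www.topografix.com/GPX/1/1">
  <trk><trkseg>
    <trkpt lat="35.6532" lon="-83.5070"><time>2018-06-02T14:05:00Z</time></trkpt>
    <trkpt lat="35.6540" lon="-83.5061"><time>2018-06-02T14:06:00Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

NS = "{http://www.topografix.com/GPX/1/1}"
root = ET.fromstring(GPX)
# Each <trkpt> is one GPS fix: latitude, longitude and a timestamp.
fixes = [
    (float(pt.get("lat")), float(pt.get("lon")), pt.find(NS + "time").text)
    for pt in root.iter(NS + "trkpt")
]
print(fixes[0])  # (35.6532, -83.507, '2018-06-02T14:05:00Z')
```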

To build a .gpx file while out taking photos, we need a GPS receiver that can periodically record our current location to a file and then export it for consumption by another tool that actually tags the photos. For this, I use a couple of different tools:

When I was using an iPhone as my day-to-day phone, I used an app called MotionX GPS to record my tracks.

Now that I’m using a Google Pixel 2 XL as my phone, I’ve had to find a different app to record tracks, as MotionX GPS isn’t available for Android. I tried a few and eventually settled on GPSLogger. The UI isn’t as pretty as MotionX GPS’s, but it gets the job done.

The next tool we need is a way to merge the location and time data in our .gpx file with our photo data. Since I’m an Adobe Lightroom user (Classic, none of that CC nonsense for me!), I could use Lightroom’s rudimentary built-in geo-tagging feature (a tutorial can be found here), but I prefer some fine control over the process, so I use Jeffrey Friedl’s excellent Geoencoding Support plugin.

This allows us to fine-tune all sorts of fiddly bits in the process, such as correcting for the camera’s time being off a bit from the actual GPS time, which brings us to our next point:

How do I ensure that my time is correct on my camera?

There are a few ways to do it. If you’re lucky, your camera’s companion smartphone app supports synchronizing your phone’s time with your camera’s, as the Panasonic Image App (Android/iOS) does. If you’re not as lucky, you have to sync it manually, which means opening your camera’s settings and setting the date/time to match the phone’s as closely as possible. Fortunately, it doesn’t need to be precise, as most geotagging tools, such as Jeffrey’s Lightroom Geoencoding Support, allow for some “fuzziness” in the time-matching algorithm.
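That fuzzy matching is conceptually simple: for each photo, find the track point with the closest timestamp and reject the match if it falls outside a tolerance window. Here's a minimal sketch of the idea (this is my own simplification, not the plugin's actual algorithm; the function name and two-minute default are invented):

```python
from datetime import datetime, timedelta

def nearest_fix(photo_time, track, tolerance=timedelta(minutes=2)):
    """Match a photo's timestamp to the closest GPS fix in a track.

    `track` is a list of (datetime, lat, lon) tuples. Returns
    (lat, lon), or None if no fix falls inside `tolerance`, i.e.
    the "fuzziness" window. A camera-clock offset correction would
    simply be added to `photo_time` before matching.
    """
    best = min(track, key=lambda fix: abs(fix[0] - photo_time))
    if abs(best[0] - photo_time) > tolerance:
        return None
    return best[1], best[2]

track = [
    (datetime(2019, 2, 1, 10, 0), 35.6532, -83.5070),
    (datetime(2019, 2, 1, 10, 5), 35.6540, -83.5061),
]
print(nearest_fix(datetime(2019, 2, 1, 10, 4), track))  # (35.654, -83.5061)
print(nearest_fix(datetime(2019, 2, 1, 11, 0), track))  # None
```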

Once you’ve tagged your photos, you can manually tag any misses in Lightroom’s Map tab. Now you have a precise log of your adventures and the photos you’ve taken.

In addition, you can open the .gpx file in Google Earth and get a nice map of your adventure. Here’s one from a visit I made to the Great Smoky Mountains last year:

So, there you have it: my geotagging workflow. However, I would be remiss in failing to mention that I’m currently testing a GPS watch (the Suunto Traverse) for tracking, since with it I don’t need to worry about killing my phone’s battery while logging my location. So far, so good. I might do a write-up in the future about my experiences with it.


Focal Length Analysis

March 28, 2013  |    0 comments  |  Cameras Technique Workflow

So, I’ve been contemplating buying a new lens, but I couldn’t decide on what focal length I needed. Did I want 11-16? 24-70? 24-105? 100-400? 600?

I could make arguments for any of these, but I was still indecisive.  So, I decided to see what focal lengths I have been shooting at to guide me.  And the best way to do that would be to get some statistical analysis going.  Luckily, this isn’t terribly difficult to do with the right tools.

I use Adobe Lightroom as my image catalog/workflow manager and I knew that Lightroom’s catalog files are simply SQLite databases, storing everything from file system locations of images to EXIF metadata to develop settings. And buried in that EXIF data is the focal length of every image in the catalog.  To get to my analysis, here are the steps I followed:

  1. Select a Lightroom catalog to do analysis on.  I chose my main 2011-2012 catalog, which would provide roughly 60,000 images to glean information from.
  2. Open the catalog using SQLite Database Browser and find the table that contains EXIF data.  This table is AgHarvestedExifMetadata.
  3. Export to csv.
  4. Open in Excel.  Round each focal length to its nearest whole number (some cameras write extremely precise decimal representations of focal length, but we’re only interested in the whole number).
  5. Group by focal length and sum the number of images in each focal length.
  6. Create a line graph.
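The spreadsheet steps above can also be done directly against the catalog in Python. Here's a sketch, assuming (as in my catalog) that the AgHarvestedExifMetadata table has a focalLength column; the schema isn't officially documented and can vary between Lightroom versions:

```python
import sqlite3
from collections import Counter

def focal_length_histogram(catalog_path):
    """Count images per whole-millimetre focal length in a
    Lightroom catalog (which is just a SQLite database)."""
    con = sqlite3.connect(catalog_path)
    try:
        rows = con.execute(
            "SELECT focalLength FROM AgHarvestedExifMetadata"
            " WHERE focalLength IS NOT NULL"
        )
        # Round away the over-precise decimals some cameras record.
        return Counter(round(fl) for (fl,) in rows)
    finally:
        con.close()

# hist = focal_length_histogram("My Catalog.lrcat")
# hist.most_common(5) would then list the five most-used focal lengths.
```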

And voila!:

[Chart: number of images at each focal length]

As you can see, most of my images fall into the 20-100 range of focal lengths.  Therefore, I would probably get the most use out of something like Canon’s 24-105 L series glass.

Of course, this lens is only f/4, so it’s not the fastest.  I could do more analysis on the apertures I’ve used over the last few years as well, but I know from experience that I mostly shoot landscapes and urban photography at f/8 or higher, so I should be covered.  Today’s cameras’ high-ISO performance, plus the fact that this particular lens has image stabilization good for roughly three stops, should cover me as well.


What RAW Will Get You

March 6, 2012  |    1 comment  |  Technique Workflow

A lot of people are confused when it comes to RAW vs. JPG, so I just wanted to show you a quick before/after of what kind of dynamic range you can get out of RAW. The before is what it looked like out of the camera, while the after is with the exposure boosted. And while I’d never publish the “after” without some serious post-processing to clean up the banding and noise, you can clearly see that an amazing amount of detail and color information is hidden away in the dark areas of the photo. I, of course, settled on a more sensible final exposure that is more interesting than the “after” shot.
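A toy illustration of why that works: a 14-bit RAW file keeps shadow tones as distinct levels that an 8-bit JPG has already merged, so a RAW exposure boost has real data to work with. (The numbers are invented, and real conversion also involves a gamma curve, but the principle holds.)

```python
RAW_MAX = 2 ** 14 - 1   # 16383 levels in a 14-bit RAW file
JPG_MAX = 2 ** 8 - 1    # 255 levels in an 8-bit JPG

def to_jpg(raw_value):
    """Naive 14-bit to 8-bit quantization (gamma ignored)."""
    return round(raw_value * JPG_MAX / RAW_MAX)

def boost(value, ev, max_value):
    """Boost exposure by `ev` stops: each stop doubles the value."""
    return min(value * 2 ** ev, max_value)

# Two distinct deep-shadow tones as the sensor recorded them:
a, b = 40, 44

# The RAW data can take a +3 stop push and keep them distinct...
print(boost(a, 3, RAW_MAX), boost(b, 3, RAW_MAX))  # 320 352

# ...but the 8-bit JPG merged them into one level at save time,
# so no amount of boosting can bring that detail back.
print(to_jpg(a), to_jpg(b))  # 1 1
```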

[Before/after comparison images]

Processing “The Last Supper”

January 17, 2012  |     2 comments  |  Apps Technique Workflow
Pedro Alves asked in a comment on today’s photo if I could explain the processing.  So I thought I’d give it a quick try.
The original raw photo was shot at f/10 at ISO 100 and a shutter speed of 1/100 of a second, using a polarizing filter to darken up the sky a bit.  After importing to Lightroom, I pre-sharpened and adjusted the white balance, giving me this:
Not very exciting, eh?  I decided to tone map it to bring out the shadow and highlight detail in a sort of “faux” HDR process.  Since I hadn’t shot multiple bracketed exposures, which would be necessary for true HDR, I faked it, relying on the pure dynamic range that shooting RAW affords a photographer.
In Lightroom, I created four virtual copies of the photo, giving me five copies altogether, including the original.  I left the original’s exposure value at 0, then set the others at values of +1, +2, -1 and -2 respectively, imitating the bracketed exposures I’d get with a “real” HDR shot.  I then exported these to Photomatix to do the tone mapping, which resulted in this image:
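Conceptually, those virtual copies just rescale the RAW file's linear values; here's a sketch of the idea (a real raw converter also applies tone curves and highlight recovery, so this is only the core arithmetic):

```python
# EV offsets used for the five "virtual copy" exposures.
EV_OFFSETS = [0, +1, +2, -1, -2]

def fake_bracket(linear_value, max_value=1.0):
    """Simulate bracketed exposures from one RAW pixel value:
    each stop of exposure doubles (or halves) the linear value,
    clipping at the sensor's maximum."""
    return [min(linear_value * 2 ** ev, max_value) for ev in EV_OFFSETS]

print(fake_bracket(0.2))  # [0.2, 0.4, 0.8, 0.1, 0.05]
```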
This gave me great detail in the shadows, but killed the sky.  I didn’t really care, though, because I still had work to do.  I imported the original photo with the dark sky I liked and the tone-mapped photo I’d created in Photomatix into Photoshop for further work.
The first step was to copy the tone-mapped version into a new layer over the original.  I then created a layer mask that allowed me to use a black paintbrush to “punch through” the tone-mapped layer to the original photo below.  Using a brush with opacity set to roughly 50%, I slowly brought the original sky into the tone-mapped layer.  Once I was satisfied, I applied the layer mask, resulting in a photo with tone-mapped statues and mountains but the original dark sky.  I then used Topaz Adjust to bring out a bit of detail in the mountains and statues, because I feel the tone-mapping process leaves photos looking a bit flat detail-wise.
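The layer-mask step is just per-pixel alpha blending between the two versions. A simplified sketch with single-channel pixel lists (the values are made up for illustration):

```python
def blend(top, bottom, mask):
    """Per-pixel layer-mask blend: where mask is 1.0 the top
    (tone-mapped) layer shows through; where it's 0.0 the brush
    has "punched through" to the bottom (original) layer."""
    return [m * t + (1 - m) * b for t, b, m in zip(top, bottom, mask)]

tonemapped = [0.9, 0.9, 0.9]   # bright, tone-mapped sky
original   = [0.1, 0.1, 0.1]   # dark original sky
mask       = [1.0, 0.5, 0.0]   # untouched, one 50% brush pass, fully painted

print(blend(tonemapped, original, mask))  # [0.9, 0.5, 0.1]
```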
My next step was to convert to black and white.  For this, I used Nik Software’s excellent Silver Efex Pro 2.  I started with the built-in “high structure” preset, then added a bit of extra structure and a little bit more contrast, while dropping the exposure down a notch or two.  Then, I used Silver Efex’s control points feature to darken up the sky just a bit more while leaving the mountains and statues unaffected. Once this was done, I saved back to Lightroom, did some final noise reduction and a bit of sharpening and posted it to the site.
Here’s a before/after:


Tools of the Trade – FlickStackr

August 17, 2011  |    0 comments  |  Apps Technique Workflow

As part of my photoblogging/sharing process, I generally schedule photos to be published at 05:30 on my photoblog, where they sit and get viewed and commented upon all day. Then, in the evening, after 19:00 CDT (or 18:00 CST), I upload them to Flickr, giving my site roughly 13-14 hours of exclusivity. The reason for uploading to Flickr as close to these times as possible is that that’s when Flickr’s “day” starts (it runs on GMT), which means uploading then is the best way to maximize daily photo views, which factor into the mysterious algorithm Flickr uses to calculate things like “Interestingness” (not that I particularly worry about these things). Also, most people in North America seem to do their Flickr viewing in the evenings, so this timing hits a nice spot when my photo will be landing in their “Contacts” photostream.

But how to do the upload? Some people use Flickr’s native upload functionality, but I find it rather limited. Another option–and one that I occasionally use when uploading from my Mac or my PC–is Flickr Uploadr. Flickr Uploadr has a lot of nice features, including the ability to tag photos and put them in sets, but it’s missing one of the most important: the ability to add a photo to groups from the application. That means that after you upload, you still have to go into Flickr and add photos to groups from its interface. Which would be okay, but it’s not a favorite task because, for some reason, I constantly get this error when trying to add a photo to groups on the site itself:

(Flickr! Fix your code!)

Another issue with trying to stick to these times is that I’m usually walking our dog, Winston, between 19:00 and 20:00. Luckily, I have an iPhone with me and can upload on the go. I used to use the Flickr app, but, like Flickr Uploadr, it doesn’t let you add photos to groups. So, after a bit of research, I discovered FlickStackr.

FlickStackr is everything Flickr’s app should be:

  • Profile view
  • Actions/Activity view

But the feature most relevant to this blog post is “Upload”; here are screencaps showing how you can set titles, tags, groups, geolocation and more when uploading:

As you can see, it’s the perfect iOS companion for Flickr users.  And it’s a universal app, so it will work on your iPad at native resolution!


© 1993-2019 Matt Harvey/75Central Photography - All Rights Reserved • Contact license@75central.com for image licensing and other queries.