A few weeks ago, I was on the road quite early to capture flowers right after sunrise. Unfortunately, they were not blooming when I arrived, because of the cold temperatures we had had during the previous couple of weeks.
On my way back, I stopped at this huge machine standing in a vast hole in the ground. I’m standing at the edge of the hole. In the back, you can spot another of these machines right above the edge of the excavation. Also, compare it with the white car: it’s a pickup truck, so it’s not that small. I’ve never been so close to such a huge machine before. It’s used as a stacker to put the unusable earth back into the hole, because they only want the brown coal.
I have already published images from that digging pit a couple of times. In this post, published about 10 years ago, you can get a bit of an overview. Or here you can see how it looks at night, while here you can find an image of the pit taken with a fisheye lens.
Although I hate how badly they treat the earth by digging brown coal out of the ground and burning it inefficiently to produce electricity, I find these huge machines really fascinating. Nevertheless, I’m looking forward to the day they are no longer needed.
Recently, I started analyzing my images a bit. You know, nearly all cameras write some metadata into the image files in addition to the image you’re capturing. I dug all this information out of my developed images but left the undeveloped raw files alone. In this analysis, I included only landscape, macro, astro, and wildlife images, but no people photography like portraits, models, weddings, or similar work.
I installed the open-source software DigiKam on my computer and configured an image directory. All of my developed images are stored in that directory, in a separate folder for each trip. You can find out a bit more about my storage principles in one of my past articles.
DigiKam then read all the metadata from the JPG files and stored it in an SQLite database. After quitting DigiKam, I was able to open the SQLite database with an SQL browser and select all the information I wanted. I first duplicated the database and then started normalizing the information: over time, I have used different software products to develop my images, and not all of them used exactly the same spelling when naming the different cameras and lenses.
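Out of curiosity, here’s roughly how such a query can look in Python. A small sketch only: the table and column names (`ImageMetadata`, `model`, `lens`) are assumptions about the DigiKam schema and may differ in your version, so check your `digikam4.db` in an SQL browser first. The sketch runs against a tiny in-memory stand-in for the duplicated database.

```python
import sqlite3

# Sketch of the kind of query I ran against the duplicated DigiKam database.
# The schema (ImageMetadata with model/lens columns) is an assumption; verify
# it against your own digikam4.db before pointing the connection at the file.
con = sqlite3.connect(":memory:")  # use "digikam4-copy.db" for the real file
con.execute("CREATE TABLE ImageMetadata (model TEXT, lens TEXT)")
con.executemany(
    "INSERT INTO ImageMetadata VALUES (?, ?)",
    [("Body 2", "90mm f2.8 macro"),
     ("Body 2", "90mm f2.8 macro"),
     ("Body 4", "16mm f2.8 fisheye")],
)

# Count developed images per camera/lens combination, busiest pairs first.
rows = con.execute(
    "SELECT model, lens, COUNT(*) AS images "
    "FROM ImageMetadata GROUP BY model, lens ORDER BY images DESC"
).fetchall()
for model, lens, images in rows:
    print(f"{model:10s} {lens:20s} {images}")
```

With the real database, the same GROUP BY gives you exactly the lens-per-body table shown below.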
I was very interested in finding out my favorite focal length over time. So, the first step was selecting the different camera bodies. Here they are, listed with their sensor size and resolution in megapixels.
used from .. to
2008 – 2009
adv. level APS-C
2009 – 2017
2012 – 2014
2015 – 2020
Hint: body 2 was used a lot for wildlife in addition to the common jobs like landscape, portrait, model, event, and wedding photography, until it was replaced by body 4. From that point on, I used it only for wildlife until it was replaced by body 3, which is used almost exclusively for wildlife. That’s the reason for the very high shutter count of body 3. Bodies 1 and 2 are already sold, and body 4 was replaced because of a product recall. I still own body 5, but only use it for portraits or weddings because of its remote flash capabilities. The shutter has a rated lifetime of 150,000 exposures (bodies 5 and 6) and 200,000 exposures (body 3), respectively. So, no need to worry.
In the next table, we have the overall usage of each lens in combination with one of the camera bodies. The totals are as interesting as the number of images per camera body.
10.5mm f2.8 fisheye
16mm f2.8 fisheye
90mm f2.8 macro
100mm f2.8 macro
105mm f2.8 macro
12mm full manual lens
Hint: I don’t own all of the lenses used here. Some of them I owned at a certain time and have already sold, while others were borrowed. But the cameras I borrowed for testing purposes are not included in these statistics.
Hint 2: the totals per camera in table 2 don’t correspond to the number of shutter releases in table 1. In table 1, I have the total number of shutter releases from the counter inside the camera. The total per camera in table 2 is the number of developed images. Sometimes I take safety shots and develop only one, or I do HDR images, where 3 or more differently exposed raw files are merged into one final image to benefit from the expanded dynamic range. Astro images are quite similar to HDRs, but here tens or even hundreds of raw files are merged. In wildlife, portrait, and wedding photography, you also take more images than you need, for different purposes.
Hint 3: I left out all portrait, wedding, event, model, and engagement photos because I already know the favorite lens for this purpose: the 85mm prime, with the 50mm prime as runner-up, followed by the 35mm prime. These lenses are quite old; they were made for film cameras (pre-digital). They are perfectly sharp and don’t have the distortions that all modern lenses have. (You usually don’t notice this, because the firmware of the lens and the camera corrects the distortion automatically, more or less well. But the corrections have an influence on the sharpness.) That’s why I prefer the prime lenses.
Next, I will look at my favorite focal lengths (shown as 35mm equivalents), apertures, and ISO values.
The last step will be a script correcting the wrongly labeled images.
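Such a correction script can be as simple as a lookup table mapping each spelling variant onto one canonical name. The variants below are made-up examples, not my real camera and lens names; a real run would apply `normalize` to every row of the duplicated database.

```python
# Sketch of the label-fixing script: map the spelling variants that different
# raw developers wrote into the metadata onto one canonical name.
# The variant strings are invented examples for illustration only.
CANONICAL = {
    "NIKON D300": "Nikon D300",
    "90.0 mm f/2.8": "90mm f2.8 macro",
    "90mm F2.8 Macro": "90mm f2.8 macro",
}

def normalize(label: str) -> str:
    """Return the canonical spelling, falling back to a trimmed original."""
    label = label.strip()
    return CANONICAL.get(label, label)

print(normalize("NIKON D300"))    # -> Nikon D300
print(normalize("90.0 mm f/2.8")) # -> 90mm f2.8 macro
```

Applied via an SQL UPDATE per variant, this keeps the duplicated database untouched in structure while unifying the names for the statistics.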
As I announced last week: the Orion nebula. Orion, the hunter, is present in the winter sky in the northern hemisphere, and the nebula can be found in the sword hanging from his belt. Orion is located to the left of the Pleiades.
You can see the nebula even with your bare eyes, but better with binoculars or a telescope.
I was out a couple of nights ago to photograph the Orion nebula (ok, literally it’s two weeks ago tomorrow). It’s located in the sword hanging from Orion’s belt, right ahead. I liked the situation, how the path leads you directly to Orion. So, I took a wide-angle image first. Next week, I’ll show you the nebula.
You can easily see how much light pollution we have here. And this is a location with only very little light pollution compared to the situation in the wider area. You might remember my complaints from the past, e.g. when I talked about the comet Neowise.
I took this image on the Saturday before last, near the end of this year’s winter. But let’s start at the beginning.
Friday night two weeks ago, we got severe freezing rain. For a week, we had had temperatures below 0°C, so the ground was frozen everywhere. The upper layers of air were warmer than the lower ones, so we were supposed to get rain instead of snow. But because of the frozen ground and the low temperatures in the lower air layers, the rain froze as soon as it reached the ground. Very dangerous conditions when out on the streets. All plants got wet and were encased in ice, because the rain started freezing right after coming to rest on the ground, on twigs, and on the streets and walkways.
The next day, the rain changed to snow, and from Sunday on, the landscape turned into a winter wonderland. Very soft and quite dry snowflakes lay everywhere, even in the lowlands. You know, I’m at 200m above sea level. We got about 15-20 cm of snow that weekend. Over the next few days, only a few additional flakes came to accompany those that had already arrived. Starting on Thursday, the weather changed again: the clouds vanished and the sun came out more and more. On Saturday, we had a cloudless blue sky and temperatures around -10°C (down to -20°C at night).
Perfect conditions for a winter hike!
At around 16:30, when the sun was already quite low (sunset at 17:42), I noticed this golden glow in the trees. Do you remember I told you about the freezing rain? These ice casings are the reason why the trees catch all the golden light and glow so much.
Yesterday, I already showed you an image of how the ice encased the twigs.
Recently, I was talking with someone about photography. Because that guy lives near Frankfurt, I checked where and when I had published my images taken in Frankfurt. Surprisingly, they are not here on my blog. The posts are still online but don’t have any images in them. I had put the images on a separate gallery server that no longer exists and only set a link to that location in the post. So, this is kind of a repost.
I was in Frankfurt for a training in November 2009. As I would have been alone in a hotel each night, I took my tripod and my camera with me and planned to go out after the training to take some night shots in the city. That was my first trip for night photography. The difficulty is balancing the bright lights against the extreme darks while having quite long exposure times.
First, I went to a certain skyscraper where you can go to the top of the building for a view over the city. The sky was promising; unfortunately, it was extremely windy. Setting up my tripod as planned was impossible: the wind simply moved the tripod away. So, I dialed in a quite high ISO to get my shots hand-held without the tripod. The ultra-wide-angle lens allowed me to use a quite open aperture and still get a good depth of field, while keeping the exposure time at a manageable value for hand-holding the camera despite the heavy wind.
At that time, I wasted a lot of quality, not only because I had to use a high ISO instead of my tripod; I also skipped photographing in RAW and shot JPG instead. For this post, I took out the original images and retouched them as much as possible. But there wasn’t much left to recover.
Whenever possible, go the extra mile and photograph in raw. You gain so much more quality.
After leaving the tower, I also walked a bit through the city. Now I was able to use my tripod. These images were taken at ISO 200 with exposure times of several seconds each.
What have I learned from that trip?
I should have split that trip into parts to have the nice blue night sky in all images
I should have closed the aperture more to get nice star effects around the small lights
I should have taken more than one shot with different exposure times while leaving the other settings unchanged (bracketing)
In July, I was accepted as a guest blogger at nikonrumors.com. Here’s the full post for you, too.
When traveling, I love being at the sea. I love rugged coasts more than sandy beaches. I can spend hours photographing waves rolling in and the water sputtering between the rocks. These sprays, unfortunately, appear in different places but never all together at the same time. So, I end up with many, many images of the same scene, but with differences in the details. Wouldn’t it be nice to combine these images?
When the single images have been taken using a tripod, all of the frames are identical and could be merged with HDR or DRI software like Photomatix or Aurora HDR. But these programs automatically remove the parts that change from frame to frame. So, what else can be done?
In the early years of digital photography, the dynamic range of the sensors wasn’t as good as it is nowadays. To increase the dynamic range, one took a series of identical frames from a tripod with different shutter speeds, to get images where certain parts were exposed correctly while accepting that other parts were either underexposed or overexposed. After a mild development in the digital darkroom and exporting the images in TIFF format to preserve as much information as possible, the frames were imported as a group of layers into e.g. GIMP, Photoshop, or similar software able to work with layers and layer masks, to create the final image. Because of improved sensors and specialized software taking over the hard work for you, you usually don’t need to create higher-dynamic-range images this way anymore. But this workflow is still useful.
In case you’re kind of discouraged now because I’m talking about layers and masks, don’t stop reading. It’s easier than it seems 🙂
I’m describing the necessary steps using GIMP, because everyone can download the software for free from here (http://gimp.org/downloads). It’s available for macOS, Windows, and Linux. For Photoshop, the steps are nearly identical.
First, create a folder on your disk and put the original images in it. This step isn’t strictly necessary, but it eases the process and keeps things clear. I used three images to create my final image, but you can include as many as you want to merge, at least two. Keep in mind that the details you want to reveal shouldn’t overlap.
Now you can start GIMP, click on “File”, and choose “Open as Layers”. In the dialog, navigate to the folder you created in the first step and select all the images in it that you want to merge. Now you have a pile of images, but you only see the one on top of the pile. By clicking on the eye icon left of each image, you can make a single image (or more than one) visible or invisible; with the eye icons switched on, you always see only the uppermost image of the pile. I recommend re-sorting the images now, so that the image with the most details you want to preserve is the bottom image. For all of the other images, you have to add a mask: the layer mask.
There are two kinds of these masks: white (= full opacity) and black (= full transparency). What does this mean? Think of a sheet of paper you lay on your photo. What do you see? Right, you only see the sheet of paper, but not the photo. Now imagine taking scissors, cutting a hole in the paper, and laying it back on your photo. You can still see the white paper, but through the hole you can see a part of the image underneath. That’s the principle of layer masks: white means the cover, while black means the hole in the cover.
When you add a black layer mask to all image layers except the background, you can only see the background. Make sure all images were taken from a tripod and are neither cropped nor rotated to level the horizon in post-processing. Otherwise, you must align them now to make sure they lie exactly on top of one another. There are tools available to do so: adjust the opacity of the upper (= moving) image and switch to the 100% view. There are tutorials available online explaining these steps in detail, so check them out if necessary. I recommend leaving such adjustments to the final image and doing as little as possible to the source images.
Pick the brush tool from the toolbox and select white as the color. Click on the black layer mask and paint white where you want to make an additional detail visible. Do this for all details and on all layer masks. Simply paint white where a certain detail is located that you want to uncover and include in the final image. You only need to paint over the detail contained in the image whose layer mask you’re working on. In case you painted too much and want to revert it, change the color to black and paint over the white, or use the eraser tool. Think of it as making a collage, where you cut out parts of other media and stick them onto a background image.
After uncovering all the details you want to include in your final image, save your working file as XCF (the native file format of GIMP) or PSD, and then export it to TIFF. Although you could also merge all layers into one and continue working with that file, I recommend exporting and doing the final work on the resulting TIFF file: removing dust spots with the clone tool, leveling the horizon, and cropping the image. This way, you can come back and adjust your work without having to start from the beginning.
When everything is fine, export it to JPG and you’re done!
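If you want to see the layer-mask principle in numbers: here is a tiny sketch that blends two “layers” the same way the mask does, using nested lists of grayscale values (0-255) instead of real image files. A white mask pixel (255) reveals the upper layer; a black one (0) keeps the layer underneath.

```python
# The layer-mask principle from the steps above, reduced to arithmetic on
# tiny grayscale "images" represented as nested lists of 0-255 values.
def composite(bottom, top, mask):
    """Blend two layers: where the mask is white, the top layer wins."""
    out = []
    for b_row, t_row, m_row in zip(bottom, top, mask):
        out.append([
            round(b * (1 - m / 255) + t * (m / 255))
            for b, t, m in zip(b_row, t_row, m_row)
        ])
    return out

bottom = [[10, 10], [10, 10]]      # base exposure
top    = [[200, 200], [200, 200]]  # frame containing the spray/detail
mask   = [[0, 255], [0, 0]]        # paint white only where the detail is

print(composite(bottom, top, mask))  # -> [[10, 200], [10, 10]]
```

Painting with a soft brush simply produces gray mask values between 0 and 255, which is why the transitions in GIMP look smooth rather than cut out.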
You might have noticed some images online on social media created with the Luminar 4 beta. I was also able to do a short test of the beta version recently. I gave it a try, hoping that the flexibility and image quality of Luminar 2018 would survive and be brought to a new level, in terms of supporting more recent cameras, too.
When starting Luminar 4 and looking at the user interface, it’s very similar to the older versions. As a user of the older versions, you’ll feel at home at once. Slowly, you’ll discover the improvements, as most of them are under the hood.
For those of you not familiar with older Luminar versions: it’s photo editing software in the sense of processing and enhancing an image just the way the lab did in the old film days. It’s not for doing compositions and montages. It’s for processing and developing raw files as well as enhancing JPG files: lightening the darks, correcting the horizon, removing dust or noise, correcting distortions, or enhancing contrast. All the tools are organized into 4 groups (essential, creative, portrait, and pro), plus raw development (canvas) and levels. All edits are done without changing the original file.
A very important improvement is the progress in the AI filters. AI is short for “artificial intelligence” and means the software analyzes the image and tries to improve it toward a very natural mood. I tried it with some of my own images, and I was very impressed. Here in this post, I’ve included an image taken during my recent trip to the Baltic Sea. It was taken on a windy fall afternoon. The sky was mostly grey with some small bluish areas in between. Not what you want to have in your images. The bracket fungus on this tree is located on the shadow side of the tree. So, we have a kind of backlit scene, which led to a blown-out sky in the image while having the correct exposure for the tree and the fungus.
You know, I’m a raw shooter and don’t use the out-of-camera JPGs, because I know the raws contain much more detail, which is lost when shooting in JPG only.
The next image is still the same raw, but handed over to the AI of Luminar 4. I know other raw development software can do the same, but it’s not as easy as with Luminar 4.
No further processing, just a click on the AI to analyze and improve the sky. Done!
But the AI can do much more. You can also use it to replace the sky. Although I don’t need this in my workflow and don’t like such editing in general, I tried it for you. Luminar 4 comes with a set of different skies, but you can also use your own. So, you could take a photo of the sky in addition to your main photo and combine them in Luminar 4 to get the final image.
There are many more options to try and use to improve your images. In general, the improvements look very natural, and much better than they look after processing with HDR software.
I don’t want to conceal a disadvantage of Luminar 4: just like in Luminar 3, all your edits are saved in the Luminar catalog. Maybe saving the edits as separate files will come back, as several testers brought this up as a complaint. Remember, it came back with the release of Luminar Flex as a result of the complaints about the same behavior when Luminar 3 was released. Luminar 3 and Luminar Flex are the same, except for how the edits are saved: Luminar 3 saves them in the catalog, while Luminar Flex saves them as separate files.
Advertisement, because of the included links:
Currently, Skylum offers Luminar 4 with a launch discount: either as a pre-order, or you can get Luminar 3 at once and Luminar 4 as soon as it is released. Remember, the release is just around the corner: Nov. 18th, 2019! So hurry to save some money and get Luminar 4 as soon as possible. As in the past, Luminar 4 is for Windows as well as macOS. You can use it either as stand-alone software or as a plug-in for Lightroom, Photoshop, Photoshop Elements, or Apple Photos. On the other hand, other plug-ins like Aurora HDR or the Nik filters are usable from inside Luminar 4.
Nevertheless, if you can live with this disadvantage, Luminar 4 is fantastic software for bringing out the details in your images without too much work. So, it can ease your workflow when improving your images! Not convinced yet? Skylum offers a trial period with a 30-day money-back guarantee!
Recently, I got a review copy of Luminar 3. You know, you get it as a free upgrade if you own a copy of Luminar 2018. If you have paid for an older version, you can get it at a reduced price.
This version finally brings the long-awaited feature “Library”.
Unfortunately, this is simply a remake of an editing module where all edits are saved in a library instead of in single files as in the previous versions. But first things first.
When I started Luminar 3 right after installation, I was asked where my (raw) images are. I save them in a structure on my hard disk until editing the job is finished. In the past, I explained that principle in a separate blog post. But only JPG files popped up in the light-table view of Luminar 3. After a while, I decided to restart Luminar, and surprisingly, my raw files also appeared on the screen. But they didn’t appear in chronological order. Instead, the images of all folders were mixed: raws mixed with JPGs, and all of them seemed to follow no specific order. What chaos! I was unable to find my raws; instead, I accidentally opened an already edited image. I was totally confused.
Later, I found out that I can browse the folder structure on the right to find a specific folder. But it still offers all image files (raw and JPG). Until now, I can’t say how to tell Luminar to ignore sub-folders.
When you’re done editing an image, you can export it to JPG. But you can’t save the edited file: all of your edits are saved to a database. I have no idea how to back up this database in order to finish the edits on a different computer. I don’t even know how to remove the edits from the database when I’m done with a certain pile of images (i.e. finished a job). In the past, I simply moved the source folder to a NAS and the edited files to a different NAS, while the final JPGs went to my file server.
While Skylum broke Luminar this way, they didn’t bring the really needed part: a library for the finished JPGs where you can store the metadata: GPS, camera, lens, all of the EXIF data, and all the necessary categories as well as the tags. This is the important information I need in a database, together with low-quality thumbnail images and an external link to the file server where the final JPG file resides, to find certain images quickly.
Up to now, I can only say: it’s unusable! Stay with Luminar 2018 and don’t do the upgrade! It’s simply a copy of the mechanism already known from Adobe Lightroom (but without the necessity of subscribing to a plan that continuously costs you monthly fees).
Here starts an ad:
So, if you’re still willing to give it a try, you can get it here. The trial is free. The final version will be available from Dec. 19th, 2018.
When using the code “SOLANER” during checkout, you can save a few bucks.
You know, I was in Switzerland in August, where I did several hikes. When on a hike in a beautiful landscape, you can’t always stop and wait for the perfect light conditions. So, you have to cope with the light you have. In my case, we had a wonderful sunny day with only a few tiny white clouds ahead of us. But we were below a huge gray cloud.
While we walked uphill along this creek, I liked the perspective very much. But because of the light conditions, the image would have come out very ugly: either I’d get a dark foreground (my main subject) with a beautiful background and sky, or I’d get a perfectly exposed foreground with a white sky and an overexposed mountain range in the back.
The solution is to take at least 3 images of the same frame: 1 over-exposed (at least +1 EV), 1 under-exposed (at least -1 EV), and 1 in the middle. Without a tripod (who takes a tripod along on a hike through the mountains?), it’s a challenge to get these three images without any movement.
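The underlying math is simple: each +1 EV doubles the exposure time, and each -1 EV halves it. A small sketch of computing such a bracket from a metered base shutter speed (the 1/125 s base is just an example number):

```python
# Bracketing sketch: exposure time scales by 2 per EV step,
# so a -1/0/+1 EV bracket is half, base, and double the base shutter speed.
def bracket(base_seconds, steps=(-1, 0, 1)):
    """Return shutter speeds for the given EV offsets relative to base."""
    return [base_seconds * 2 ** ev for ev in steps]

# A 1/125 s base exposure bracketed at -1/0/+1 EV:
print(bracket(1 / 125))  # -> [0.004, 0.008, 0.016]
```

Most cameras do exactly this automatically when you enable auto exposure bracketing, which also keeps the three frames as close together in time as possible.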
Back at home, you can take these three images and combine them on your computer. You can do it by hand using e.g. GIMP. Or you can use a specialist for this job. One of these specialists is Aurora HDR by Skylum. I have already written about this software in the past, and I like it. Although I don’t take HDR images very often, I use it for this kind of job every now and then, because it’s so easy to get good images from bad lighting situations. Fortunately, Aurora HDR is able to compensate for slight movements when the images were taken without a tripod. It is also able to eliminate ghost fragments when a part of the image moved (e.g. animals, people, or cars). And it is able to read the raw images of my camera, so I don’t have to develop them first.
Recently, I got a review version of the upcoming Aurora HDR 2019 and checked it out with some recent images like the one above. First of all, the user interface looks familiar compared to the previous versions. The auto-alignment, anti-aliasing, and ghost detection work very well, just like before. After combining the source images, the user interface changes and offers a couple of presets in different categories, similar to the previous versions. The presets give you a good starting point for finalizing the image.
Although I don’t like those ugly, over-saturated, typical HDR images, I like the natural results I get with Aurora HDR. If you want, you can get those typical HDR looks as well as very natural images. The results with Aurora HDR are much better than just raising the shadows and lowering the highlights in the raw editor.
For the next few days, you can preorder your copy of Aurora HDR at a discount. Owners of a previous version of Aurora HDR get the new version at a reduced price.
(This post contains advertising for Aurora HDR 2019 by Skylum)
The upgrade was as easy as usual: simply dragging the app into my Applications folder. I had the feeling the software doesn’t take as long to start as before. The interface seemed familiar, without any noticeable changes. All presets seemed to still be available. Also, the workflow is the same.
So, I took some of my images from my crane trip last fall and developed them from raw again.
I was quite impressed by the results when comparing the outcome with the one from last fall using Luminar v1.0.0: more details, better results in the mid-tones, and much better noise reduction. The noise reduction is so good now that I’m considering deleting the old app “Noiseless CK”.
For me, good noise reduction is crucial. When doing wildlife photography, I have to use high ISO settings because I want very short shutter speeds to get sharp images. You know the exposure triangle: ISO, shutter speed, and aperture. I usually have to use long lenses, which are not as fast as shorter ones because of physical limitations. Additionally, the longer the lens, the smaller the depth of field. This brings in another level of light shortage.
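To put numbers on that light shortage, here is a small sketch (the apertures and ISO values are just example numbers): going from f/2.8 to f/5.6 costs two stops of light, so the ISO has to quadruple to keep the same shutter speed.

```python
# Exposure triangle sketch: light lost to a slower (higher f-number) lens
# must come back via ISO if the shutter speed has to stay short.
from math import log2

def stops_lost(f_fast, f_slow):
    """Aperture stops lost when moving from f/f_fast to f/f_slow."""
    return 2 * log2(f_slow / f_fast)

def iso_needed(base_iso, stops):
    """Each lost stop doubles the ISO required for the same shutter speed."""
    return base_iso * 2 ** stops

lost = stops_lost(2.8, 5.6)   # e.g. a long f/5.6 zoom vs. an f/2.8 prime
print(lost)                   # -> 2.0
print(iso_needed(400, lost))  # -> 1600.0
```

That jump from ISO 400 to ISO 1600 is exactly where good noise reduction starts to matter.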
Some of the other new features are:
higher speed during import and processing
automatic distortion correction
improved demosaicing and green balance
support of DCP profiles (Mac)
higher speed when importing raw images (Mac)
the functionality of the Windows version is brought in line with the Mac version by adding support for batch processing, free transformation, rotation, and mirroring
Luminar 2018 “Jupiter” comes as a free upgrade for all current users of Luminar 2018. Users with a previous version of Luminar are eligible to upgrade at a reduced rate. Those of you who don’t have Luminar already might consider giving it a try. There’s a free evaluation version available for download for macOS and for Windows.
When using the code “SOLANER”, you can save some money and get your perks anyway 😃.
You know, I have mentioned Luminar and Aurora by Macphun several times here on my blog, and I use them on a regular basis for my images, as well as the older product Tonality Pro.
Macphun will now officially be known as Skylum!
To celebrate this, they’ve prepared special exclusive bonuses and freebies, which are included with every purchase of Luminar or Aurora HDR. Both software products are available for Mac and Windows.