Forums > Photography Talk > Deconvolution as a sharpening approach

Photographer

Kevin Connery

Posts: 17824

El Segundo, California, US

I just finished reading John Russ’ Image Processing Handbook. My brain now hurts, even though I skimmed past many of the sections not directly applicable to 2D visible-light photography.

However, he did discuss the use of deconvolution as a sharpening technique, and I was wondering if anyone was familiar with any non-scientific utilities to perform this on digital images. Reindeer Graphics’ Fovea Pro has an interactive deconvolver, but the package is overkill (and at $800, a bit expensive for this purpose). The Refocus utility in their Optipix package ($150) seems reasonable, but I’m curious whether anyone has experience with that product or a similar deconvolution tool.

From what he wrote, it seems as though it would be the best solution for removing the softening due to a digital camera's anti-aliasing filter--better than any of the common capture sharpening approaches.

Apr 17 09 04:43 pm Link

Photographer

Lumigraphics

Posts: 32780

Detroit, Michigan, US

Kevin Connery wrote:
I just finished reading John Russ’ Image Processing Handbook. My brain now hurts, even though I skimmed past many of the sections not directly applicable to 2D visible-light photography.

However, he did discuss the use of deconvolution as a sharpening technique, and I was wondering if anyone was familiar with any non-scientific utilities to perform this on digital images. Reindeer Graphics’ Fovea Pro has an interactive deconvolver, but the package is overkill (and at $800, a bit expensive for this purpose). The Refocus utility in their Optipix package ($150) seems reasonable, but I’m curious whether anyone has experience with that product or a similar deconvolution tool.

From what he wrote, it seems as though it would be the best solution for removing the softening due to a digital camera's anti-aliasing filter--better than any of the common capture sharpening approaches.

ImageJ (started life as NIH Image) is an F/OSS scientific image processor that will do this and a bazillion other tricks.

http://rsbweb.nih.gov/ij/

Apr 17 09 06:44 pm Link

Photographer

K E S L E R

Posts: 11574

Los Angeles, California, US

Is this the same thing that they use to make an out of focus shot in focus?

I saw a demo a while back where they showed a car license plate that was out of focus... ran it through some software and presto!  The car's license plate number came up.

Apr 17 09 07:10 pm Link

Photographer

Lee K

Posts: 2411

Palatine, Illinois, US

Try Focus Magic.

Apr 17 09 08:09 pm Link

Photographer

190608

Posts: 2383

Los Angeles, California, US

Has anyone played with the Custom filter? smile

Apr 17 09 08:41 pm Link

Photographer

QuaeVide

Posts: 5295

Pacifica, California, US

PS CS 3&4 -> Smart Sharpen

Apr 17 09 09:15 pm Link

Photographer

Paul Brecht

Posts: 12232

Colton, California, US

Unshake - free:

http://www.hamangia.freeserve.co.uk/

Paul

Apr 17 09 09:21 pm Link

Retoucher

Kevin_Connery

Posts: 3307

Fullerton, California, US

RONALD N TAN wrote:
Has anyone played with the Custom filter? smile

I used to, but since around v6 or so I've been using the free Reindeer Graphics version. It supports non-integer coefficients in up to a 7x7 neighborhood instead of the built-in 5x5 integer-only one. But that's convolution, not deconvolution.
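
For anyone who hasn't poked at kernels outside Photoshop, here's a rough sketch of what applying a custom convolution kernel amounts to, in Python with NumPy/SciPy (the coefficients below are made-up illustrative values, not any plugin's preset):

    import numpy as np
    from scipy import ndimage

    # Illustrative 5x5 kernel with non-integer coefficients.
    kernel = np.full((5, 5), -0.05)
    kernel[2, 2] = 2.5                    # strong centre weight
    kernel /= kernel.sum()                # normalize so overall brightness is preserved

    image = np.random.rand(256, 256)      # stand-in for a grayscale image
    sharpened = ndimage.convolve(image, kernel, mode='reflect')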

I'll take a look at focusmagic and do some further digging. I hadn't realized Smart Sharpen included deconvolution--that, too, needs some research.

Thanks.

Keep 'em coming!

Apr 17 09 09:28 pm Link

Photographer

WMcK

Posts: 5298

Glasgow, Scotland, United Kingdom

Paul Brecht wrote:
Unshake - free:

http://www.hamangia.freeserve.co.uk/

Paul

I've tried that, and while it produces some increase in clarity, its effects are very limited.

Apr 18 09 04:17 am Link

Photographer

Robert Randall

Posts: 13890

Chicago, Illinois, US

Lumigraphics wrote:

ImageJ (started life as NIH Image) is an F/OSS scientific image processor that will do this and a bazillion other tricks.

http://rsbweb.nih.gov/ij/

That is a cool program, thanks for posting the link, and thanks Kevin for starting the thread. Always fun to find out how little I really know.

Apr 18 09 09:21 am Link

Photographer

nwprophoto

Posts: 15005

Tonasket, Washington, US

Kevin Connery wrote:
I just finished reading John Russ’ Image Processing Handbook.

Expensive book. Worth it?

Apr 18 09 09:39 am Link

Photographer

Robert Randall

Posts: 13890

Chicago, Illinois, US

nwprophoto wrote:

Expensive book. Worth it?

cheaper here!

http://usedmarketplace.borders.com/book … hn+C.+Russ

Apr 18 09 09:47 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

I started to play with this a couple of weeks ago while trying to work with some badly OOF / motion-blurred shots which came my way.  I don't have all the sites I was referencing in front of me (will dig them up if there's interest), but my own opinion came to be that the Richardson-Lucy algorithm gave the cleanest + most effective results in deconvolving images for capture and / or output sharpening.  I'd argue that some of the advantage DMF users see comes from a well-tuned version of a similar algorithm applied in camera, before the raw data is written.  Raw Therapee is one free program which offers an R-L implementation up to 2.5px radius - good enough for almost all of us, with the exception perhaps of Mr. Randall and his new toy wink.
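
If anyone wants to try R-L outside of Raw Therapee, scikit-image ships an implementation.  A rough Python sketch, with the caveat that the small Gaussian PSF below is just a guess standing in for the real lens / AA-filter blur (you'd want to estimate or measure it):

    import numpy as np
    from skimage import data, restoration

    def gaussian_psf(size=7, sigma=1.0):
        # Guessed point-spread function; not a measured one.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return psf / psf.sum()

    image = data.camera() / 255.0                    # any grayscale image, scaled to 0..1
    sharpened = restoration.richardson_lucy(image, gaussian_psf(), 20)   # ~20 iterations

More iterations sharpen further but also amplify noise, which is the usual R-L trade-off.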

FWIW (or more likely not), playing with all this has led me to modify my web output techniques markedly to use the following, stupidly complex, process.  I use this vs. R-L simply because it gives me a bit more control over how much I sharpen the finer details.
  1.) GB of original image at ( (original-long-axis resolution / new resolution) * 0.25)px. [reduce resizing artifact]
  2.) Resize (bicubic) to new resolution.
  3.) Duplicate image x2.
  4.) GB first (bottom) duplicate @ 0.3-0.6 (wherever I would expect USM to just give me artifacts).
  5.) Select second duplicate.
  6.) Apply Image - Subtract / GB'd layer as source / scale 2 / offset 128.
  7.) Set second duplicate to Linear Light.
  8.) Create Curves adjustment layer linked to second duplicate - points at I/O (1,0) + (255, 254); set to 50% opacity & merge.
  9.) Smart Sharpen second duplicate - 300-500% @ (0.5 * GB radius) / remove GB.
  10.) Smart Sharpen first duplicate to taste - 25-100% @ GB radius / remove GB.
  11.) Merge duplicate layers, set blend mode to Luminosity, adjust opacity to taste.

And yeah, I use an action for that.
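
For the curious, a rough float-math sketch (Python / NumPy, not Photoshop) of the frequency split at the heart of steps 4-8; the sharpening of the two layers (steps 9-10) and the final Luminosity merge are left out:

    import numpy as np
    from scipy import ndimage

    def split_frequencies(img, radius):
        # Photoshop's GB "radius" isn't exactly a Gaussian sigma, but close
        # enough for a sketch.  img is a 0..1 float array.
        low = ndimage.gaussian_filter(img, sigma=radius)   # step 4: blurred duplicate
        high = (img - low) / 2.0 + 0.5                      # step 6: subtract, scale 2, offset 128
        return low, high

    def linear_light(base, blend):
        # step 7: Linear Light recombine at full opacity
        return np.clip(base + 2.0 * (blend - 0.5), 0.0, 1.0)

    img = np.random.rand(512, 512)          # stand-in for the resized image
    low, high = split_frequencies(img, radius=0.5)
    # steps 9-10 (sharpen `high` hard, `low` gently) would go here
    recombined = linear_light(low, high)    # equals img to float precision

In float math the split and recombine are exact; the scale 2 / offset 128 is only there to keep the difference layer inside the displayable range, and the 8-bit rounding it introduces is what the 16bit note (and, I believe, the step 8 Curves tweak) is about.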

Edit to note, the idea to separate the fine detail sharpening is not my own.

Edit 2: Action can be downloaded here (link is to wrong action! sorry!).  It does assume that it is already working on a duplicate copy of the final image.

Edit 3: Action is optimized for working in a 16bit environment and in current form the spatial frequency separation demonstrates only a peak 1/32767 variation from the original image.  If 8bit types would like help perfecting the sep without going to 16bit, drop a note and I'll be happy to share findings.

Apr 18 09 11:58 am Link

Photographer

Monito -- Alan

Posts: 16524

Halifax, Nova Scotia, Canada

GB = ?
DMF = ?

Apr 18 09 12:01 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Gaussian Blur
Digital Medium Format

Sorry - acronyms are an occupational hazard smile.

Apr 18 09 12:03 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Links to some examples using deconvolution as the sharpening technique:

Two worthy sets of comparisons here:
  Richardson-Lucy Deconvolution + others @ Open Photo Forums
  R-L vs USM in Raw Therapee

Explanation for blurring your images before downsizing here: Down Sampling Methods

And a bit more on the downsampling issue + sharpening in a workflow context on OPF.

Apr 18 09 08:41 pm Link

Retoucher

Kevin_Connery

Posts: 3307

Fullerton, California, US

nwprophoto wrote:

Expensive book. Worth it?

Not unless you're a math geek or involved in image analysis. While the author works to keep the math load down, it's not going to be easy reading for anyone unfamiliar with at least the basics of image processing under the hood.

It's not aimed at photographers. It's aimed at forensic analysts, astronomers, biologists, and so on: scientists who, as users of images, need to extract the most usable information from an image's underlying data. Many of the techniques are wonderful for isolating aspects of an image from what would be--in that situation--noise, but those techniques would in most cases destroy the image's value as a photograph. (Some examples would be removing texture from the surface under a fingerprint, or removing lines from the background of a check to permit computer comparisons of the handwriting, or edge detection, or element correlation, or...)

I've been following Jim Hoerricks' Photoshop Forensics blog, and figured there might be some techniques applicable to photography. There are, but most of them are most readily available via more user-friendly routes. Deconvolution, moire removal via FFT, 'top hat' noise handling (dust removal) and a few others still tend to be very uncommon. (But not entirely unknown: focusmagic, Optipix, ImageJ, and a few others support them.)

As for 'worth it': I got my copy from the library. It's useful, and I'm glad I read it, but it's not something I need to own. If I worked in a scientific lab, I'd probably want my own copy.

Apr 19 09 12:01 am Link

Photographer

epo

Posts: 6196

Columbus, Ohio, US

lots of cool new stuff to look at here.  Thanks everyone.

Apr 21 09 12:28 pm Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Sean Baker wrote:
I started to play with this a couple of weeks ago while trying to work with some badly OOF / motion-blurred shots which came my way.  I don't have all the sites I was referencing in front of me (will dig them up if there's interest), but my own opinion came to be that the Richardson-Lucy algorithm gave the cleanest + most effective results in deconvolving images for capture and / or output sharpening.  I'd argue that some of the advantage DMF users see comes from a well-tuned version of a similar algorithm applied in camera, before the raw data is written.  Raw Therapee is one free program which offers an R-L implementation up to 2.5px radius - good enough for almost all of us, with the exception perhaps of Mr. Randall and his new toy wink.

FWIW (or more likely not), playing with all this has led me to modify my web output techniques markedly to use the following, stupidly complex, process.  I use this vs. R-L simply because it gives me a bit more control over how much I sharpen the finer details.
  1.) GB of original image at ( (original-long-axis resolution / new resolution) * 0.25)px. [reduce resizing artifact]
  2.) Resize (bicubic) to new resolution.
  3.) Duplicate image x2.
  4.) GB first (bottom) duplicate @ 0.3-0.6 (wherever I would expect USM to just give me artifacts).
  5.) Select second duplicate.
  6.) Apply Image - Subtract / GB'd layer as source / scale 2 / offset 128.
  7.) Set second duplicate to Linear Light.
  8.) Create Curves adjustment layer linked to second duplicate - points at I/O (1,0) + (255, 254); set to 50% opacity & merge.
  9.) Smart Sharpen second duplicate - 300-500% @ (0.5 * GB radius) / remove GB.
  10.) Smart Sharpen first duplicate to taste - 25-100% @ GB radius / remove GB.
  11.) Merge duplicate layers, set blend mode to Luminosity, adjust opacity to taste.

And yeah, I use an action for that.

Edit to note, the idea to separate the fine detail sharpening is not my own.

Edit 2: Action can be downloaded here.  It does assume that it is already working on a duplicate copy of the final image.

Edit 3: Action is optimized for working in a 16bit environment and in current form the spatial frequency separation demonstrates only a peak 1/32767 variation from the original image.  If 8bit types would like help perfecting the sep without going to 16bit, drop a note and I'll be happy to share findings.

https://www.modelmayhem.com/po.php?thre … st11166541

big_smile

Oct 07 09 07:42 pm Link

Photographer

Warren Leimbach

Posts: 3223

Tampa, Florida, US

Lumigraphics wrote:
ImageJ (started life as NIH Image) is an F/OSS scientific image processor that will do this and a bazillion other tricks.

http://rsbweb.nih.gov/ij/

I had never heard of deconvolution or ImageJ.  I just stumbled on this thread, downloaded ImageJ and noodled around.  I couldn't find a "Deconvolve" command, but I did find Process>> Filter>> Convolve.

I don't know if this qualifies as proper deconvolution sharpening, but it did have a nice sharpening effect.


Here's what I did:

1)  Open an image in ImageJ.
2)  Process>> Filter>> Convolve - the result looks similar to the 'Glowing Edges' neon chalk drawing filter in Photoshop.
3)  Save the image under a new name.
4)  Close ImageJ and open both the original and convolved images in Photoshop.
5)  Using the original image as the background layer, copy and paste the convolved image onto Layer 2.
6)  Set Layer 2's blend mode to "Soft Light" and Opacity to about 5-10%. Opacity needs to be set carefully to avoid a glowing halo.

The final result looks like the original with pretty decent edge sharpening.  I am not sure how it stacks up to other sharpening techniques yet.  Is anybody else sharpening photos this way?
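
If it helps to see what's going on under the hood, here's a rough Python / NumPy sketch of the same idea - convolve with the 5x5 edge kernel ImageJ shows in its Convolve dialog, then blend the result back over the original at low opacity (using one common Soft Light formula, which may differ slightly from Photoshop's):

    import numpy as np
    from scipy import ndimage

    kernel = np.full((5, 5), -1.0)
    kernel[2, 2] = 24.0                   # the default kernel in ImageJ's Convolve dialog

    def soft_light(base, blend):
        # One common Soft Light formula; Photoshop's exact math may differ.
        return np.where(blend <= 0.5,
                        2 * base * blend + base ** 2 * (1 - 2 * blend),
                        2 * base * (1 - blend) + np.sqrt(base) * (2 * blend - 1))

    img = np.random.rand(512, 512)        # 0..1 grayscale stand-in
    edges = np.clip(ndimage.convolve(img, kernel, mode='reflect'), 0.0, 1.0)

    opacity = 0.08                        # roughly the 5-10% range above
    sharpened = (1 - opacity) * img + opacity * soft_light(img, edges)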



Sample PSD file available for download here:
http://www.pixoasis.com/va.php?hash=18d … 6c96f3b04b

Nov 09 10 09:28 pm Link

Photographer

Monito -- Alan

Posts: 16524

Halifax, Nova Scotia, Canada

Warren Leimbach wrote:
I had never heard of deconvolution or ImageJ.  I just stumbled on this thread, downloaded ImageJ and noodled around.  I couldn't find a "Deconvolve" command, but I did find Process>> Filter>> Convolve.

I don't know if this qualifies as proper deconvolution sharpening, but it did have a nice sharpening effect.

Deconvolution is, in principle, convolution with a reverse (inverse) kernel.  The original image was blurred by convolution with some kernel, and applying the reverse kernel will deblur it.  However, some information is lost in the process, so we can't expect perfection.

The original convolution might be performed by the lens or other optical components ("protectant" filters for example), or atmospheric haze or heat turbulence, etc.  A lens is a kind of parallel optical computer, performing a computation on each ray of light simultaneously.
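
A toy sketch of the "reverse kernel" idea in Python / NumPy, for anyone curious: blur with a known kernel, then divide it back out in the frequency domain.  The small eps is there because the blur nearly wipes out some frequencies - that is the lost information mentioned above (boundary effects are ignored in this toy example).

    import numpy as np
    from scipy import signal

    def gaussian_kernel(size=9, sigma=1.5):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    img = np.random.rand(256, 256)                       # stand-in image
    psf = gaussian_kernel()
    blurred = signal.fftconvolve(img, psf, mode='same')  # the forward convolution (the "blur")

    # "Reverse" the kernel by dividing in the frequency domain,
    # with a small regularization term to avoid dividing by ~0.
    psf_pad = np.zeros_like(img)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    eps = 1e-3
    deblurred = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))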

There is a technique called homomorphic deconvolution that I studied many years back in university.  With it, one can analyze the image, automatically extract some information about the function that blurred it, and use that to construct a deconvolution kernel to sharpen the image.  In examples, unreadable license plate numbers became readable.  It applies to Gaussian-type blurs as well as motion blur.

Russ's book is quite good as a compendium of techniques.  I got a copy a few years ago.

Nov 10 10 03:35 am Link

Photographer

Warren Leimbach

Posts: 3223

Tampa, Florida, US

Monito -- Alan wrote:
Deconvolution is simply convolution with a reverse kernel.

Interesting.  So how do I reverse a kernal?


In ImageJ when I choose Process>>Filter>> Convolve  I see this grid:

-1 -1 -1 -1 -1
-1 -1 -1 -1 -1
-1 -1 24 -1 -1
-1 -1 -1 -1 -1
-1 -1 -1 -1 -1

  Is that the kernal?

If I input different values, say positive numbers instead of the negative numbers, will that "reverse" the kernal?

Nov 10 10 04:34 am Link

Photographer

Monito -- Alan

Posts: 16524

Halifax, Nova Scotia, Canada

Warren Leimbach wrote:
Interesting.  So how do I reverse a kernal?

Yes, the number array is the kernel (note spelling).

I don't know, sorry.  It's been a while since I worked with any of that stuff and I can't find my copy of Russ.  I think inverting a kernel is not simple, because some kernels lead to numerical instabilities when inverted.
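
One concrete way to see the problem, as a quick Python / NumPy check: the kernel shown above sums to zero, so its frequency response is exactly zero at DC (it throws away the image's overall brightness entirely), and a response that touches zero has nothing you can divide by to get that information back.

    import numpy as np

    kernel = np.full((5, 5), -1.0)
    kernel[2, 2] = 24.0                  # the grid from the Convolve dialog above
    print(kernel.sum())                  # 0.0 -> no inverse exists at that frequency

    padded = np.zeros((64, 64))
    padded[:5, :5] = kernel
    response = np.abs(np.fft.fft2(padded))
    print(response.min())                # ~0 (the DC term) -> that information is simply gone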

Nov 10 10 05:16 am Link

Photographer

NothingIsRealButTheGirl

Posts: 35726

Los Angeles, California, US

I have been using Focus Magic for this for some time. I think it's great in tiny 1 or 2 pixel amounts on the L channel in Lab.

http://www.focusmagic.com

Nov 10 10 08:15 am Link