Forums > Digital Art and Retouching > HighPass Sucks (+ solution)

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Damien Menard wrote:
Thanks so much for the separation walkthrough! I played with some of the bandstop-type techniques and didn't like the look, but just being able to work on the different frequencies using the usual tools has made things so much easier.

I tried to repeat Sean's process on the high-frequency layer in order to further separate the image, but didn't have much luck. It seems like blurring the high-frequency layer just results in a blurry highpass-looking thing, instead of reverting it back toward the unseparated image.

Would anyone be able to post a walkthrough similar to Sean's for further separating the image after going through Sean's technique?

Up above on the previous page I posted a link to a "Sharpen.jsx" script for Photoshop that will do further separations. It currently does three separations, giving you three High Frequency layers, each clipped with a Curves adjustment layer. You can then adjust the curves for more or less contrast on that frequency, as well as adjusting opacity.

I don't have much time now, but I can maybe later give a rundown of what it does.

Oct 08 09 11:17 am Link

Photographer

Damien Menard

Posts: 3

Portland, Oregon, US

Photons 2 Pixels Images wrote:

I don't have much time now, but I can maybe later give a rundown of what it does.

Thanks, I appreciate it.  Running curves on individual frequencies is a cool idea, but right now I'm looking for something that'll keep the combined layers looking as close to the untouched background layer as possible.

Oct 08 09 11:37 am Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Damien Menard wrote:

Thanks, I appreciate it.  Running curves on individual frequencies is a cool idea, but right now I'm looking for something that'll keep the combined layers looking as close to the untouched background layer as possible.

If you adjust the curves to lower the contrast (inverse "S" curves) along with the opacity, you can achieve this. The default is set to oversharpen. I am working on a method of determining the exact settings to get an exact replica of the Background, but so far it seems to be elusive.
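To see why full-strength Linear Light reconstructs exactly while reduced opacity softens the detail, here's a toy NumPy sketch (not the script itself; the flat "low" layer is just a stand-in for a huge Gaussian Blur):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))                   # toy "image", values in 0..1
low = np.full_like(img, img.mean())        # stand-in for a huge Gaussian Blur
high = (img - low) / 2 + 0.5               # HF layer as stored by the Scale-2 subtraction

def linear_light(base, blend, opacity=1.0):
    """Photoshop's Linear Light (base + 2*blend - 1), mixed at an opacity."""
    ll = np.clip(base + 2 * blend - 1, 0.0, 1.0)
    return base * (1 - opacity) + ll * opacity

full = linear_light(low, high)         # 100% opacity: exact reconstruction
half = linear_light(low, high, 0.5)    # 50% opacity: the detail comes back halved
```

Dropping opacity scales the detail the HF layer adds back, which is why the curves/opacity combination can be tuned toward (but does not automatically equal) the original.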

As for further separating the High Frequency layer, I don't think it will work since it's a 50% neutral gray layer type.

If I figure anything else out, I'll let you know. smile

Oct 08 09 12:02 pm Link

Photographer

PANZERWOLF

Posts: 68

Vienna, Wien, Austria

Sean Baker wrote:
Findings / Technique: In my own experimentation, I've found that HP gives differences as high as 2670/32k per pixel when separating high and low frequency information.

what is your exact method to separate HF and LF with HP?
in 8bit i found HP to give slightly better results in some areas, like out-of-focus fine hair detail, at least to my eyes
example (layered tif crop,

Oct 10 09 07:08 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

PANZERWOLF wrote:
what is your exact method to separate HF and LF with HP?
in 8bit i found HP to give slightly better results in some areas, like out-of-focus fine hair detail, at least to my eyes
example (layered tif crop,

Oct 10 09 12:04 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Photons 2 Pixels Images wrote:

If you adjust the curves to lower the contrast (inverse "S" curves) along with the opacity, you can achieve this. The default is set to oversharpen. I am working on a method of determining the exact settings to get an exact replica of the Background, but so far it seems to be elusive.

As for further separating the High Frequency layer, I don't think it will work since it's a 50% neutral gray layer type.

If I figure anything else out, I'll let you know. smile

I'm not sure if I'm on the same sheet as you guys are discussing here, but it's possible to create near-infinite levels of separation without meaningful loss (albeit loss does become additive per iteration).  Here is a multi-layer TIFF (~2.8MB) containing 6 orders of frequency separation with a cumulative loss of roughly (6,5,4) [R,G,B] (16bit) in the worst areas.  This was done by duplicating and re-blurring the 'LF' layer repeatedly.  Though the provided file is in 8bit for size purposes, all separation was done in 16bit for accuracy.  Hope that helps.
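The repeated re-blur idea can be sketched numerically (a toy NumPy illustration; the crude box blur stands in for Gaussian Blur, and any linear blur behaves the same way for the recombination):

```python
import numpy as np

def box_blur(a, r):
    """Crude box blur as a stand-in for Gaussian Blur (any linear blur works)."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode='edge')
    out = np.zeros_like(a)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / k ** 2

rng = np.random.default_rng(1)
img = rng.random((32, 32))

# Each order: blur the current LF remainder, keep the difference as a band.
bands, low = [], img
for r in (1, 2, 4):                 # increasing radius per iteration
    blurred = box_blur(low, r)
    bands.append(low - blurred)     # HF band for this order
    low = blurred                   # re-blur what's left on the next pass

rebuilt = low + sum(bands)          # LF + all bands telescopes back to the original
```

In float math the sum telescopes exactly; the per-iteration loss seen in Photoshop comes from integer rounding at each separation step, not from the math itself.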

Oct 11 09 08:08 am Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Sean Baker wrote:

I'm not sure if I'm on the same sheet as you guys are discussing here, but it's possible to create near-infinite levels of separation without meaningful loss (albeit loss does become additive per iteration).  Here is a multi-layer TIFF (~2.8MB) containing 6 orders of frequency separation with a cumulative loss of roughly (6,5,4) [R,G,B] (16bit) in the worst areas.  This was done by duplicating and re-blurring the 'LF' layer repeatedly.  Though the provided file is in 8bit for size purposes, all separation was done in 16bit for accuracy.  Hope that helps.

Yes, you're on the same page. That's basically what the latest script I posted does. It gives a GB dialog for the first separation, then automatically separates at 1/2 that radius for the second, then 1/4 radius for the third and stacks the HF layers onto the last GB layer.

I think what he's saying, and what I've noticed, is when you do this you will see an additive effect on the HF layers so it won't look the same as the original. I've noticed the same thing, but usually will adjust the curves and/or opacities of the HF layers to bring it back to the same appearance. This allows me to filter out specific frequencies while still retaining some of the detail.

I've also noticed that the appearance differs depending on the stacking order. If you stack top to bottom HF1, HF2, HF3 it looks different than if you stack them in reverse order.

I'm trying to figure out if there is a mathematical relationship between the opacities of the HF layers and how they are stacked that will result in an image appearance the same before and after separation regardless of how many separations are used.

Oct 11 09 08:53 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Photons 2 Pixels Images wrote:

Yes, you're on the same page. That's basically what the latest script I posted does. It gives a GB dialog for the first separation, then automatically separates at 1/2 that radius for the second, then 1/4 radius for the third and stacks the HF layers onto the last GB layer.

I think what he's saying, and what I've noticed, is when you do this you will see an additive effect on the HF layers so it won't look the same as the original. I've noticed the same thing, but usually will adjust the curves and/or opacities of the HF layers to bring it back to the same appearance. This allows me to filter out specific frequencies while still retaining some of the detail.

I've also noticed that the appearance differs depending on the stacking order. If you stack top to bottom HF1, HF2, HF3 it looks different than if you stack them in reverse order.

I'm trying to figure out if there is a mathematical relationship between the opacities of the HF layers and how they are stacked that will result in an image appearance the same before and after separation regardless of how many separations are used.

My gestalt is that the issue with the script is coming from the fact that each separation isn't being done from the same base image, as well that the LF layer isn't retained under it, making perfect recreation of the original impossible.  The unfortunate truth of GB is that it isn't a perfect frequency separation, but only an approximation.  Hence, running a 4px GB on itself repeatedly will continue to change the base image.  Equally, running multiply on separate copies will not only result in keeping extra copies of the very fine details, but will also result in different 'versions' or 'interpretations' of the intermediate frequencies.  The mathematical issue with layer ordering I suspect has to do with the way we minimize the fine details' numerical values in the technique, leading to their being averaged away by the larger structures in recombination.  Frustrating to be sure, but I'm not sure there's a better way with the available blend modes and their implementation - maybe something else to hope for in CS5?
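The "GB is only an approximation" point can be seen concretely (toy sketch, box blur standing in for GB): an ideal brick-wall low-pass applied twice would be a no-op, but a soft-cutoff blur keeps removing more mid frequencies on every pass:

```python
import numpy as np

def box_blur(a, r):
    """Crude soft-cutoff blur, standing in for Gaussian Blur."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode='edge')
    out = np.zeros_like(a)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / k ** 2

rng = np.random.default_rng(2)
img = rng.random((32, 32))

once  = box_blur(img, 4)
twice = box_blur(once, 4)   # blurring the blur is NOT a no-op: the cutoff is
                            # soft, so each pass keeps changing the "LF" layer
changed = not np.allclose(twice, once)
```

This is exactly why re-running the same radius on the LF layer continues to change the base image rather than converging.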

Oct 11 09 09:19 am Link

Photographer

A Personal Travesty

Posts: 539

Hoover, Alabama, US

There's so much information in here that this post may be redundant or rendered moot by stuff that's come since the initial explanation.

I've been using a modification of the methods involved here for sharpening for a while, and as I recently upgraded my system to Windows 7, I needed to rebuild the action I used to do it.

I thought I'd share the steps here in case they might be of value.

The main difference in my method is that once I have the channel separation, I make two copies of it and use those to further isolate the highlights the separation captures, as well as the shadows.  This allows me to adjust both separately (fine-tuning the results to the image).

Here are the steps, which have been saved to an action that expects to start with a 16bit complete image layer (so make a stamp of all the layers if you have adjustments between this action and your base image):

1) Copy Layer - Rename to Gaussian Blur
2) Copy Layer again - Rename to Separation
3) Select Gaussian Blur layer
4) Apply Gaussian Blur to the desired level
5) Select Separation layer
6) Apply Image using the Gaussian Blur layer, Inverted, Add, Scale 2
7) Go into Channels
8) Make a copy of the red channel (any channel; they should all be virtually identical)
9) Rename to HL Mask
10) Apply curves, raising shadows to 128 (blacking out all detail less than middle gray) and lowering highlights to 191 (accentuating highlight details for the mask)
11) Make a second copy
12) Rename to SH Mask
13) Invert colors in SH Mask (so that the mask highlights shadow detail)
14) Repeat step 10 on SH Mask (the exact same levels will now remove highlight details and accentuate shadows in the mask)
15) Load HL Mask as a selection (Ctrl-click on the channel icon)
16) Return to the Layers palette
17) Create a Curves adjustment layer (the selection will automatically become its mask)
18) Rename the adjustment layer to Highlights
19) Load SH Mask as a selection
20) Create a Curves adjustment layer
21) Rename it to Shadows
22) Delete the Gaussian Blur and Separation layers

A lot of steps, but what you should have at the end is two curves you can use to make isolated adjustments to the shadow edges and highlight edges of your image.  The radius of those edges was determined by the value you used in the gaussian blur, and the amount is controlled by your curve.  This allows you to make very fine tuned adjustments to the sharpness of both the highlights and shadows in your image.
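For the curious, the pixel math behind the action can be sketched in NumPy. This is my reading of it, not the action itself; in particular, interpreting the curves in steps 10/14 as an input-levels move (black point 128, white point 191) is an assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((16, 16)) * 255               # toy image on an 8-bit scale
blur = np.full_like(img, img.mean())           # stand-in for the Gaussian Blur layer

# Step 6 -- Apply Image, GB layer Inverted, Add, Scale 2:
separation = (img + (255 - blur)) / 2          # = (img - blur)/2 + 127.5

def edge_mask(ch, lo=128.0, hi=191.0):
    """Assumed reading of step 10: black out everything below `lo`,
    push everything above `hi` to white, ramp in between."""
    return np.clip((ch - lo) / (hi - lo), 0.0, 1.0) * 255

hl_mask = edge_mask(separation)        # bright where detail is lighter than the LF
sh_mask = edge_mask(255 - separation)  # invert first, then the same curve
```

The two masks then drive the Highlights and Shadows Curves layers, so each edge polarity gets its own sharpening control.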

Thanks again for the info that became the foundation of this technique.

Oct 11 09 09:54 am Link

Photographer

Brian T Rickey

Posts: 4008

Saint Louis, Missouri, US

I am wondering if there is a paper-saving way of printing this thread?

Oct 11 09 10:15 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Brian T Rickey wrote:
I am wondering of there is a paper saving way of printing this thread?

I'd go through and just pull out the posts which you find most valuable, though I'm sure there are some print whizzes with better ideas than that.  And I'm equally curious to read them wink.

Oct 11 09 10:48 am Link

Photographer

PANZERWOLF

Posts: 68

Vienna, Wien, Austria

Sean Baker wrote:
Looking at the file you posted up (love that you gave an example btw) it looks like you used the 16bit technique for the method outlined in this thread on an 8bit image, thus imparting the slight difference from the original to the result.

no, i used your 8bit method (in 8bit mode) and compared it to my HP method

Sean Baker wrote:
OK, technique when using the actual HP filter should be: Create two copies of the image.  Run HP on the top copy; set blend to Overlay+100% or LL+50%.  Run GB on the bottom copy; blend mode Normal+100%.  The problem is that the image doesn't look the same as it did before, illustrated in the example posted here.

oh, i see
but setting the LL layer to 50% opacity is not the same as reducing the contrast of the layer to 1/2!
both overlay/100 and LL/50 give truly horrible results (resulting in your high difference), but that's not what i'm doing

in my tif file you can see the HP layer (just HP, no alterations) on LL, 100% opacity,
and on top of it the curves layer that lowers the contrast
in 8bit, this is (at least) as accurate as your method
in 16bit, i find yours to be a tad more accurate in most cases, but i guess for both these are mere rounding errors, not by far the extent shown in your difference example

am i correct that your initial HP bashing and the figure 2670/32k was based on an incorrect HP method, or am i still missing something?

Oct 11 09 01:25 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

PANZERWOLF wrote:
no, i used your 8bit method (in 8bit mode) and compared it to my HP method

Gotcha.

PANZERWOLF wrote:
oh, i see
but setting the LL layer to 50% opacity is not the same as reducing the contrast of the layer to 1/2!
both overlay/100 and LL/50 give truly horrible results (resulting in your high difference), but that's not what i'm doing

This is an interesting and important point which I don't recall having been elucidated previously (my apologies to the poster if it has) - the math by which both blend modes work does in fact impart a difference at the high and low ends when using modified opacity settings.  Of course, the way I look at it, it's just more reason to not use the HP filter (see below).
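The point checks out numerically (toy sketch in 0..1 terms): without clipping, Linear Light at 50% opacity and a half-contrast curve under Linear Light at 100% both reduce to base + blend - 0.5, so they agree; the moment LL clips, they diverge:

```python
import numpy as np

def ll(base, blend):
    """Linear Light in 0..1 terms, with the clip Photoshop applies."""
    return np.clip(base + 2 * blend - 1, 0.0, 1.0)

base  = np.array([0.2, 0.5])
blend = np.array([0.95, 0.6])    # the first sample is extreme enough to clip

# (a) LL at 50% opacity: the clip happens BEFORE the mix with the base
opacity50 = 0.5 * base + 0.5 * ll(base, blend)

# (b) halve the layer's contrast with a curve, then LL at 100% opacity
half_contrast = ll(base, (blend - 0.5) / 2 + 0.5)
```

So the two approaches only differ at the high and low ends, exactly where the 100%-opacity LL result would clip.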

PANZERWOLF wrote:
in my tif file you can see the HP layer (just HP, no alterations) on LL, 100% opacity,
and on top of it the curves layer that lowers the contrast
in 8bit, this is (at least) as accurate as your method
in 16bit, i find yours to be a tad more accurate in most cases, but i guess for both these are mere rounding errors, not by far the extent shown in your difference example

This appears to be due in large part to the example and radii which you are employing.  Differences are very small until you really push the radii, at which time larger ones become apparent.  Using your methodology, I had considerably more issue with difference from the original, particularly in extremely bright / dark areas of the fine detail.  I can post example files if you'd like to see them - HP + GB + Curves gave me as much as 75/256 difference; the 8bit technique at the outset of this thread showed 1/256 maximum, admittedly above what is posted there (I'll go back to edit).

PANZERWOLF wrote:
am i correct that your initial HP bashing and he figure 2670/32k was based on an incorrect HP method, or am i still missing something?

2670/32k was on the basis of a curve-corrected HP filter utilization IIRC, but again all differences are heavily impacted by radii employed - I'd have gotten much more or less depending on the radius I'd chosen.  One of the advantages of the techniques outlined is that their maximum 'damage' to a file is independent of the radius used.

Oct 11 09 02:08 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Intro
Just a note to let everyone know that, in a completely heterosexual way, I could kiss Panzerwolf right now -- I believe he has unlocked the manner in which the HP filter works, and why it is broken (*hint hint* Adobe, this is a quick code fix).  The results I'll explain are not exact to the HP filter's output, but I'll try to explain that discrepancy as well.

Methods
Want to mimic its errors?  Try the following in any bit depth:
  1.) Create a new image, pick your size (preferably large; remember we want to stress the system).
  2.) Render some 'angry clouds' (hold down Alt while choosing Filter->Render->Clouds).
  3.) Duplicate thrice.
  4.) On the bottom-most layer, run GB @ something ridiculously high.
  5.) On the middle layer, run the HP filter at the same radius.
  6.) Select the topmost layer and choose Apply Image.  Select Subtract blend mode, not inverted, the GB'd layer as Source, Scale of 1, offset of 128.
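The steps above can be sketched numerically (toy pixel values, not a real file); the constant bias from the integer Offset shows up immediately:

```python
import numpy as np

# Toy pixels: an "image" and its blurred copy, on an 8-bit scale
img  = np.array([ 10.0, 100.0, 128.0, 200.0, 250.0])
blur = np.array([ 30.0,  90.0, 128.0, 210.0, 240.0])

# Step 6 -- Apply Image: Subtract, Source = GB layer, Scale 1, Offset 128
subtract = np.clip(img - blur + 128, 0, 255)

# The true midpoint of 0..255 is 127.5, but the Offset field only takes
# integers, so every unclipped pixel carries a constant +0.5-level bias
# (0.5/256 of full scale = 64/32768 in 16-bit terms -- the 64/32k offset).
ideal = img - blur + 127.5
bias  = subtract - ideal
```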

Issues
Setting the topmost layer to a blend mode of Difference reveals an image with a near-constant offset of 64/32k from the HP filter's output.  I believe this has to do with the filter's ability to 'cheat' relative to what we're allowed in the Apply Image dialog.  That is, the Offset which we define when running the Subtract operation is imperfect - the proper midpoint between 0 and 255 is actually 127.5, not 128; but since we're limited to integers, 128 is what we're stuck with (0.5/256 = 64/32k).  One can correct for this if one likes by using a 1-pt curve adjustment set at 50% blend, but it won't overcome the highlight & shadow problems inherent to using a scalar of 1 in the separation stage (see below).

Conclusions (updated)
Should the HP filter code be revised to utilize a Scale of 2 when applying the GB data to its output, it would be as accurate as any technique listed herein with the obvious advantage of being a one-pass solution (+ being able to be integrated with a Smart Object / Filter workflow).  In fact, given the lossy nature of the current implementation, an ideal solution moving forward would be for the default HP filter operation to use the '2' Scalar (I don't believe it could lose any but rounding data in this mode), as well as a 'Maximize Contrast' checkbox which calculates and utilizes the minimum scalar to avoid clipping, allowing those who want to use HP solely for LCE a best-possible solution. As well, it would be nice if Adobe would open up non-integer input for Offsets in the Apply Image dialog to us, as it can obviously be handled by the back end.
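A quick numeric sketch of why the Scale-2 variant survives recombination while Scale 1 clips (toy values chosen so some detail exceeds the ±128 head-room):

```python
import numpy as np

img  = np.array([  0.0,  40.0, 128.0, 215.0, 255.0])
blur = np.array([ 90.0, 120.0, 128.0, 120.0,  96.0])   # strong edges: detail wider than 128

# Scale 1 (what the HP filter effectively does): wide detail clips at 0/255
hf1 = np.clip(img - blur + 128, 0, 255)
rebuilt1 = np.clip(blur + (hf1 - 128), 0, 255)

# Scale 2: detail halved into the available head-room, nothing clips,
# and a Linear-Light-style recombine doubles it back out exactly
hf2 = np.clip((img - blur) / 2 + 128, 0, 255)
rebuilt2 = np.clip(blur + 2 * (hf2 - 128), 0, 255)
```

(In float math the Scale-2 round trip is exact; in an actual file, halving costs at most half a level of rounding per pixel.)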

The issues previously discussed with apparent oddities in the Add and Subtract blend modes have to do with the inversions involved.  That is, while one might intuitively think that Adding the Inverse of something is the same as Subtracting it, the Invert operation in PS inverts around 127.5 rather than around 0, resulting in the discussed / observed offset.  To anyone whom I've previously misled on this count, my apologies.

Lastly, for anyone who continues to read the resultant discussion of these realizations, you'll see that there are still issues when using the HP filter in 8-bit editing - it is still lossy.  The best techniques I know for retention of accurate data are still those within the outset of this thread; hopefully we'll find something better before long.

Huge props to Panzerwolf for bringing up the Curves / clipping issues and his invaluable input into this.

Oct 11 09 02:55 pm Link

Photographer

PANZERWOLF

Posts: 68

Vienna, Wien, Austria

Sean Baker wrote:
This appears to be due in large part to the example and radii which you are employing.  Differences are very small until you really push the radii, at which time larger ones become apparent.  Using your methodology, I had considerably more issue with difference from the original, particularly in extremely bright / dark areas of the fine detail.

ahh, i was trying radii from 1-4 only, i'll have to go further ; )
and regarding the extremely bright / dark areas, i guess the HP filter just overshoots on high contrast edges, with the resulting layer blowing highlights and blacks, which, of course, cannot be retrieved with curves, while the subtraction avoids this in the first place, since the scale brings it down to 1/2 during the computing

but i think i just found a solution: reducing the contrast of the original layer before applying the HP filter (not on the gauss layer of course) brings the same 1/2 contrast HP result, but without the overshooting damage

i just tried it on a harsh black/white grid with radius 20, my 1st HP method produced horrible edge/gap shadows (surely the same thing that you experienced), but the 2nd worked pretty well!
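Panzerwolf's fix can be checked numerically (toy values; the key is that blurring is linear, so halving the image's contrast also halves its blur's contrast, making the result equal to the Scale-2 subtraction):

```python
import numpy as np

img  = np.array([  0.0, 255.0,   0.0, 255.0])   # harsh black/white grid
blur = np.array([170.0,  85.0, 170.0,  85.0])   # large-radius blur of it

# HP on the full-contrast image: edge overshoot exceeds the +/-128 head-room
hp_full = np.clip(img - blur + 128, 0, 255)     # -42 and 298 clip away

def half_contrast(a):
    """Halve contrast around mid-gray before filtering."""
    return (a - 127.5) / 2 + 127.5

# Reduce contrast first, then the same high-pass subtraction stays in range
# (algebraically this equals the Scale-2 version: (img - blur)/2 + 128)
hp_half = np.clip(half_contrast(img) - half_contrast(blur) + 128, 0, 255)
```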

could you please post an example where you were getting problems with the HP method? the original layer for me to experiment on? would be cool if we worked on the same file ; )
a crop of a problem area is enough of course

damn, i like this thread = )

Oct 11 09 05:33 pm Link

Photographer

Mask Photo

Posts: 1453

Fremont, California, US

So this method seems to open up a lot of possibilities, and I'm eternally grateful to all who have contributed. I *just* finished editing out really bad complexions from 6 images and I'm sure this would have helped immensely.

Right now, I'm trying to figure out how to shoehorn this into my current workflow without losing the flexibility I've built for myself.
Perhaps I can run down my current layering scheme, and then an idea of how to implement this without a whole paradigm shift. I'll say in advance that all your wonderful replies will be appreciated.

To start off, one of my big priorities is nondestructive editing, so for the most part, you'll see that I leave my base image intact.
so here's my current workflow:
* Copy background layer, rename to "original" and move to top (for ease of comparison to edited version)
* USM background with low value and large radius, for local contrast boost (can't do this with a layer, I gather)
* Liquify/perspective adjustments (too annoying to do this on multiple layers)
* Copy background twice. On one copy, run edge detection for a mask of the details. USM slightly for the barest of capture sharpening.
* Invert mask for the other layer, for a mask of the non-detail areas. Run noiseninja, auto-profiled (I found that NN would remove details in high-frequency areas)
* New layer group above the 3 BG layers for edits (one empty layer at least for healing and other retouching (such as removing clothing labels, tourists' heads, etc)
* New layer group above that one for color and tone adjustments, including general contrast and saturation adjustment layers, and a 50% gray dodge/burn layer

I have not as yet done anything for creative sharpening. I figured I'd be running some kind of process on a merged layer, and placing a throwaway layer on top of everything just before export.

My layers look like this (top to bottom):
Original image (hidden and locked)
+ Adjustment layer group
   * 50% gray D/B
   ** any other adjustment layers
+ Edit layer group
   * Edit layer for healing
   ** any other retouch layers
Capture sharpen (masked for only detail areas)
Noise reduction (masked for only no-detail areas)
Background (liquified and USM for local contrast)


I love the idea of separating the frequency layers, but am afraid of not being able to easily go back to re-edit certain things (I know one respondent mentioned doing macro editing, then flattening, then splitting frequencies, but what if I realize later that I've botched the editing job? Too hard to go back and re-do it if it's not preserved.)
I was kind of thinking that the following layer structure might bring me closer to what I'm imagining. While I'm trying it out, perhaps someone could peek at it and point out any glaring issues I might not have seen?

* Copy background layer, rename to "original" and move to top
* USM background with low value and large radius, for local contrast boost
* Liquify/perspective adjustments
* Split background layer into high/low frequency. Clip curves to HF for slight capture sharpen.
* New layer group above the 2 frequency layers for edits
* New layer group above that one for color and tone adjustments

The thing I'm having trouble with is that it seems people are healing directly on the frequency layers, which is a destructive form of editing. I'm thinking it might work to stick each frequency layer in its own layer group, with an empty healing layer on top (in the group), set the group to the proper blend mode, and then heal each frequency layer independently and nondestructively.
Then any big edits can be done on a separate editing layer group above all the frequency layers, using "current and below" for cloning. This will permanently preserve any capture sharpening previously done, but it's better than not being able to adjust edits at all.

I'm currently wrestling with a few things considering this workflow:
1) there doesn't seem to be a chance to run noiseninja. Are we talking about using the HF layer(s) for all noise elimination, or should I run it prior to splitting the layers?
2) I hear people talking about using it for sharpening; are they just duplicating the HF layer and adjusting the opacity, or is there something that I'm missing?
3) if sharpening is done by duplicating HF layers, then would an effective method for creative sharpening be to just generate a "throwaway" HF layer after all editing is done, put it on top of everything, and mask it to taste?


Sorry for the treatise, but I tend to get tunnel vision when I'm experimenting, and don't really want to paradigm-shift unless I have a fair idea of where I'm going.
Any feedback would be very appreciated.
K

Oct 11 09 06:39 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

PANZERWOLF wrote:
ahh, i was trying radii from 1-4 only, i'll have to go further ; )
and regarding the extremely bright / dark areas, i guess the HP filter just overshoots on high contrast edges, with the resulting layer blowing highlights and blacks, which, of course, cannot be retrieved with curves, while the subtraction avoids this in the first place, since the scale brings it down to 1/2 during the computing

but i think i just found a solution: reducing the contrast of the original layer before applying the HP filter (not on the gauss layer of course) brings the same 1/2 contrast HP result, but without the overshooting damage

i just tried it on a harsh black/white grid with radius 20, my 1st HP method produced horrible edge/gap shadows (surely the same thing that you experienced), but the 2nd worked pretty well!

could you please post an example where you were getting problems with the HP method? the original layer for me to experiment on? would be cool if we worked on the same file ; )
a crop of a problem area is enough of course

damn, i like this thread = )

Heading to bed, but take a look at this as an example file (~2.8MB).  I don't think using just a small crop out of an image is going to give the best comparative result, simply for the need to use large radii to push the techniques.  Will post more tomorrow if you can take a look at it and make sure I used your method correctly.

As to HP operation, see the other post I made a bit above about that; I really think your comments on subtraction and curves were on the money, as it perfectly describes the filter's (aberrant) result.

EDIT: Looking even closer, I'm going to refine my statements to the effect that you're right about base contrast being key to this and / or a way of getting around the HP limitations.  Indeed, the 'correct' HP results are >128 levels wide, explaining the clipping observed both with Subtract (Scale 1) and the HP filter.

Oct 11 09 07:38 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Mask Photo wrote:
I'm currently wrestling with a few things considering this workflow:
1) there doesn't seem to be a chance to run noiseninja. Are we talking about using the HF layer(s) for all noise elimination, or should I run it prior to splitting the layers?
2) I hear people talking about using it for sharpening; are they just duplicating the HF layer and adjusting the opacity, or is there something that I'm missing?
3) if sharpening is done by duplicating HF layers, then would an effective method for creative sharpening be to just generate a "throwaway" HF layer after all editing is done, put it on top of everything, and mask it to taste?

As to healing - using a separate layer can work, but I don't think it's going to do so well if the blend mode for the HF data is anything but Normal.  I'd be happy to be wrong about that (and may well be), but iirc, it won't quite grasp what you're trying to do.
Numerically,
1.) NN can be run on a separate copy and the noise removed to another layer just as is done for the HF / LF data - it's the same type of procedure and has the same 'lossless' nature to it.
2.) Generally, yes.  Some just separate and then run an S curve against the HF; others use it in duplicate.
3.) I think masking is a good idea, at least for creative sharpening, but not everyone will agree with this as it's all a matter of taste at that point.

Oct 11 09 07:42 pm Link

Photographer

Mask Photo

Posts: 1453

Fremont, California, US

Sean Baker wrote:
As to healing - using a separate layer can work, but I don't think it's going to do so well if the blend mode for the HF data is anything but Normal.  I'd be happy to be wrong about that (and may well be), but iirc, it won't quite grasp what you're trying to do.

It seems to work pretty well if the HF layer, the edit layer, and any clipped curves are all in a layer group with its blend mode set to linear light; observe my partial benjamin button treatment on my dad:
http://maskphoto.com/files/hf_test.psd (18MB)
(just horsing around with this file; there's no way i'd erase his rich lines of character for a portfolio piece)
The trick is to turn OFF the blending mode while healing, so you can't really see the effects (but you can see what the texture looks like still). You could also leave it on, and target a portion of "good" skin on the HF layer with which to paint on the HF retouch layer, but it would be a pain to retarget another area if you needed to change texture.

1.) NN can be run on a separate copy and the noise removed to another layer just as is done for the HF / LF data - it's the same type of procedure and has the same 'lossless' nature to it.

Ah, so if I understand you correctly, "apply image" can be used to split any kind of before/after procedure into two layers? So the procedure would be to run NN on one layer, then apply a duplicate layer to it, from which we'll get a layer with all the noise removed, and a delta layer with the noise put back in? From that point, masking the unwanted noise out is the only remaining step?

2.) Generally, yes.  Some just separate and then run an S curve against the HF; others use it in duplicate.

Is there a blend mode during apply image that would tend to amplify the effects, or is linear light the only way to go about this procedure?


Continuing on, I've read a little about a double-blur method. Is that just to smooth out unevenness? And if so, wouldn't that cause problems in areas that aren't skin? I realize this thread has been dominated by skin-related discussions, but I've also seen coastlines and other non-people samples.
Was the secondary blurring the gist of the "bandstop" method mentioned by someone elsewhere, or was that something else?
(one would think that having a mathematics degree would grant me some insight into some of these topics, but that would be incorrect)

Thanks so much for your time on this method.

Oct 12 09 01:48 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Mask Photo wrote:
Ah, so if I understand you correctly, "apply image" can be used to split any kind of before/after procedure into two layers? So the procedure would be to run NN on one layer, then apply a duplicate layer to it, from which we'll get a layer with all the noise removed, and a delta layer with the noise put back in? From that point, masking the unwanted noise out is the only remaining step?

Correct, though one can do a lot of other things with the principles involved as well - sharpening without noise enhancement as one among them - particularly useful for 'capture' sharpening w/ deconvolution.
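The before/after split generalizes like this (toy NumPy sketch; `denoised` just stands in for whatever NoiseNinja - or any other filter - hands back):

```python
import numpy as np

rng = np.random.default_rng(4)
clean = rng.random((16, 16))
noisy = np.clip(clean + rng.normal(0, 0.02, clean.shape), 0, 1)

# Pretend `denoised` is the filter's output; the split doesn't care
# what process produced the "after" version
denoised = clean

# Delta layer: before minus after, stored around mid-gray at half strength
# (the same Apply Image Subtract / Scale 2 move used for HF / LF)
delta = (noisy - denoised) / 2 + 0.5

# A Linear-Light-style recombine restores the original, noise and all;
# masking `delta` then re-admits the noise only where you choose.
rebuilt = denoised + 2 * (delta - 0.5)
```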

Mask Photo wrote:
Is there a blend mode during apply image that would tend to amplify the effects, or is linear light the only way to go about this procedure?

You can play with it and you'll find that there are ways to amplify in a single step, but that's a bit beyond the scope of the thread - the original intent here was recreation of the image in multiple parts without losing any data.  Any / all of the amplification methods result in loss of fidelity with the original.

Mask Photo wrote:
Continuing on, I've read a little about a double-blur method. Is that just to smooth out unevenness? And if so, wouldn't that cause problems in areas that aren't skin? I realize this thread has been dominated by skin-related discussions, but I've also seen coastlines and other non-people samples.
Was the secondary blurring the gist of the "bandstop" method mentioned by someone elsewhere, or was that something else?
(One would think that having a mathematics degree would grant me some insight into some of these topics, but that would be incorrect.)

The double-blurring is almost always being applied to skin regions, either through manual selection or through masking of the result.  But yes, that is the basis of the visual bandstop which has been discussed.

Oct 12 09 03:19 am Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Sean Baker wrote:

My gestalt is that the issue with the script is coming from the fact that each separation isn't being done from the same base image, as well as that the LF layer isn't retained under it, making perfect recreation of the original impossible.  The unfortunate truth of GB is that it isn't a perfect frequency separation, but only an approximation.  Hence, running a 4px GB on itself repeatedly will continue to change the base image.  Equally, running multiply on separate copies will not only result in keeping extra copies of the very fine details, but will also result in different 'versions' or 'interpretations' of the intermediate frequencies.  The mathematical issue with layer ordering, I suspect, has to do with the way we minimize the fine details' numerical values in the technique, leading to their being averaged away by the larger structures in recombination.  Frustrating to be sure, but I'm not sure there's a better way with the available blend modes and their implementation - maybe something else to hope for in CS5?

I'm finding the same thing through experimenting with different ways to do this. I've tried separating from a GB of the same base image at different radii and additive GB to the same base. Each method has its pros and cons. And I think you hit on the main drawback to multiple separations, namely the available blend modes. This is what led me to try the clipped curves and opacity adjustments.

This just gives me more to play with. However, I'm definitely not knocking the potential that this separation method has nor the real world uses as it is. And I'm having fun figuring out what else I can do with it. smile
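For anyone following along, a small sketch of the drift Sean describes (my own illustration, with a 1-D box blur standing in for Gaussian Blur):

```python
# Blurring the already-blurred layer again at the same radius keeps
# changing it, so each successive separation is effectively taken from
# a different base image - the split never just "undoes" itself.

def box_blur(px, radius):
    """Simple 1-D box blur with edge clamping - a stand-in for GB."""
    out = []
    for i in range(len(px)):
        lo, hi = max(0, i - radius), min(len(px), i + radius + 1)
        out.append(sum(px[lo:hi]) / (hi - lo))
    return out

img = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]
once = box_blur(img, 1)
twice = box_blur(once, 1)
# once != twice: the second pass widens the blur further (for true
# Gaussians the radii add in quadrature), so re-separating the LF
# layer produces yet another 'interpretation' of the mid frequencies.
```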

Oct 12 09 06:43 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

On real-time skin work:

Spawning off of what Panzerwolf started above, I've gone back to look more closely at what's going on with the High Pass filter proper and how we might exploit the new insights into its operation.  As stated, its method of applying the change can't handle a full-contrast scene as the range involved is sometimes too great for it to record.  I was hesitant to use PW's Curves techniques as they were imperfect for halving the contrast and recreating the scene perfectly (or at least to the standard previously set), despite countless attempts at changing curve values and playing with all manner of permutations, opacities, etc.  It should be noted that I don't believe this is the result of his efforts, but of limitations inherent to PS itself.  Curves didn't work; blending with 50% gray didn't work; blending with shifted grays didn't work (creating a 127.5 gray); but the Legacy Brightness / Contrast adjustment works.  roll

I know, I'm shaking my head over that one too, but let's use it:

1.) Take your image on which you want to do some skin work, creating a copy on which you'd like to work.
2.) Use Image->Adjustments->Brightness / Contrast; check the Legacy box; enter -50 in the Contrast box; 0 Brightness.
3.) Invert the layer.
4.) Convert the layer to a Smart Object, set to Linear Light.
5.) Run High Pass first, selecting your upper pixel limit for elimination.
6.) Run GB next, selecting the lower limit of elimination / upper limit of preservation in the detail.
7.) Tune to taste, rasterizing if you so choose for speed / size, mask, etc.

That Brightness / Contrast gives us a better result than any Curve or gray blend raises a lot of questions, but those will have to wait for another day.  In the mean time, you can do realtime skin bandstopping without significant loss of pixel-level quality (2/32k - maybe not forensic standards, but I'll wager better than your eyesight tongue).

Note: If you're not sure wth I'm talking about, please read  this within the thread where the mechanics for how these techniques apply to skin and are first demonstrated then explained.
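For the mathematically curious, here's a numeric sketch of what the stack above computes. My assumptions: High Pass behaves like x - GB(x) + mid-gray, a 1-D box blur stands in for Gaussian Blur, and values are normalized 0.0-1.0 floats - so this is an illustration of the algebra, not Photoshop's exact implementation:

```python
# Legacy B/C -50 -> invert -> High Pass (radius R) -> GB (radius r),
# blended Linear Light over the base, works out to a bandstop.

def blur(px, radius):
    """1-D box blur with edge clamping - a stand-in for GB."""
    out = []
    for i in range(len(px)):
        lo, hi = max(0, i - radius), min(len(px), i + radius + 1)
        out.append(sum(px[lo:hi]) / (hi - lo))
    return out

def legacy_contrast_minus50(v):
    """Legacy Brightness/Contrast at -50: halve the distance from mid-gray."""
    return 0.5 + (v - 0.5) * 0.5

base = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7, 0.4, 0.6]
R, r = 3, 1   # step 5's High Pass radius and step 6's GB radius

layer = [1.0 - legacy_contrast_minus50(v) for v in base]      # steps 2-3
hp    = [l - b + 0.5 for l, b in zip(layer, blur(layer, R))]  # step 5
gb    = blur(hp, r)                                           # step 6
out   = [b + 2.0 * g - 1.0 for b, g in zip(base, gb)]         # Linear Light

# Algebraically, out == base - (blur_r(base) - blur_r(blur_R(base))):
# only the detail between radius r and radius R has been subtracted,
# which is exactly the bandstop behavior the steps aim for.
```

Because every operation in the chain is linear, the recombination is exact in floating point; the small residual Sean quotes comes from Photoshop's integer math, not from the algebra.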

Oct 12 09 12:41 pm Link

Photographer

Julian Marsalis

Posts: 1191

Austin, Texas, US

Sean Baker wrote:
On real-time skin work:

Spawning off of what Panzerwolf started above, I've gone back to look more closely at what's going on with the High Pass filter proper and how we might exploit the new insights into its operation.  As stated, its method of applying the change can't handle a full-contrast scene as the range involved is sometimes too great for it to record.  I was hesitant to use PW's Curves techniques as they were imperfect for halving the contrast and recreating the scene perfectly (or at least to the standard previously set), despite countless attempts at changing curve values and playing with all manner of permutations, opacities, etc.  It should be noted that I don't believe this is the result of his efforts, but of limitations inherent to PS itself.  Curves didn't work; blending with 50% gray didn't work; blending with shifted grays didn't work (creating a 127.5 gray); but the Legacy Brightness / Contrast adjustment works.  roll

I know, I'm shaking my head over that one too, but let's use it:

1.) Take your image on which you want to do some skin work, creating a copy on which you'd like to work.
2.) Use Image->Adjustments->Brightness / Contrast; check the Legacy box; enter -50 in the Contrast box; 0 Brightness.
3.) Invert the layer.
4.) Convert the layer to a Smart Object.
5.) Run High Pass first, selecting your upper pixel limit for elimination.
6.) Run GB next, selecting the lower limit of elimination / upper limit of preservation in the detail.
7.) Tune to taste, rasterizing if you so choose for speed / size.

That Brightness / Contrast gives us a better result than any Curve or gray blend raises a lot of questions, but those will have to wait for another day.  In the mean time, you can do realtime skin bandstopping without loss of pixel-level quality (yes Virginia, you do retain 1/32k accuracy) big_smile.

Where's the action lol wink

Oct 12 09 01:11 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Julian Marsalis wrote:

Where's the action lol wink

On the right side of your screen, in the actions palette.  It's the round button tongue wink.

Oct 12 09 01:17 pm Link

Photographer

Ken Marcus Studios

Posts: 9420

Las Vegas, Nevada, US

5.) Run High Pass first, selecting your upper pixel limit for elimination.
6.) Run GB next, selecting the lower limit of elimination / upper limit of preservation in the detail.


- - - - - - -

How do I know what the upper pixel limit is? Any recommendation to start with?

What is GB, and what is the lower limit of elimination / upper limit?

I'm sure your technique is good . . . but you don't make it very clear how to do it.


KM

Oct 12 09 01:28 pm Link

Photographer

Escalante

Posts: 5367

Chicago, Illinois, US

I'm not sure if it has been asked yet or not, but:
How is it affecting (if at all) the image for enlargements, noise-wise?
And would you apply this before or after the enlargement was made?
I hope it isn't an irrelevant question,
and thanks again for the ground work here, Sean and everyone else.
Learning how to improve one's art using the art of science shouldn't be seen as if
one is removing the joy from it; one is merely enhancing the work to increase the pleasure.

Yada yada yada

Thanks again

E

Oct 12 09 01:35 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Ken Marcus Studios wrote:
How do I know what the upper pixel limit is? Any recommendation to start with?

What is GB, and what is the lower limit of elimination / upper limit?

I'm sure your technique is good . . . but you don't make it very clear how to do it.


KM

The post is really meant as a continuation / derivative of a technique first mentioned in the thread by grahamsz here, wherein the premise is to eliminate a certain frequency range of image information corresponding to problem areas in skin (or whatever you happen to be editing).  That I didn't make that more clear was a mistake - sorry about that.

The upper pixel limit referenced will depend entirely on the relative area of skin in the image, as well as the image's dimensions.  For high-MP headshots, this might be in the range of 25-50+, but again will depend on the issues to be overcome.

GB = Gaussian Blur.  The lower limit would be the smallest-sized detail which you would want to remain in the image - generally around 1.5-3x the diameter of a pore in the image, though this will again vary by taste and the image used.  The real value here is that you can play with these values as you go to see the impact they have on the base image.

Oct 12 09 01:37 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

ESCALANTE wrote:
I'm not sure if it has been asked yet or not, but:
How is it affecting (if at all) the image for enlargements, noise-wise?
And would you apply this before or after the enlargement was made?

Noise shouldn't be impacted by the separation technique itself, though certainly sharpening based on enhancing the HF data will equally impact the noise (itself being high frequency).  If enlarging for print (I gather that's what you mean?), I would use this technique to sharpen after doing so, assuming that you're not meaning to output to a RIP, in which case I wouldn't resize at all.

Let me know if that makes sense; lots of pots on the stove right now.

Oct 12 09 01:44 pm Link

Photographer

Escalante

Posts: 5367

Chicago, Illinois, US

Sean Baker wrote:
Noise shouldn't be impacted by the separation technique itself, though certainly sharpening based on enhancing the HF data will equally impact the noise (itself being high frequency).  If enlarging for print (I gather that's what you mean?), I would use this technique to sharpen after doing so, assuming that you're not meaning to output to a RIP, in which case I wouldn't resize at all.

Let me know if that makes sense; lots of pots on the stove right now.

It sorta does. I thought so too, as well as on applying it AFTER the enlargement is done - fewer artifacts to corrupt once it's enlarged (does that make sense?)
I do mean printing. I've had very little issue with it except on images with a higher ISO, where the noise is a bit more 'annoying' (to put it lightly).
Also, is there a huge difference between 8-bit and 16-bit image enlargements?
Again, I apologise if the question has been handled already or is n/a.

Oct 12 09 01:55 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

ESCALANTE wrote:
It sorta does. I thought so too, as well as on applying it AFTER the enlargement is done - fewer artifacts to corrupt once it's enlarged (does that make sense?)

It does make sense, but the sharpening will function differently in that case, hence the recommendation to do it at the final size.

ESCALANTE wrote:
I do mean printing. I've had very little issue with it except on images with a higher ISO, where the noise is a bit more 'annoying' (to put it lightly).

If you're having trouble with the noise interacting with the sharpening algorithms, you can always try working on a separate, noise-reduced copy in order to generate your sharpening layer, which should afford you the best of both worlds.

ESCALANTE wrote:
Also, is there a huge difference between 8-bit and 16-bit image enlargements?

Is there a technical one?  Yes, absolutely.  Is it one you're likely to notice from any but the best & best calibrated printers?  No.

Oct 12 09 02:00 pm Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Sean Baker wrote:
On real-time skin work:

Spawning off of what Panzerwolf started above, I've gone back to look more closely at what's going on with the High Pass filter proper and how we might exploit the new insights into its operation.  As stated, its method of applying the change can't handle a full-contrast scene as the range involved is sometimes too great for it to record.  I was hesitant to use PW's Curves techniques as they were imperfect for halving the contrast and recreating the scene perfectly (or at least to the standard previously set), despite countless attempts at changing curve values and playing with all manner of permutations, opacities, etc.  It should be noted that I don't believe this is the result of his efforts, but of limitations inherent to PS itself.  Curves didn't work; blending with 50% gray didn't work; blending with shifted grays didn't work (creating a 127.5 gray); but the Legacy Brightness / Contrast adjustment works.  roll

I know, I'm shaking my head over that one too, but let's use it:

1.) Take your image on which you want to do some skin work, creating a copy on which you'd like to work.
2.) Use Image->Adjustments->Brightness / Contrast; check the Legacy box; enter -50 in the Contrast box; 0 Brightness.
3.) Invert the layer.
4.) Convert the layer to a Smart Object.
5.) Run High Pass first, selecting your upper pixel limit for elimination.
6.) Run GB next, selecting the lower limit of elimination / upper limit of preservation in the detail.
7.) Tune to taste, rasterizing if you so choose for speed / size, mask, etc.

That Brightness / Contrast gives us a better result than any Curve or gray blend raises a lot of questions, but those will have to wait for another day.  In the mean time, you can do realtime skin bandstopping without loss of pixel-level quality (yes Virginia, you do retain 1/32k accuracy) big_smile.

Note: If you're not sure wth I'm talking about, please read  this within the thread where the mechanics for how these techniques apply to skin and are first demonstrated then explained.

*SIGH* Does this mean I need to write a new script and/or action set? tongue

I'm always up for it, though. big_smile Just probably not today. I'll have to play with this method a bit first to see how it acts first-hand.

Oct 12 09 03:34 pm Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Sean Baker wrote:
On real-time skin work:

Spawning off of what Panzerwolf started above, I've gone back to look more closely at what's going on with the High Pass filter proper and how we might exploit the new insights into its operation.  As stated, its method of applying the change can't handle a full-contrast scene as the range involved is sometimes too great for it to record.  I was hesitant to use PW's Curves techniques as they were imperfect for halving the contrast and recreating the scene perfectly (or at least to the standard previously set), despite countless attempts at changing curve values and playing with all manner of permutations, opacities, etc.  It should be noted that I don't believe this is the result of his efforts, but of limitations inherent to PS itself.  Curves didn't work; blending with 50% gray didn't work; blending with shifted grays didn't work (creating a 127.5 gray); but the Legacy Brightness / Contrast adjustment works.  roll

I know, I'm shaking my head over that one too, but let's use it:

1.) Take your image on which you want to do some skin work, creating a copy on which you'd like to work.
2.) Use Image->Adjustments->Brightness / Contrast; check the Legacy box; enter -50 in the Contrast box; 0 Brightness.
3.) Invert the layer.
4.) Convert the layer to a Smart Object.
5.) Run High Pass first, selecting your upper pixel limit for elimination.
6.) Run GB next, selecting the lower limit of elimination / upper limit of preservation in the detail.
7.) Tune to taste, rasterizing if you so choose for speed / size, mask, etc.

That Brightness / Contrast gives us a better result than any Curve or gray blend raises a lot of questions, but those will have to wait for another day.  In the mean time, you can do realtime skin bandstopping without loss of pixel-level quality (yes Virginia, you do retain 1/32k accuracy) big_smile.

Note: If you're not sure wth I'm talking about, please read  this within the thread where the mechanics for how these techniques apply to skin and are first demonstrated then explained.

This uses Linear Light blend mode on that new Smart Object layer, correct?

Oct 12 09 03:56 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Photons 2 Pixels Images wrote:

This uses Linear Light blend mode on that new Smart Object layer, correct?

Good point.  Yes.

Oct 12 09 03:57 pm Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Photons 2 Pixels Images wrote:

*SIGH* Does this mean I need to write a new script and/or action set? tongue

I'm always up for it, though. big_smile Just probably not today. I'll have to play with this method a bit first to see how it acts first-hand.

OK. I lied. I got a quick action set together for this.

http://www.nunuvyer.biz/Photoshop/Actio … ration.atn

I hope I got this right. smile

Oct 12 09 04:22 pm Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

Photons 2 Pixels Images wrote:

OK. I lied. I got a quick action set together for this.

http://www.nunuvyer.biz/Photoshop/Actio … ration.atn

I hope I got this right. smile

Looks perfect to me - save that there's no mask, my paintbrush isn't selected, I don't have white chosen, etc. tongue [teasing]

Oct 12 09 04:28 pm Link

Photographer

Photons 2 Pixels Images

Posts: 17011

Berwick, Pennsylvania, US

Sean Baker wrote:

Looks perfect to me - save that there's no mask, my paintbrush isn't selected, I don't have white chosen, etc. tongue [teasing]

OK, OK...updated. tongue I shoulda known better anyway, huh?

Oct 12 09 05:12 pm Link

Photographer

PANZERWOLF

Posts: 68

Vienna, Wien, Austria

[edit] Deleted, since I'd overlooked that Sean's trick uses brightness/contrast instead of my curves ...
and I was just about to let go of the HP ... wink

... but not being able to recreate legacy B/C with curves is pretty lame - do you hear that, Adobe? sad
And in 8-bit, the HP still creates moiré patterns, even with B/C.

Oct 13 09 04:33 am Link

Photographer

Sean Baker Photo

Posts: 8044

San Antonio, Texas, US

PANZERWOLF wrote:
And in 8-bit, the HP still creates moiré patterns, even with B/C.

What type of image are you seeing moiré effects in?  I'd like to see if we can't work that one out, as obviously the 1/2 contrast thing requires old tools too smile.

Oct 13 09 08:47 am Link

Photographer

PANZERWOLF

Posts: 68

Vienna, Wien, Austria

Sean Baker wrote:
What type of image are you seeing moiré effects in?  I'd like to see if we can't work that one out, as obviously the 1/2 contrast thing requires old tools too smile.

Your soccer guy image, radius 25.
In 8-bit it's actually worse with the contrast/brightness method.
It's not a real problem, since it's pretty low, with a maximum difference of about (1/2/1), but when amplified you can see moiré patterns that I haven't seen with the apply image methods.

Here's a comparison - left side is the normal difference, right side is amplified with the steepest possible curve:

https://www.panzerwolf.com/statics/frequency_r25_8bit_comp.jpg
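One plausible numeric source for that (1/2/1) difference - a sketch of my own, and Photoshop's exact rounding mode is an assumption here: halving contrast in 8-bit merges pairs of levels, so the round trip lands a level high or low depending on pixel parity, giving a structured error rather than random noise:

```python
# The Legacy B/C -50 -> invert -> Linear Light round trip in 8-bit.
# With real-number math every pixel would come back as exactly 127.5;
# in 8-bit the unrepresentable half-level falls to 127 or 128 by parity.

def half_contrast_8bit(v):
    """Legacy Brightness/Contrast at -50: halve the distance from 127.5."""
    return round(127.5 + (v - 127.5) * 0.5)

def reconstruct(v):
    layer = 255 - half_contrast_8bit(v)   # steps 2-3: -50 contrast, invert
    return v + 2 * layer - 255            # Linear Light against the base

# Even inputs come back as 127, odd inputs as 128 - the error tracks the
# parity of the underlying pixel values, not random noise.
evens = {reconstruct(v) for v in range(0, 256, 2)}   # {127}
odds  = {reconstruct(v) for v in range(1, 256, 2)}   # {128}
```

Since the error follows the parity of the image data, it would track image structure when amplified - which might look like moiré or like posterized edges, depending on the image.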

Oct 13 09 12:15 pm Link

Photographer

Robert Randall

Posts: 13890

Chicago, Illinois, US

PANZERWOLF wrote:

Your soccer guy image, radius 25.
In 8-bit it's actually worse with the contrast/brightness method.
It's not a real problem, since it's pretty low, with a maximum difference of about (1/2/1), but when amplified you can see moiré patterns that I haven't seen with the apply image methods.

Here's a comparison - left side is the normal difference, right side is amplified with the steepest possible curve:

https://www.panzerwolf.com/statics/frequency_r25_8bit_comp.jpg

Is it moiré, or is it edge detail of a posterized image?

Oct 13 09 12:18 pm Link