Photographer
D P P I X
Posts: 597
Lathrop, California, US
Hi guys, while retouching a photo I just thought of a way to possibly make my workflow more efficient. I'm in the middle of this retouch so I can't try it out yet, but I wanted to run it by some of you first and hopefully get some feedback. I'm a huge fan of non-destructive editing (I came from the world of video editing, FCP, and AE), and it's the one thing I've sorely missed working in Photoshop. Luckily, Photoshop now has those fancy smart layers, which I use frequently. Right now my process is to duplicate the background layer, convert it to a smart layer, change the blend mode, and add a High Pass filter to sharpen (I know, I've read the thread on why High Pass sucks and what the alternative is, but the alternative isn't non-destructive, so I'm sticking with this for now). After that I add a blank layer and use the Healing Brush tool to clean up any blemishes on the face. Then I add one more blank layer, stamp visible, convert that to a smart layer, and duplicate it for the skin smoothing and skin texture. Well, my question: can I just convert the background layer to a smart layer and duplicate it a few times? That way I could do all my retouching and blemish removal inside the one smart layer, and it would update all the subsequent smart layers in the main file.
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
Why are you duplicating the image layer (whether smart object or otherwise) to do healing, etc.? You can do that on blank layers and keep file size down considerably. My rule of thumb: Don't duplicate the image layer unless it is required for some operation (such as liquify).
Photographer
George Lue
Posts: 8235
Orlando, Florida, US
You can heal and clone on layers above using the "All Layers" option in the options bar. Newer versions have a "Current & Below" option as well. As for converting to a smart object, I don't really know if it's necessary for what you're describing. You can use smart filters on smart objects, which makes that High Pass filter the clumsier option when you already have Smart Sharpen and Unsharp Mask.
Photographer
D P P I X
Posts: 597
Lathrop, California, US
Peano wrote: Why are you duplicating the image layer (whether smart object or otherwise) to do healing, etc.? You can do that on blank layers and keep file size down considerably. My rule of thumb: Don't duplicate the image layer unless it is required for some operation (such as liquify).

I am doing my healing on a blank layer; thought I said that... Current layer structure, top to bottom:

- duplicate of the smart object below, for skin texture
- blank layer, stamp visible, converted to smart object, for skin smoothing
- blank layer for healing
- background duplicated, converted to smart layer, for sharpening
- Background

What I'm thinking of doing is as follows:

- duplicated background for skin texture
- duplicated background for skin smoothing
- duplicated background for sharpening
- background converted to smart object

The original smart object would then be the background plus the blank layer for healing. This way ALL layers are working off of the one healed smart layer. The reason I'm working with smart objects in the first place is that some of my steps require two layers, and I like being able to tweak either of them as I'm working on the file, rather than run the filter once and hope the settings are right.
Retoucher
9stitches
Posts: 476
Los Angeles, California, US
D P P I X wrote: i am doing my healing on a blank layer, thought I said that... You did. It has potential, and would keep file size down vs. duplicate pixel layers. I can see at least that high pass layer benefiting from this technique. Many times I've already stamped a copy and sharpened, only to see some little thing I'd like to change underneath. Without Smart Objects, I'd have to either back up or do it on a new blank layer. My one concern is that Smart Objects seem to eat a lot of RAM. Still, the idea warrants further exploration. Thanks for the food-for-thought.
Photographer
D P P I X
Posts: 597
Lathrop, California, US
ezpkns retouching wrote:
You did. It has potential, and would keep file size down vs. duplicate pixel layers. I can see at least that high pass layer benefiting from this technique. Many times I've already stamped a copy and sharpened, only to see some little thing I'd like to change underneath. Without Smart Objects, I'd have to either back up or do it on a new blank layer. My one concern is that Smart Objects seem to eat a lot of RAM. Still, the idea warrants further exploration. Thanks for the food-for-thought.

That's actually what got me thinking about this idea. I was doing my skin smoothing/texture when I noticed I'd missed a spot in the retouching, so I double-clicked the smart object, edited that copy, and saved. Got me thinking: why don't I just do that from the beginning? Right now my skin technique involves two layers and three filters, and I like being able to adjust each of the filters once all of the layers and filters are in place, to try to get the best possible effect. The same goes for the sharpen layer. I have basic numbers and settings that get me in the ballpark, but every photo is slightly different and I feel compelled to tweak.
Retoucher
Mistletoe
Posts: 414
London, England, United Kingdom
D P P I X wrote: Well, my question, can I just convert the background layer to a smart layer and duplicate it a few times? that way i can do all my retouching and blemish removal stuff on the smart layer and it will update all the subsequent smart layers in the main file?

Yes, it can work like that. Any change to a single Smart Object will show up in all the others, if you have them linked in this way. There are two ways to duplicate Smart Objects in Photoshop. One is the usual Command-J, which creates the same object as a different 'instance'. The other is to choose New Smart Object via Copy (Control-click or right-click on the Smart Object), which creates an independent, unconnected object. As Peano has said, there are few reasons to duplicate entire layers, although some prefer to work this way. But there are times when having several layers all linked as instances can be useful; of course they can also be filtered and blended separately. Does create very large files, though.
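The instance-vs-copy distinction can be sketched as a toy model. This is just an illustration of the sharing semantics, not Photoshop's actual internals or API; the class and method names here are invented:

```python
import copy

class SmartObjectSource:
    """Toy stand-in for a smart object's embedded source image."""
    def __init__(self, pixels):
        self.pixels = pixels

class SmartObjectLayer:
    """A layer referencing a source; each layer keeps its own filter list."""
    def __init__(self, source):
        self.source = source
        self.filters = []

    def duplicate_instance(self):
        # Command-J analogue: new layer, SAME underlying source
        return SmartObjectLayer(self.source)

    def new_via_copy(self):
        # "New Smart Object via Copy" analogue: independent source
        return SmartObjectLayer(copy.deepcopy(self.source))

base = SmartObjectLayer(SmartObjectSource([0, 0, 0]))
linked = base.duplicate_instance()
independent = base.new_via_copy()

base.source.pixels[0] = 255         # "open the smart object and heal a spot"
print(linked.source.pixels[0])      # 255: the linked instance sees the edit
print(independent.source.pixels[0]) # 0: the independent copy is unaffected
```

So in the proposed workflow, the healing done inside the one shared source would propagate to every linked instance, exactly as asked above.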
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
D P P I X wrote: What I'm thinking of doing is as follows: duplicated background for skin texture, duplicated background for skin smoothing, duplicated background for sharpen, background converted to smart object. and then the original smart object will be the background plus the blank layer for healing. this way ALL layers are working off of the one healed smart layer.

I think you're needlessly multiplying file size. Why not just make the background layer a smart object, do skin work, eyes, etc., on blank layers and adjustment layers, then stamp all that at the end and sharpen?
Retoucher
9stitches
Posts: 476
Los Angeles, California, US
Peano wrote:
I think you're needlessly multiplying file size. Why not just make the background layer a smart object, do skin work, eyes, etc., on blank layers and adjustment layers, then stamp all that at the end and sharpen. But the point is, that besides auto-updating, it won't increase file size. Even the High Pass layer will take less space on disk as a parametric operation vs. new pixel layer. Plus you'd get to preview your sharpening before you were finished with underlying edits. I'd like to see how it impacts RAM-based performance. Today's not the day for random experimenting though (I've already reached my MM-time quota). And it goes without saying that this workflow demands blank layers and adjustment layers, since you can't paint/clone/etc directly on Smart Objects. I'm not likely to wildly change my workflow today, just trying to keep an open mind to the things I'll wonder how I lived without a couple years from now.
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
ezpkns retouching wrote: But the point is, that besides auto-updating, it won't increase file size. I guess I don't understand, then. This really doesn't interest me very much (because my workflow is just fine), but I don't see how you can duplicate image layers several times and not increase file size.
duplicated background for skin texture
duplicated background for skin smoothing
duplicated background for sharpen
background converted to smart object

How do you make all these duplicates of an image layer without increasing file size?
Retoucher
Mistletoe
Posts: 414
London, England, United Kingdom
It does increase the file size. Each duplicate contains not just the Smart Object itself but also a full-res preview in the containing document. This is so that it can update and be filtered independently. Smart Objects create very large files, like I said.
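For a sense of scale, here's a rough uncompressed back-of-envelope of what one full-res preview per duplicate implies. The dimensions and byte counts are assumptions for illustration; real PSDs are compressed, so actual sizes will differ:

```python
# Every smart-object layer stores its own full-res rasterized preview,
# so duplicates add pixel data much like ordinary duplicate layers.
# Figures below are illustrative assumptions, not measured PSD sizes.
width, height = 4000, 3000           # assumed image dimensions
channels, bytes_per_channel = 3, 1   # 8-bit RGB

embedded_source_mb = width * height * channels * bytes_per_channel / 2**20
preview_mb = embedded_source_mb      # one full-res preview per layer

def estimated_size_mb(num_smart_layers):
    # one shared embedded source + one rasterized preview per layer
    return embedded_source_mb + num_smart_layers * preview_mb

for n in (1, 3, 5):
    print(f"{n} smart-object layer(s): ~{estimated_size_mb(n):.1f} MB")
```

The estimate grows linearly with the number of smart-object layers, which is the point: the "one source, many references" picture only describes the embedded source, not the per-layer previews.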
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
Oy. What nonsense. You guys can carry this thread to its destination -- wherever the hell that be -- without any more of my input.
Photographer
D P P I X
Posts: 597
Lathrop, California, US
Snap2 wrote: It does increase the files size. Each duplicate contains not just the Smart Object itself but also a full res preview in the containing document. This is so that it can update and be filtered independently. Smart objects create very large files, like I said.

The smart object, though, is only referenced, right? I mean, if I have 300 smart object layers in the list, I'd only have 1 real image and 300 references. Unless my logic is backwards here, it wouldn't create a file 300 times the size, would it? I think I'm going to go experiment here in a second. If it's the way I'm thinking it is, I may adopt this new workflow. The way I'm seeing it now, I'd have the one background photo, which is the smart object, with the blemish removal inside that object, and it would be referenced a number of times for the filters. Those layers would of course have the rendered preview, which would be the same as doing a stamp visible and then applying said filter, right? So in the end we'd have the same number of layers?
Photographer
Robert Randall
Posts: 13890
Chicago, Illinois, US
Peano wrote: Oy. What nonsense. You guys can carry this thread to its destination -- wherever the hell that be -- without any more of my input. What was it about any of the previous posts that would upset you so much?
Retoucher
9stitches
Posts: 476
Los Angeles, California, US
D P P I X wrote:
the smart object though is only referenced right, i mean if I have 300 smart object layers in the list I'd only have 1 real image and the 300 references. unless my logic is backwards here it wouldn't create a file 300 times in size would it? i think I'm going to go experiment here in a second. If that's the case maybe I'll change my work flow, but if it's the way I am thinking it is then i may adopt this new work flow. The way I'm seeing it now is I'd have the one background photo which is the smart object and the blemish removal would be in that object, which would be referenced a number of times for the filters. Those layers would have of course the rendered preview, which would be the same as doing a stamp visible and then applying said filter right? So in the end we'd have the same number of layers?

I was also under the impression that duplicated smart layers only reference the file, not add it again (which seems inefficient, but perhaps there's some reason it must be this way). This can easily be tested, but it's time for me to quit Photoshop and play Lego Indiana Jones with my daughter, because, after all, that's what computers are for...
Photographer
doctorontop
Posts: 429
La Condamine, La Condamine, Monaco
I think the answer here is a yes and a no. Creating a Smart Object will result in a larger file size; however, repeat instances with no filters attached seem to carry roughly the same overhead as a standard duplicate layer holding the same image data. Accurately measuring memory overhead here is difficult, though, because PS stores data wherever it is allowed to address, via buffering and virtual paging. Memory bandwidth comes into play, with the OS loading non-associated data blocks and often overriding application priority-placement requests. Applications see only logical pointers, which is not the same as a physical address. Add a cache system accepting multi-threaded processor read/write requests and a page-management system, all controlled directly by the OS rather than the application itself, and used memory locations may be, and often are, fragmented. The memory measurement tools will only show the current lagging percentage allocation by the OS, not the actual memory used. So without a testing kernel, measuring code efficiency accurately is not really achievable. Edit: bring back MS-DOS .....
Retoucher
9stitches
Posts: 476
Los Angeles, California, US
doctorontop wrote: I think the answer here is a yes and a no. Creating a Smart object will result in a larger file size however repeat instances with no filters attached seems to carry roughly the same overhead as a standard duplicate layer holding the same image data. However accurately measuring memory overhead here is difficult because PS stores data to where ever it is allowed to address via buffering and virtual paging. Memory bandwidth comes into play here with the OS loading non associated data blocks and often overriding application priority placement requests. Applications see only logical pointers and that is not the same as a physical address itself along with a cache system accepting multi thread processor read/write requests and a page management system all of these processes are controlled directly by the OS and not the application itself so used memory locations maybe and often are fragmented. The memory measurement tools will only show the current lagging percentage allocation by the OS not the actual memory used. So without a testing kernel measuring code efficiency accurately is not really achievable. Edit bring back MS-DOS ..... Hence my lack of enthusiasm for changing workflow - yet. I "feel" a memory hit when using a lot of smart objects, but can't measure, except by trial and error. But if I had the time to measure by trial and error, I wouldn't be so concerned about performance!
Photographer
doctorontop
Posts: 429
La Condamine, La Condamine, Monaco
ezpkns retouching wrote: Hence my lack of enthusiasm for changing workflow - yet. I "feel" a memory hit when using a lot of smart objects, but can't measure, except by trial and error. But if I had the time to measure by trial and error, I wouldn't be so concerned about performance!

I agree, and feel the same memory hit, yet I have no proof to back that up. Scott Kelby and Martin Evening have both stated that there are memory-efficiency gains when using Smart Objects, and under certain parameters I am sure they are correct. However, with any large multi-event-driven system there will always be trade-offs and compromises, each with their own exceptions and caveats. It should be noted Adobe has stayed very quiet on the subject; I could not find any Adobe tech notes relating to Smart Objects and system-level efficiency. I think a more targeted question should be: do Smart Objects benefit my workflow? My answer to that would have to be yes, as I find them to be very useful. I will continue to use them, as I believe they bring us another step closer to the non-destructive editing mantra. The days of disk space being at a premium are no longer with us; larger, faster volumes are coming to market all the time. The one area that, whilst improving, still seems to need development is the speed of RAM: with ever faster processors, memory read/write operations seem to be a current bottleneck. But now, with major advances in optical RAM chips, the main stumbling block to optical computing seemingly overcome by the Australians, and IBM having already invested years into optical processors, the link to the Nature article below releases for me the inner geek within.......

http://www.nature.com/nature/journal/v4 … 08325.html

The performance-versus-functionality argument will continue to rage, and we as users will just have to make value judgments for ourselves.
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
ezpkns retouching wrote: But the point is, that besides auto-updating, it won't increase file size. D P P I X wrote: the smart object though is only referenced right, i mean if I have 300 smart object layers in the list I'd only have 1 real image and the 300 references. unless my logic is backwards here it wouldn't create a file 300 times in size would it? ezpkns retouching wrote: I was also under the impression that duplicated smart layers only reference the file, not add it again (which seems inefficient, but perhaps there's some reason it must be this way. This can easily be tested, but ... These are smart objects. Do the same with rasterized layers and you'll see precisely the same file sizes.
Retoucher
9stitches
Posts: 476
Los Angeles, California, US
Peano wrote: These are smart objects. Do the same with rasterized layers and you'll see precisely the same file sizes.
Peano comes through with the reality check (animated, no less). I think a lot of this discussion has been unfortunately based not on how smart objects work in real life, but how we wish they would work based on the hype surrounding their introduction (and as the good doctor pointed out, not just from Adobe). They have their place, but they're still not ready for prime time.
Photographer
doctorontop
Posts: 429
La Condamine, La Condamine, Monaco
Peano wrote:
ezpkns retouching wrote: But the point is, that besides auto-updating, it won't increase file size. D P P I X wrote: the smart object though is only referenced right, i mean if I have 300 smart object layers in the list I'd only have 1 real image and the 300 references. unless my logic is backwards here it wouldn't create a file 300 times in size would it? These are smart objects. Do the same with rasterized layers and you'll see precisely the same file sizes.
Peano, I have 2 PSD files linked here:

http://www.mediafire.com/file/zmir3zrw3em/2 test only sm.psd
http://www.mediafire.com/file/drmywmzo3fk/4 test only n.psd

Test 2 is 3 layers converted to a smart object and saved; file size 8.69 MB.
Test 4 is just the three layers, no smart object; file size 6.47 MB.

I have not uploaded them, but I saved both files out as TIFFs:

Test 2 = 11.5 MB
Test 4 = 9.41 MB

In all cases I reloaded the files into PS after they had saved, and the file size displayed was the same as shown in the screen capture below.
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
ezpkns retouching wrote: They have their place, but they're still not ready for prime time.

I think they're quite ready for prime time. Example: I have ACR set to open images in Photoshop as smart objects. That allows me to return the background layer to ACR and make further adjustments at any time. If I make a new smart object via copy, I can open that in ACR and make a separate exposure to blend with the background layer. When I run filters such as Portraiture, Unsharp Mask, and Shadows/Highlights, I always run them on smart objects so I can reopen them if needed; those image layers function just like adjustment layers. Smart objects are extremely useful ... if you understand how they work and know how to use them.
Retoucher
9stitches
Posts: 476
Los Angeles, California, US
I only meant they're not set to replace pixel layers. Still a bit clumsy, and with a lot of overhead. They're analogous to After Effects' nested compositions, which can be duplicated at will without adding to file size and don't affect performance. I believe we'll see further improvements to them. The PS team seems to believe in them, but they must know that there are still roadblocks to more mainstream adoption.
Retoucher
Peano
Posts: 4106
Lynchburg, Virginia, US
ezpkns retouching wrote: I only meant they're not set to replace pixel layers. They are pixel layers.
Still a bit clumsy, and with a lot of overhead. Clumsy how? And what "overhead"? They're no bigger than rasterized layers.
I believe we'll see further improvements to them. We'll see further improvements in everything in Photoshop.
there are still roadblocks to more mainstream adoption. What roadblocks? I use smart objects constantly, and I haven't noticed any roadblocks. I'll say again, if you understand how they work and how to use them, they are quite useful. When you don't understand how they work, they can seem "clumsy." That's true of any tool, though. Some are thankful that the cup is half full. Others complain that the cup is half empty. Such is life.