Using postprocessing as a layer manager (akin to Photoshop layers)? #229

Closed

trusktr opened this issue Oct 7, 2020 · 6 comments
Labels: discussion, feature request

trusktr commented Oct 7, 2020

Is your feature request related to a problem?

Can we use postprocessing to render a different set of objects, one set per RenderPass?

Describe the solution you'd like

It's not obvious from the docs, but what I'd like to do is render some objects on a bottom "layer" and other objects on a top layer (think Photoshop layers stacked vertically in the sidebar, except here the bottom layer is first in the list and the top layer is second).

Even if the objects would normally intersect, they won't appear to, because they render on separate layers.

Describe alternatives you've considered

Basically, I want to do the same thing as the following demo, which uses two WebGLRenderers, but I'm wondering if I can do it with postprocessing, with multiple RenderPasses (or something similar) and only one WebGLRenderer:

https://codepen.io/trusktr/pen/c38ac3666a0c04b1ae444378f740d294?editors=0010

The demo intentionally uses a scene with a single object in order to render as if there were two objects; although the object on the second layer is further back in 3D space, it intentionally renders on top.
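
In case the pen is unavailable, the two-renderer approach boils down to something like this (a minimal sketch; `sceneA`, `sceneB` and `camera` are assumed to exist, and it stacks two transparent canvases):

```js
import * as THREE from "three";

// Two renderers, two stacked transparent canvases.
const bottom = new THREE.WebGLRenderer({ alpha: true });
const top = new THREE.WebGLRenderer({ alpha: true });

for (const r of [bottom, top]) {
  r.setSize(window.innerWidth, window.innerHeight);
  r.domElement.style.position = "absolute";
  document.body.appendChild(r.domElement);
}

bottom.render(sceneA, camera);
top.render(sceneB, camera); // always draws over sceneA, regardless of depth
```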

Additional context

I tried to do this with EffectComposer and RenderPass, but there doesn't seem to be a documented API for drawing in between passes. The following doesn't work, but shows the intent:

https://codepen.io/trusktr/pen/da24853ae3880464009e2660280df5de?editors=0010

Did I miss it in the docs? If not, maybe there could be an official way to do it.

trusktr added the feature request label on Oct 7, 2020

trusktr commented Oct 7, 2020

I'm close! What am I missing?

https://codepen.io/trusktr/pen/26349f0af773cdef298c90cfc9e27412?editors=0010

My goal is to apply postprocessing to the combination of the layers. Seems like multiple RenderPasses followed by other effects would be the way to do that.

Perhaps another way is to render with separate renderers like I did (or maybe to separate render targets), then combine the textures into one and apply effects to the result.

I was also curious whether layers would be easier with postprocessing than as separate scenes. Ultimately I'd like to find the easiest approach for multiple visual layers, plus effects on any combination of layers.

trusktr commented Oct 7, 2020

Looks like the best way to make layers (assuming I didn't miss something here with postprocessing) is to do it as described here: https://discourse.threejs.org/t/multiple-scenes-vs-layers/12503

```js
renderer.autoClear = false;

renderer.clear();
renderer.render(scene, camera);
renderer.clearDepth();
// ... modify scene ...
renderer.render(scene, camera);
renderer.clearDepth();
// ...
```

but with that approach, it isn't clear how one would apply effects to the individual "layers". (EDIT: I suppose that could be updated to render each scene to a WebGLRenderTarget, then apply an EffectComposer to each WebGLRenderTarget as needed.)
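
If I go that route, I imagine it would look roughly like this (a sketch; the target size and scene names are placeholders):

```js
import * as THREE from "three";

// Render one "layer" scene into its own target; the texture can then be
// post-processed or blended with other layers.
const layerTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

renderer.setRenderTarget(layerTarget);
renderer.clear();
renderer.render(layerScene, camera);
renderer.setRenderTarget(null); // subsequent renders target the canvas again

// layerTarget.texture now holds the rendered layer.
```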

It seems that if the "layering" can be done entirely with postprocessing RenderPasses (instead of multiple renderers, or multiple renderer.render calls with autoClear disabled), then applying effects to each layer would be a simple additional step.

vanruesc (Member) commented Oct 8, 2020

Hi,

There's currently no abstraction for Photoshop-like layers in postprocessing. You can, however, save intermediate render results with the SavePass, apply effects, and then blend the textures into a final image using the TextureEffect wrapped in an EffectPass. You don't need to do that for your use case, though.

Take a look at this example: https://codesandbox.io/s/gracious-kepler-ktfve?file=/src/App.js

In the example above, I'm using a custom LambdaPass to execute simple functions (similar to onBeforeRender or onAfterRender hooks for passes). Meshes are added to different render layers via Selections and the camera render layers are adjusted prior to each render pass. This allows us to avoid using multiple scenes with duplicated lights. The second RenderPass only clears depth to render the second box on top of everything.
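
In case the sandbox link goes stale, the setup boils down to something like this (a loose sketch, not the exact example code; the layer numbers and meshes are placeholders, and the LambdaPass is spelled out because it's a custom pass, not a library export):

```js
import { ClearPass, EffectComposer, Pass, RenderPass } from "postprocessing";

// Minimal custom LambdaPass: runs a function when the pass executes.
class LambdaPass extends Pass {
  constructor(f) {
    super("LambdaPass");
    this.needsSwap = false;
    this.f = f;
  }
  render() {
    this.f();
  }
}

// Hypothetical render layers; lights need these layers enabled as well.
const LAYER_A = 1, LAYER_B = 2;
boxA.layers.set(LAYER_A);
boxB.layers.set(LAYER_B);

const composer = new EffectComposer(renderer);

// Bottom layer: render normally.
composer.addPass(new LambdaPass(() => camera.layers.set(LAYER_A)));
composer.addPass(new RenderPass(scene, camera));

// Top layer: clear depth only, keep the color buffer, then draw on top.
composer.addPass(new LambdaPass(() => camera.layers.set(LAYER_B)));
composer.addPass(new ClearPass(false, true, false)); // color, depth, stencil
const topPass = new RenderPass(scene, camera);
topPass.clear = false; // don't let this pass clear on its own (API may vary by version)
composer.addPass(topPass);
```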

Solving more advanced compositing problems sometimes requires knowledge about the inner workings of the EffectComposer. The current pipeline is flexible enough to get most image compositions done, but it's not exactly easy to use, and the details of the internal buffer management are undocumented because they're supposed to be private, despite being relevant. This is a design flaw, and the system is error-prone (#225).

I'm planning on addressing this by introducing explicit input/output buffer management for passes. (Related: mrdoob/three.js#12115 (comment), mrdoob/three.js#10981)

trusktr commented Oct 9, 2020

Interesting demo. Thanks!

> This allows us to avoid using multiple scenes with duplicated lights

What happens if I put the same light in multiple scenes? Will it re-compile shaders for each one or something?

Here's a demo where I've used multiple scenes with a VisualLayers class to manage each one as a layer:

https://codepen.io/trusktr/pen/OJXPXxo?editors=1010

I see that WebGLRenderer tracks the scene everywhere, so maybe I could update VisualLayers to use a single scene and swap root objects within it, so that the renderer only tracks one scene. If re-compiling shaders turns out to be a problem (I haven't checked yet), maybe that would solve it.

I like that VisualLayers makes it easier to re-arrange layers and isn't limited to 32 layers, unlike camera.layers (or does Selections lift those limitations?).


Looks like the RenderPass in three.js has had a clearDepth property since r83, which seems to achieve the same thing as the ClearPass in your example.
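
For comparison, with three's own composer that would be something like this (a sketch; the scene names are placeholders):

```js
import { EffectComposer } from "three/examples/jsm/postprocessing/EffectComposer.js";
import { RenderPass } from "three/examples/jsm/postprocessing/RenderPass.js";

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(sceneBottom, camera));

const topPass = new RenderPass(sceneTop, camera);
topPass.clear = false;     // keep the color buffer from the pass below
topPass.clearDepth = true; // reset depth so this layer draws on top
composer.addPass(topPass);
```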

But now I see how to use ClearPass, which I didn't realize before. Thanks for the help! I will play around with that and post an updated pen here in case it helps someone.

vanruesc (Member) commented Oct 9, 2020

Thanks for your input!

> What happens if I put the same light in multiple scenes?

You can't. You'd have to clone them.

> Will it re-compile shaders for each one or something?

Shader re-compilations happen when shader preprocessor macros change: for example, when lights are added or removed dynamically at runtime, when fog is enabled or disabled on the fly, or when outputEncoding changes; major things like that.

It's also possible for a material to get re-compiled when an object is moved from one scene to another even if the lights are seemingly the same.

> use one scene and instead swap root objects in the single scene

I think that's generally a good approach for switching between different subscenes. You can have one main scene and organize subscenes in Groups.
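
Something along these lines (a rough sketch with hypothetical group names):

```js
import * as THREE from "three";

// One main scene; each subscene lives in its own Group.
const scene = new THREE.Scene();
const subsceneA = new THREE.Group();
const subsceneB = new THREE.Group();
scene.add(subsceneA, subsceneB);

// Toggle visibility between render calls instead of swapping scenes.
renderer.autoClear = false;
renderer.clear();

subsceneA.visible = true;
subsceneB.visible = false;
renderer.render(scene, camera);

renderer.clearDepth();
subsceneA.visible = false;
subsceneB.visible = true;
renderer.render(scene, camera);
```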

> does Selections uplift those limitations

No, Selections use three's render layer system; they just make it easier to track which objects belong to which layer. Bitwise operations such as fast layer mask checks are limited to 32-bit integers in JavaScript. In my opinion, this limitation should never become a problem, as 32 layers are more than enough for the vast majority of use cases. These layers are also not intended to be used for Photoshop-like layer management; it's better to implement something like the VisualLayers class or a custom RenderPass for that.
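
For reference, the underlying mechanism in plain three (the layer index is arbitrary):

```js
mesh.layers.set(1);        // the mesh belongs to render layer 1 only
camera.layers.set(1);      // the camera renders layer 1 only
camera.layers.enableAll(); // or render everything again

// Layer membership is a single 32-bit mask, hence the 32-layer limit.
console.log(camera.layers.mask.toString(2));
```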

Layers in Photoshop are images that are blended pixel-by-pixel using blend functions. This is already supported in postprocessing: you can render scenes into different textures which can be likened to image layers and then mix them together as explained in my previous post. The planned explicit frame buffer management API will also make this easier.
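
In outline, that composition looks something like this (a sketch; the blend function and scene names are just examples):

```js
import {
  BlendFunction, EffectComposer, EffectPass,
  RenderPass, SavePass, TextureEffect
} from "postprocessing";

const composer = new EffectComposer(renderer);

// Render the first scene and save the result into a texture.
composer.addPass(new RenderPass(sceneA, camera));
const savePass = new SavePass();
composer.addPass(savePass);

// Render the second scene, then blend the saved texture back in.
composer.addPass(new RenderPass(sceneB, camera));
const textureEffect = new TextureEffect({
  texture: savePass.renderTarget.texture,
  blendFunction: BlendFunction.SCREEN
});
// (In older versions, set renderToScreen = true on the final pass.)
composer.addPass(new EffectPass(camera, textureEffect));
```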

The layer system you proposed mainly amounts to resetting depth for specific groups of objects; the layer order translates directly to the order of the render calls. This problem can be solved with render layers, as shown earlier. Rendering up to or even more than 32 groups of objects per frame with intermediate depth-clear operations seems infeasible to me, because the CPU overhead could easily get out of hand depending on the complexity of the scene.

This library provides a foundation for realtime image filters; I don't think it should provide an alternative to render layers. I also plan on turning postprocessing into a plugin, which involves moving away from wrapping renderer.render calls.

vanruesc (Member) commented Nov 2, 2020

Closing due to inactivity. If you wish to discuss this further, feel free to reopen.

vanruesc closed this as completed on Nov 2, 2020
vanruesc added the discussion label on Nov 2, 2020