
Cannot pass items implementing blade_graphics::traits to blade_egui #249

Open · EriKWDev opened this issue Jan 12, 2025 · 3 comments

@EriKWDev
Contributor

EriKWDev commented Jan 12, 2025

In blade_egui, the paint function expects a pass: &mut blade_graphics::RenderCommandEncoder, since it needs to call .with internally.

In some parts of my code I use the blade_graphics::traits, which provide RenderEncoder and RenderPipelineEncoder. However, since the with function is not part of the traits (or blade's common API), the egui renderer cannot be used with those types: the paint function's signature cannot be refactored to take a trait instead, because the traits don't cover everything it needs.

I still think the .with function and its return type should be standardized: let it always specify both the pipeline's and the encoder's lifetime, as the Vulkan implementation requires, and other implementations can use phantom data for the lifetimes.
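
For concreteness, here is a minimal sketch of what standardizing it could look like (the names and exact shapes are hypothetical, not blade's current API), using a generic associated type so every backend keeps its own concrete return type:

use std::marker::PhantomData;

// Stand-in for blade_graphics::RenderPipeline; name assumed for illustration.
pub struct RenderPipeline;

pub trait RenderEncoder {
    // Each backend returns its own pipeline encoder, but the trait forces it
    // to carry both the pass lifetime ('enc) and the pipeline lifetime ('pip),
    // as the Vulkan implementation requires.
    type WithPipeline<'enc, 'pip>
    where
        Self: 'enc;

    fn with<'enc, 'pip>(
        &'enc mut self,
        pipeline: &'pip RenderPipeline,
    ) -> Self::WithPipeline<'enc, 'pip>;
}

// A backend whose encoder doesn't actually borrow either value can satisfy
// the signature with phantom lifetimes.
pub struct NoopPipelineEncoder<'enc, 'pip> {
    _marker: PhantomData<(&'enc mut (), &'pip RenderPipeline)>,
}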

@kvark
Owner

kvark commented Jan 13, 2025

Could you provide a piece of the problematic code, so we can better understand the problem?

@EriKWDev
Contributor Author

EriKWDev commented Jan 15, 2025

Sorry, I was coding away on other parts for a few days!

The core faulty design decision on my end was assuming I could have a utility function return an impl gpu::traits::RenderEncoder and still have the caller choose a new pipeline. It turned out I could just return the concrete RenderCommandEncoder type instead, which actually made things simpler. Nice!

We have a blit utility function, and when blitting the final off-screen buffer to the frame view with postprocessing, we can draw the GUI in the same render pass. The function therefore returns the RenderCommandEncoder to facilitate this use case.

let options = BlitOptions {
    name: "blit with postprocess + gui",
    from_view: state.render_buffers.resolved_color_view(),
    from_format: state.render_buffers.settings.formats.color,
    from_sample_count: 1,
    to_view: frame_view,
    to_format: surface_info.format,
    postprocess: true,
    ..Default::default()
};

if let (Some(mut rp), None) = blit_with_options(
    &state.context,
    encoder,
    options,
    &mut state.pipelines,
    &mut state.samplers,
) {
    state.gui_painter.paint(
        &mut rp,
        &egui_tesselation_result,
        &state.screen_desc,
        &state.context,
    );
}

Sidenote/fun: details on a neat utility function in our renderer for blitting

We have a little function that handles the implementation details of blits so the rest of the render code can be expressed a lot more neatly. For pure copies with matching sizes and formats the renderer expects the user to just do transfers, but often 'copying' one texture to another involves many nuances that still all fit together in the shader:

Some blits need to go from MSAA to non-MSAA, some from depth buffers to r-float buffers, some need to set a custom Z value and thus require no fragment shader, some need downsampling in compute (we have no 'max' samplers), some convert sRGB to non-sRGB or vice versa, some apply postprocessing...

#[profiling::function]
pub fn blit_with_options<'a>(
    context: &gpu::Context,
    encoder: &'a mut gpu::CommandEncoder,

    // encodes the textureviews, formats and settings..
    options: BlitOptions,

    // caches pipelines by hashing options and resolving #ifdefs..
    pips: &mut ImmediatePipelines,

    // caches samplers by hashing options...
    samplers: &mut ImmediateSamplers,
) -> (
    Option<gpu::RenderCommandEncoder<'a>>,
    Option<gpu::ComputeCommandEncoder<'a>>,
) {
    // Compile a shader permutation specific to blitting with these options
    // (resolves #ifdefs and other stuff); reuses a cached pipeline, keyed by
    // a hash, if the same options were requested previously.
    let (compute_pipeline, render_pipeline) = pips.get_blit_pipeline_for_options(options);

    if options.needs_compute() {
        let mut cp = encoder.compute(options.name);
        let pip = compute_pipeline.unwrap();
        let mut c = cp.with(pip);

        // set the right uniforms...
        options.bind_shader_data(&mut c, 0, samplers);

        let wg = pip.get_workgroup_size();
        let groups = options.num_dispatch_groups(wg);
        c.dispatch(groups);

        (None, Some(cp))
    } else {
        let pip = render_pipeline.unwrap();

        // create a RenderTargetSet with appropriate targets given 'BlitOptions'
    // (has to be a macro due to slice lifetimes)
        let set = create_blit_render_target_set! { options };

        let mut rp = encoder.render(options.name, set);
        let mut r = rp.with(pip);

        if let Some(sc) = options.scissor {
            r.set_scissor_rect(&sc);
        }
        if let Some(vp) = options.viewport {
            r.set_viewport(&vp);
        }

        // set the right uniforms/samplers/textures...
        options.bind_shader_data(&mut r, 0, samplers);

        // draw a full-screen triangle, vertices calculated from vertex_id
        r.draw(0, 3, 0, 1);

        (Some(rp), None)
    }
}

And a snippet from just the blit fragment shader, to show why WGSL alone doesn't quite let us express such a dynamic use case:

@fragment
fn fs_blit_simple(vertex: BlitOutput) -> @location(0) vec4<f32> {
    return sample_input_at_uv(vertex.uv);
}

@fragment
fn fs_blit_postprocess(vertex: BlitOutput) -> @location(0) vec4<f32> {
    let color_in = sample_input_at_uv(vertex.uv);

    // TODO: More postprocessing, tonemapping..

#ifdef SRGB_TARGET
    //
    // NOTE: rendering to a SRGB target causes automatic linear->gamma conversion,
    //       so no need to do it in shader in that case
    // 
    var color_out = color_in.rgb;
#else
    var color_out = gamma_from_linear_rgb(color_in.rgb);
#endif

    return vec4<f32>(color_out, color_in.a);
}


var blit_input_sampler: sampler;
#ifdef INPUT_MSAA
#ifdef INPUT_DEPTH
var blit_input_texture: texture_depth_multisampled_2d;
#endif
#ifndef INPUT_DEPTH
var blit_input_texture: texture_multisampled_2d<f32>;
#endif
#else
#ifdef INPUT_DEPTH
var blit_input_texture: texture_depth_2d;
#endif
#ifndef INPUT_DEPTH
var blit_input_texture: texture_2d<f32>;
#endif
#endif


fn sample_input_at_uv(uv: vec2<f32>) -> vec4<f32> {
#ifdef INPUT_MSAA
#ifdef INPUT_DEPTH
    var avg = 0.0;
#endif
#ifndef INPUT_DEPTH
    var avg = vec4<f32>(0.0);
#endif
    let texel = uv_to_texel(uv, textureDimensions(blit_input_texture));
    for (var sample = 0u; sample < num_samples.x; sample++) {
        //
        // NOTE: Why would I be allowed to write this.. Way too nice.. xD
        //
        //     "Image sample or level-of-detail index's type of [14] is not an integer scalar"
        //
        // avg += textureLoad(blit_input_texture, texel, sample);

        // wgsl wants this beauty instead...
        switch(sample) {
            case 0u: { avg += textureLoad(blit_input_texture, texel, 0); }
            case 1u: { avg += textureLoad(blit_input_texture, texel, 1); }
            case 2u: { avg += textureLoad(blit_input_texture, texel, 2); }
            case 3u: { avg += textureLoad(blit_input_texture, texel, 3); }
            case 4u: { avg += textureLoad(blit_input_texture, texel, 4); }
            case 5u: { avg += textureLoad(blit_input_texture, texel, 5); }
            case 6u: { avg += textureLoad(blit_input_texture, texel, 6); }
            case 7u: { avg += textureLoad(blit_input_texture, texel, 7); }
            default: {
                avg += textureLoad(blit_input_texture, texel, 0);
            }
        }
    }
    avg /= f32(num_samples.x);

#ifdef INPUT_DEPTH
    return vec4f(avg, 0.0, 0.0, 0.0);
#endif
#ifndef INPUT_DEPTH
    return avg;
#endif

#else
#ifdef INPUT_DEPTH
    let r = textureSample(blit_input_texture, blit_input_sampler, uv);
    return vec4f(r, 0.0, 0.0, 0.0);
#endif
#ifndef INPUT_DEPTH
    return textureSample(blit_input_texture, blit_input_sampler, uv);
#endif
#endif
}

We also have a compute shader that does a textureGather with max()/min() of each component, used when blitting the depth buffer for occlusion culling.
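
As a rough sketch (binding names, workgroup size, and picking max over min are illustrative here, not our exact shader), such a gather-and-reduce depth downsample can look something like this:

var src_depth: texture_depth_2d;
var src_sampler: sampler;
var dst_depth: texture_storage_2d<r32float, write>;

@compute @workgroup_size(8, 8)
fn cs_downsample_depth_max(@builtin(global_invocation_id) gid: vec3<u32>) {
    let dst_size = textureDimensions(dst_depth);
    if (gid.x >= dst_size.x || gid.y >= dst_size.y) {
        return;
    }

    // Assuming dst is half the size of src: sample at the center of the
    // 2x2 source footprint that corresponds to this output texel.
    let uv = (vec2<f32>(gid.xy) + 0.5) / vec2<f32>(dst_size);

    // textureGather on a depth texture returns the 4 neighboring depth texels.
    let d = textureGather(src_depth, src_sampler, uv);

    // Reduce with max(); whether max or min is the conservative choice
    // depends on the depth convention.
    let m = max(max(d.x, d.y), max(d.z, d.w));
    textureStore(dst_depth, gid.xy, vec4<f32>(m, 0.0, 0.0, 0.0));
}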

From a trait perspective I still think encapsulating with would be cleanest, but code shouldn't be written from a trait perspective unless it has to be xP So this turned out nicer.

@kvark
Owner

kvark commented Jan 19, 2025

Ok, so it sounds like there is no blocker/issue today. We can leave this issue open as a suggestion to expose with in the trait. Could we adjust the title accordingly?
