
Dear Author, I would also like to know why the backward propagation for max pooling does not use the cuDNN API. #8921

@zzk2021

Description

@zzk2021
I think so. I would also like to know why the backward propagation for max pooling does not use the cuDNN API.

And why doesn't the forward pass use it either? Also, why is the preprocessor condition guarding the cuDNN branch CUDNN_DISABLED? It's very strange.

extern "C" void forward_local_avgpool_layer_gpu(maxpool_layer layer, network_state state)
{

#ifdef CUDNN_DISABLED
    if (!state.train && layer.stride == layer.size) {
        // cudnnPoolingBackward
        cudnnStatus_t maxpool_status;

        float alpha = 1, beta = 0;
        maxpool_status = cudnnPoolingForward(
            cudnn_handle(),
            layer.poolingDesc,
            &alpha,
            layer.srcTensorDesc,
            state.input,
            &beta,
            layer.dstTensorDesc,
            layer.output_gpu);

        //maxpool_status = cudnnDestroyPoolingDescriptor(poolingDesc);
        //cudnnDestroyTensorDescriptor(layer.srcTensorDesc);
        //cudnnDestroyTensorDescriptor(layer.dstTensorDesc);

    }
    else
#endif
    {
        int h = layer.out_h;
        int w = layer.out_w;
        int c = layer.out_c;

        size_t n = h*w*c*layer.batch;

        forward_local_avgpool_layer_kernel <<<cuda_gridsize(n), BLOCK, 0, get_cuda_stream() >>> (n, layer.h, layer.w, layer.c, layer.stride_x, layer.stride_y, layer.size, layer.pad, state.input, layer.output_gpu);
        CHECK_CUDA(cudaPeekAtLastError());
    }
}

Originally posted by @zzk2021 in #8302 (comment)
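
For reference, the sketch below shows roughly how a cuDNN-based backward pass could be wired up, mirroring the cudnnPoolingForward call quoted above. This is only an illustration of the cudnnPoolingBackward API, not code from this repository: the descriptor fields are reused from the forward snippet, while layer.delta_gpu, state.delta and the CHECK_CUDNN macro are assumed names for the gradient buffers and error check.

// Hypothetical sketch only: wiring up cudnnPoolingBackward, analogous to the
// cudnnPoolingForward call in the quoted forward function. Descriptor fields
// (poolingDesc, srcTensorDesc, dstTensorDesc) are reused from that snippet;
// layer.delta_gpu and state.delta are assumed names for the output/input
// gradients, and CHECK_CUDNN is an assumed status-checking macro.
extern "C" void backward_maxpool_layer_cudnn_gpu(maxpool_layer layer, network_state state)
{
    float alpha = 1, beta = 0;

    // cuDNN needs the forward output (y), its gradient (dy) and the forward
    // input (x), and writes the gradient with respect to the input (dx).
    cudnnStatus_t maxpool_status = cudnnPoolingBackward(
        cudnn_handle(),
        layer.poolingDesc,
        &alpha,
        layer.dstTensorDesc, layer.output_gpu,   // y:  pooled output
        layer.dstTensorDesc, layer.delta_gpu,    // dy: gradient w.r.t. output
        layer.srcTensorDesc, state.input,        // x:  original input
        &beta,
        layer.srcTensorDesc, state.delta);       // dx: gradient w.r.t. input
    CHECK_CUDNN(maxpool_status);
}

One thing the signature makes clear: cudnnPoolingBackward needs the forward input, the forward output and the output gradient together, so it only pairs naturally with a forward pass that also went through cuDNN.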
