
[BUG] tensor.hpp must be included before mma_atom.hpp #1254

Closed
mammoth831 opened this issue Dec 7, 2023 · 9 comments
Labels
bug Something isn't working

Comments


mammoth831 commented Dec 7, 2023

Describe the bug
I find that <cute/tensor.hpp> must be included before <cute/atom/mma_atom.hpp>; otherwise the code does not compile.

Steps/Code to reproduce bug

Compile with nvcc test.cu -I include/ -std=c++17

#include <cute/atom/mma_atom.hpp>
// or include only <cute/atom/mma_atom.hpp> by itself; the error is the same
#include <cute/tensor.hpp>

int main() {}

The compiler output:

include/cute/algorithm/gemm.hpp(83): error: "gemm" has already been declared in the current scope
  gemm(MMA_Atom<MMA> const& mma,
  ^

include/cute/algorithm/gemm.hpp(81): warning #1835-D: attribute "always_inline" does not apply here
  __inline__ __attribute__((always_inline)) __attribute__((host)) __attribute__((device))
                            ^

Remark: The warnings can be suppressed with "-diag-suppress <warning-number>"

include/cute/algorithm/gemm.hpp(81): warning #1835-D: attribute "__host__" does not apply here
  __inline__ __attribute__((always_inline)) __attribute__((host)) __attribute__((device))
                                                           ^

include/cute/algorithm/gemm.hpp(83): error: incomplete type is not allowed
  gemm(MMA_Atom<MMA> const& mma,
  ^

include/cute/algorithm/gemm.hpp(83): error: identifier "MMA_Atom" is undefined
  gemm(MMA_Atom<MMA> const& mma,
       ^

include/cute/algorithm/gemm.hpp(83): error: type name is not allowed
  gemm(MMA_Atom<MMA> const& mma,
                ^

include/cute/algorithm/gemm.hpp(83): error: expected an expression
  gemm(MMA_Atom<MMA> const& mma,
                     ^

include/cute/algorithm/gemm.hpp(84): error: type name is not allowed
       Tensor<TA, ALayout> const& A,
       ^

include/cute/algorithm/gemm.hpp(84): error: expected a ")"
       Tensor<TA, ALayout> const& A,
                           ^

include/cute/algorithm/gemm.hpp(87): error: expected a ";"
  {
  ^

At end of source: error: expected a ";"

At end of source: error: expected a "}"
include/cute/algorithm/gemm.hpp(59): note #3196-D: to match this "{"
  {
  ^

10 errors detected in the compilation of "test.cu".

Swapping the order of the two includes makes the code compile normally.
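That is, the following order compiles without errors:

#include <cute/tensor.hpp>
#include <cute/atom/mma_atom.hpp>

int main() {}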

Expected behavior

It seems mma_atom.hpp already includes tensor.hpp, and both files are protected by #pragma once, so it should not be necessary to add an explicit

#include <cute/tensor.hpp>

before <cute/atom/mma_atom.hpp>.

Environment details (please complete the following information):
nvcc: V12.2.128
cutlass: a75b4ac (tag:v3.3.0)

@mammoth831 mammoth831 added ? - Needs Triage bug Something isn't working labels Dec 7, 2023
Collaborator

hwu36 commented Dec 7, 2023

PR please?

@mhoemmen @thakkarV

Collaborator

hwu36 commented Dec 7, 2023

@ezhulenev

Contributor

mhoemmen commented Dec 7, 2023

It seems mma_atom.hpp already include[s] tensor.hpp and these two files are protected by #pragma once. So I think we don't need to ... include <cute/tensor.hpp> [explicitly] before <cute/atom/mma_atom.hpp>.

I agree, but something else might be wrong here. The point of #pragma once is to prevent multiple includes of the same header. It looks like some other header might not have correct #pragma once protection. Note how the multiple definition errors are for functions in the header file include/cute/algorithm/gemm.hpp.

Collaborator

thakkarV commented Dec 8, 2023

CC @ccecka

@andylolu2
Contributor

Same issue

Contributor

reed-lau commented Dec 23, 2023

The dependencies can be summarized in the following table:

| File         | Includes                 | Depends on               | Defines          | Includes at end of file |
|--------------|--------------------------|--------------------------|------------------|-------------------------|
| tensor.hpp   | NA                       | NA                       | Tensor           | gemm.hpp                |
| gemm.hpp     | tensor.hpp, mma_atom.hpp | Tensor, MMA_Atom, ThrMMA | gemm functions   | NA                      |
| mma_atom.hpp | tensor.hpp               | Tensor                   | MMA_Atom, ThrMMA | NA                      |

When we include tensor.hpp first, the preprocessed code looks roughly like:

Tensor;
// include <gemm.hpp> =>
//   include <mma_atom.hpp> =>
    MMA_Atom;
    ThrMMA;
//   end include <mma_atom.hpp>
gemm(MMA_Atom);
gemm(ThrMMA);

When we include mma_atom.hpp first, the preprocessed code looks roughly like:

// include <mma_atom.hpp> =>
//   include <tensor.hpp> =>
Tensor;
//     include <gemm.hpp> =>
//       <mma_atom.hpp> is already being included (its #pragma once guard is set),
//       so it is skipped here and its definitions are not emitted:
//       MMA_Atom;
//       ThrMMA;
gemm(MMA_Atom); // fails: MMA_Atom is not defined yet
gemm(ThrMMA);   // fails: ThrMMA is not defined yet
MMA_Atom;
ThrMMA;
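For what it's worth, the same failure mode can be reproduced outside of CUTLASS with three toy headers that mirror the structure in the table above. The file contents below are purely illustrative, not the real CuTe sources:

// --- tensor.hpp (toy sketch) ---
#pragma once
struct Tensor {};
#include "gemm.hpp"        // included at the end of the file, as in the table above

// --- mma_atom.hpp (toy sketch) ---
#pragma once
#include "tensor.hpp"
struct MMA_Atom {};

// --- gemm.hpp (toy sketch) ---
#pragma once
#include "tensor.hpp"
#include "mma_atom.hpp"    // skipped when mma_atom.hpp is the header currently being processed
void gemm(MMA_Atom const&, Tensor const&);

// --- main.cpp ---
#include "mma_atom.hpp"    // error: MMA_Atom is not yet defined when gemm.hpp declares gemm()
// #include "tensor.hpp"   // including tensor.hpp first instead compiles fine
int main() {}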

Solution 1: simply forward-declare MMA_Atom/ThrMMA in gemm.hpp so that it compiles on its own. The downside is that whenever the template parameters of MMA_Atom/ThrMMA change, the forward declarations (living in a separate file) must be updated to match.
Solution 2: make sure the gemm functions are only seen after Tensor and MMA_Atom/ThrMMA have been defined (sketched after this list):

  1. include gemm.hpp at the end of mma_atom.hpp
  2. include mma_atom.hpp at the end of tensor.hpp
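For illustration, here is what Solution 2 looks like on the toy headers from the sketch above (again hypothetical contents, not the actual CuTe sources): each cross-header include is moved to the end of its file, so every type is defined before any header that depends on it is pulled in.

// --- tensor.hpp (toy sketch, Solution 2) ---
#pragma once
struct Tensor {};
#include "mma_atom.hpp"   // pulled in only after Tensor is defined

// --- mma_atom.hpp (toy sketch, Solution 2) ---
#pragma once
#include "tensor.hpp"
struct MMA_Atom {};
#include "gemm.hpp"       // pulled in only after MMA_Atom is defined

// --- gemm.hpp (toy sketch, Solution 2) ---
#pragma once
#include "tensor.hpp"
#include "mma_atom.hpp"
void gemm(MMA_Atom const&, Tensor const&);   // every referenced type is complete here

// --- main.cpp ---
#include "mma_atom.hpp"   // any one of the three headers now compiles on its own
int main() {}

Whichever header is included first, the end-of-file includes ensure gemm.hpp is only reached after Tensor and MMA_Atom exist. Solution 1 would instead add forward declarations of MMA_Atom/ThrMMA at the top of gemm.hpp, with the maintenance cost noted above.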


@mhoemmen
Contributor

@reed-lau It sounds like you would prefer Solution 2; is that right?

@reed-lau
Contributor

@reed-lau It sounds like you would prefer Solution 2; is that right?

yes.

@mnicely mnicely closed this as completed Feb 22, 2024