[CORE] New compilation options #11790
Conversation
It is not fair :(
Closing as I finally managed to test the CI locally :D
I modified your comment because you were mentioning a random guy whose username is altair.
Ouch, sorry.
@rfaasse It seems that one of the problems with the nightly is due to […]. I don't want to merge this without having fixed all pipelines, so I added a change in […]. As far as I can see, the problem comes from […]. I've fixed it by explicitly instantiating:

```cpp
template void UPwSmallStrainInterfaceElement<2, 4>::InterpolateOutputValues<array_1d<double, 3>>(std::vector<array_1d<double, 3>>& rOutput, const std::vector<array_1d<double, 3>>& GPValues);
template void UPwSmallStrainInterfaceElement<3, 6>::InterpolateOutputValues<array_1d<double, 3>>(std::vector<array_1d<double, 3>>& rOutput, const std::vector<array_1d<double, 3>>& GPValues);
template void UPwSmallStrainInterfaceElement<3, 8>::InterpolateOutputValues<array_1d<double, 3>>(std::vector<array_1d<double, 3>>& rOutput, const std::vector<array_1d<double, 3>>& GPValues);
template void UPwSmallStrainInterfaceElement<2, 4>::InterpolateOutputValues<Matrix>(std::vector<Matrix>& rOutput, const std::vector<Matrix>& GPValues);
template void UPwSmallStrainInterfaceElement<3, 6>::InterpolateOutputValues<Matrix>(std::vector<Matrix>& rOutput, const std::vector<Matrix>& GPValues);
template void UPwSmallStrainInterfaceElement<3, 8>::InterpolateOutputValues<Matrix>(std::vector<Matrix>& rOutput, const std::vector<Matrix>& GPValues);
```

Is that OK with you?
@philbucher It's alive, finally! @ddiezrod If you give me the green light from altair and there is no other problem, we can merge 👍
@aaunnam Can you please check if our CI passes with these changes?
cool!
sure, np
@philbucher, @aaunnam did you have time to look over the changes?
A priori it does not break our CI.
only preliminary comments, still need a bit more time
Nice! Great job.
Only one small correction required.
I don't like the code duplication that is required (GitLab CI is so much nicer for this), but that's not for this PR, unless you have an idea/solution for it.
```bash
add_app ${KRATOS_APP_DIR}/LinearSolversApplication;
add_app ${KRATOS_APP_DIR}/MetisApplication;
add_app ${KRATOS_APP_DIR}/TrilinosApplication;
```
Shouldn't this be in dependencies? `configure_core.sh` is a bit misleading otherwise.
Yes, I agree, but for reasons I need those three to be compiled in the first stage no matter what, otherwise the list of dependencies and when to activate them becomes a mess. I may change the name of the script to reflect that :S
(Basically, moving Trilinos and the linear solvers retriggers the compilation of the core no matter what, so ...)
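For context, a minimal sketch of the `add_app` helper pattern that these configure scripts rely on; the exact definition in the repository may differ, and `KRATOS_APP_DIR` is taken from the snippet above:

```bash
# Sketch: append an application folder to the semicolon-separated list
# that the Kratos CMake setup later reads (KRATOS_APPLICATIONS).
add_app () {
    export KRATOS_APPLICATIONS="${KRATOS_APPLICATIONS}$1;"
}

KRATOS_APP_DIR="applications"
add_app ${KRATOS_APP_DIR}/LinearSolversApplication
add_app ${KRATOS_APP_DIR}/MetisApplication
add_app ${KRATOS_APP_DIR}/TrilinosApplication
```

Keeping these three calls in `configure_core.sh` means they are always part of the first build stage, which is what the reply above argues for.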
Let's gooo!
Clap 👏🏻
Btw, is this mainly to solve the space issue?
Is it faster or slower now?
And will you also do this for the other apps?
It solves the space issue; no change in speed, I'm afraid, but since we use 4 cores by default maybe it's faster. As for the other apps, we can split further if we run out of space again; we can even create a single job for every app and compile selectively, if that's what you are asking :)
📝 Description
This adds some new options for Kratos compilation and changes how the CI performs the compilation. Some of these changes should ease the process of converting Kratos and its apps into discoverable CMake "modules" (@mpentek).
It also allows fixing the disk space problems in the CI/FullDebug job.
List of new compilation options:
EXCLUDE_KRATOS_CORE=[ON/OFF]
: If set toON
allows the core to be excluded from a compilation. DefaultOFF
EXCLUDE_AUTOMATIC_DEPENDENCIES=[ON/OFF]
: If set toON
allows cmake to ignore the application dependencies specified in the cmake file of the applications. DefaultOFF
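A hypothetical sketch of how a two-stage build could combine these options; the paths, the application split, and the exact flags are placeholders for illustration, not the actual CI commands of this PR:

```bash
# Hypothetical two-stage build using the new options.
KRATOS_SOURCE="${KRATOS_SOURCE:-$(pwd)}"
KRATOS_BUILD="${KRATOS_BUILD:-$(pwd)/build}"
KRATOS_APP_DIR="applications"

# Stage 1: compile the core together with the applications everything else needs.
export KRATOS_APPLICATIONS="${KRATOS_APP_DIR}/LinearSolversApplication;${KRATOS_APP_DIR}/MetisApplication;${KRATOS_APP_DIR}/TrilinosApplication;"
cmake -S "${KRATOS_SOURCE}" -B "${KRATOS_BUILD}" -DUSE_MPI=ON
cmake --build "${KRATOS_BUILD}" --target install -- -j4

# Stage 2: compile the remaining applications against the already-built core,
# skipping the core itself and the per-application dependency resolution.
export KRATOS_APPLICATIONS="${KRATOS_APP_DIR}/StructuralMechanicsApplication;"  # example app
cmake -S "${KRATOS_SOURCE}" -B "${KRATOS_BUILD}" -DUSE_MPI=ON \
      -DEXCLUDE_KRATOS_CORE=ON \
      -DEXCLUDE_AUTOMATIC_DEPENDENCIES=ON
cmake --build "${KRATOS_BUILD}" --target install -- -j4
```

The idea described in the conversation is that the second stage does not recompile the core, which is what addresses the disk space issue in the CI/FullDebug job.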
List of changes made to the CMake files:

- `MappingMPIExtension` was renamed to `MeshTrilinosExtension`, mainly because it relies on the `EPETRA_EV` vector. Note that with this change CoSimApplication is the only application with a pure non-Trilinos MPI extension.

List of changes in the CI: