Pipeline on GCP fails with "Error: pipeline dependencies not found" #224
Comments
I looked at the two files but can't find any helpful information for debugging.
Can you upgrade Caper (which includes a Cromwell version upgrade, 52 -> 59) and try again? Please follow the upgrade instructions in Caper's release notes.
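For readers hitting the same issue, a minimal sketch of one way to perform the upgrade, assuming Caper was installed with pip inside the pipeline's conda environment (adjust to however you originally installed it):

```bash
# Upgrade Caper inside the environment that runs the server and the jobs
# (assumes a pip-based install; use conda instead if that is how it was installed).
pip install --upgrade caper

# Confirm the installed version after the upgrade.
pip show caper | grep -i version
```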
I'll give that a try and report back. Thanks, Jin!
I've upgraded Caper/Cromwell (and verified the version update). Running the same 25 jobs, I still get the exact same errors, and the Caper server crashes. I then tried running just one job. Intriguingly, it succeeded! That suggests to me that either a subset of the jobs is crashing and taking down the entire Caper server (and the other jobs with it), or simply having too many jobs at a time is causing trouble... Very strange! Any ideas? In the meantime, I'm going to try running a few more jobs on their own and see how that goes.
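Since a single job succeeds while 25 at once crash the server, one thing worth checking is the server's concurrency limits. A sketch below; the option name shown is an assumption and should be verified against your Caper version's help output:

```bash
# List any server options related to concurrency (names vary between Caper versions).
caper server --help | grep -i concurrent

# Hypothetical example: restart the server with fewer workflows running at once.
# Replace the flag with whatever your Caper version actually exposes.
caper server --max-concurrent-workflows 10
```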
How did you run the server? Did you use Caper's shell script to make a server instance?
I started the server using this command in a tmux session:
That command line looks fine if your Google user account has enough permissions for GCE, GCS, the Google Life Sciences API, and so on. Why don't you use a configuration file?
BTW, I strongly recommend using the above shell script, because ENCODE DCC runs thousands of pipelines without any problem on instances created by that script. I'm not sure whether you have a service account with the correct permission settings. Please use the above script.
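As a sketch of the configuration-file route, assuming a Caper version that supports `caper init` (check `caper init --help` if yours differs):

```bash
# Generate a default Caper configuration for the GCP backend.
caper init gcp

# Review the generated settings (GCP project, output bucket, etc.) before restarting the server.
cat ~/.caper/default.conf
```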
I generated the default configuration file. I've attached cromwell.out.txt.
It looks like a Java memory issue?
That's why I recommend the shell script. That script will make an instance with enough memory, and all Caper settings are configured automatically.
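If the crash really is JVM memory pressure, a generic workaround (independent of any Caper-specific flag) is to raise the heap limit for the Cromwell JVM that `caper server` launches; the 16 GB value below is an arbitrary example, and `caper server --help` may show a dedicated heap option on your version:

```bash
# _JAVA_OPTIONS is honored by any JVM started from this shell, including the
# Cromwell process spawned by `caper server`. 16g is an arbitrary example value;
# size it to the memory available on your instance.
export _JAVA_OPTIONS="-Xmx16g"
caper server
```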
Ah, I'm sorry. I misunderstood which script you were referring to. I'll try to create an instance using that script.
Describe the bug
I've submitted a good number (25) of ChIP-seq jobs to Caper, and the jobs begin running, but somehow, halfway through, the Caper server dies suddenly. Examining the logs and grepping for "error", I find that all of the job logs (in `cromwell-workflow-logs/`) contain "Error: pipeline dependencies not found".

I have consulted Issue #172, but I have verified that I activated the `encode-chip-seq-pipeline` environment both when launching the Caper server and when submitting the jobs. I am also experiencing these issues on GCP, not on macOS, so I felt it was prudent to create a new issue for this.

OS/Platform
Caper configuration file
Input JSON file
Here, I'm showing one of the 25 jobs submitted.
Troubleshooting result
Unfortunately, because the Caper server dies, I am unable to use `caper troubleshoot {jobID}` to diagnose. Instead, I've attached the Cromwell log for the job. The end of this log is:
I've also attached `cromwell.out`.

workflow.3d1cb136-9b32-4514-9a33-3262d8303d6f.log
cromwell.out
Thanks!
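For others who hit this with the server down and `caper troubleshoot` unavailable, a minimal sketch of searching the workflow logs directly, using the directory and log file named in this report:

```bash
# Find every workflow log that contains the reported failure message.
grep -rl "pipeline dependencies not found" cromwell-workflow-logs/

# Show surrounding context in the attached workflow's log.
grep -n -B 5 -A 5 "pipeline dependencies not found" \
    cromwell-workflow-logs/workflow.3d1cb136-9b32-4514-9a33-3262d8303d6f.log
```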