Assistance with errors executing on GCP #22

Open
phlatphish opened this issue Nov 13, 2024 · 7 comments

@phlatphish

New to GCP and ElasticBlast, quite experienced with blast and the command line. Your learning resources are very good. I am still on the free tier at GCP -- could that be the origin of the following errors? I am attempting blastp on a set of 12000 proteins against nr. Running a single protein against swissprot went without problems.

my.ini:

[cloud-provider]
gcp-region = us-west1
gcp-zone = us-west1-b

[cluster]
num-nodes = 4
labels = owner=phlatphish
#Uncomment next line if error "Requested disk size 3000.0G is larger than allowed." occurs.
pd-size = 400G

[blast]
program = blastp
db = nr
queries = myproteins.fa
results = gs://elasticblast-phlatphish/try3 
options = -task blastp-fast -evalue 1e-6 -outfmt 6

The errors:

elastic-blast submit --cfg try3.ini
ERROR: The command "gcloud container clusters create elasticblast-phlatphish-0ad9ece41 --no-enable-autoupgrade --project pen --zone us-west1-b --machine-type n1-highmem-64 --num-nodes 1 --scopes compute-rw,storage-rw,cloud-platform,logging-write,monitoring-write --labels cluster-name=elasticblast-phlatphish-0ad9ece41,client-hostname=cs-951428556382-default,project=elastic-blast,billingcode=elastic-blast,creator=phlatphish,created=2024-11-13-19-57-21,owner=phlatphish,program=blastp,db=nr,name=elasticblast-phlatphish-0ad9ece41,results=gs---elasticblast-phlatphish-try3,version=1-3-1" returned with exit code 1
Note: The Kubelet readonly port (10255) is now deprecated. Please update your workloads to use the recommended alternatives. See https://cloud.google.com/kubernetes-engine/docs/how-to/disable-kubelet-readonly-port for ways to check usage and for migration instructions.
Note: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=
        - insufficient project quota to satisfy request: resource "CPUS_ALL_REGIONS": request requires '64.0' and is short '32.0'. project has a quota of '32.0' with '32.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=pen
        - insufficient regional quota to satisfy request: resource "CPUS": request requires '64.0' and is short '40.0'. project has a quota of '24.0' with '24.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=pen. This command is authenticated as [email protected] which is the active account specified by the [core/account] property.


ERROR: cleanup stage failed: kubernetes context is missing for elasticblast-phlatphish-0ad9ece41
ERROR: cleanup stage failed: Cluster elasticblast-phlatphish-0ad9ece41 was not found
ERROR: kubernetes context is missing for elasticblast-phlatphish-0ad9ece41
ERROR: Cluster elasticblast-phlatphish-0ad9ece41 was not found
@christiam
Collaborator

Hi @phlatphish ,
The errors you're seeing have to do with insufficient CPU quotas (the CPUS_ALL_REGIONS and CPUS messages in the output above).
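
If it helps, one way to inspect the two quotas named in the error from the command line (just a sketch; the console link in the error message is the authoritative view) is along these lines:

# Project-wide CPU quota (CPUS_ALL_REGIONS): the request needs 64, the project quota is 32.
$ gcloud compute project-info describe --project pen \
    --flatten="quotas[]" --filter="quotas.metric=CPUS_ALL_REGIONS" \
    --format="table(quotas.metric,quotas.limit,quotas.usage)"

# Regional CPU quota (CPUS) in us-west1: the request needs 64, the regional quota is 24.
$ gcloud compute regions describe us-west1 --project pen \
    --flatten="quotas[]" --filter="quotas.metric=CPUS" \
    --format="table(quotas.metric,quotas.limit,quotas.usage)"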

Also, I noticed the pd-size = 400G configuration setting. This setting would not allow the nr database to fit, as it is currently 546.5 GB:

$ update_blastdb.pl --showall tsv --source gcp | awk -F '\t' '/^nr/ {print $3}' 
546.5033
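
If you do want to search nr, pd-size would need to be comfortably above that. A minimal [cluster] sketch (600G is only an illustrative value that leaves some headroom over the ~547 GB database; it also assumes your regional persistent disk quota allows a disk that large):

[cluster]
num-nodes = 4
labels = owner=phlatphish
pd-size = 600G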

I hope this helps!

@phlatphish
Author

Thank you for getting back. I'll investigate those quotas. I much appreciate this resource and your help.

@phlatphish
Author

Some progress made! I overcame the CPU limits but now received this error:

ERROR: Requested disk size 536.871G is larger than allowed (500.0G) for region us-west2
Please adjust parameter [cluster] pd-size to less than 500.0G, run your request in another region, or
request a disk quota increase (see https://cloud.google.com/compute/quotas)

Which of these solutions do you recommend? I could not find a quick way to determine which regions allow > 500G. When investigating quotas, I could not find one called pd-size.

@christiam
Collaborator

Hi @phlatphish , if you'd like to search nr, you will need to request a disk quota increase. One way to check the quotas for your GCP project is to visit https://console.cloud.google.com/iam-admin/quotas and filter the listing via "Persistent Disk Standard".
For information on how to request a quota increase, please see https://cloud.google.com/docs/quotas/view-manage#requesting_higher_quota
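
As for finding which regions would allow more than 500G from the command line, one option (a sketch, not specific to ElasticBLAST) is to query the standard persistent disk quota, DISKS_TOTAL_GB, for a candidate region; I believe this is the metric behind the "Persistent Disk Standard" filter mentioned above:

$ gcloud compute regions describe us-west1 \
    --flatten="quotas[]" --filter="quotas.metric=DISKS_TOTAL_GB" \
    --format="table(quotas.metric,quotas.limit,quotas.usage)"
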
I hope this helps!

christiam self-assigned this Nov 26, 2024
@phlatphish
Author

My apologies. I still have not gotten this to run.

I get this error regardless of what I put in the config line pd-size = xxx:

ERROR: Requested disk size 3221.225G is larger than allowed (500.0G) for region us-west2
Please adjust parameter [cluster] pd-size to less than 500.0G, run your request in another region, or
request a disk quota increase (see https://cloud.google.com/compute/quotas)

My us-west2 disk quotas look like this:

[screenshot of us-west2 disk quotas]

These are not changeable, and they look like they are big enough.

@christiam
Collaborator

Hi @phlatphish , can you please provide the information in https://blast.ncbi.nlm.nih.gov/doc/elastic-blast/support.html so we can help you better? TIA!

@phlatphish
Author

Got it. Will get that info together.
