delegate_to: "{{ pve_node }}" fails with Ansible version 2.7.6. #1
Comments
Apologies if this is a silly question, but do you have ssh access to the "pve_node"? In my example I used "pve1" so that would be one of your proxmox servers. Appears you're using "root" so you would want to ensure you have ssh access to your "pve_node" server as root (via ssh keys, or specify "-k" to your ansible-playbook command to prompt for an ssh password). Those tasks go out to the Proxmox server and run that "pct" command to get info on the containers. I've had some ideas for how to make this more useful and have been meaning to work on it for a while. I'll try to add some more error handling. Appreciate the feedback, my first GitHub issue 👍 |
Hi Noe,
Thanks for replying so fast!
I’ve just run the playbook with the -k option as suggested, but had no better luck:
root@Ansible /etc/ansible# ansible-playbook -k -i ./hosts.ini create-cloud-storage-controller.yml
SSH password:
PLAY [monchy] *********************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************
ok: [10.20.204.55]
TASK [engonzal.proxmox : Provision ct cloud-storage-controller2] ******************************************************************************************************************
changed: [10.20.204.55 -> localhost]
TASK [engonzal.proxmox : Set vmid var] ********************************************************************************************************************************************
ok: [10.20.204.55]
TASK [engonzal.proxmox : Get CT110 config] ****************************************************************************************************************************************
fatal: [10.20.204.55]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host pve port 22: Connection timed out\r\n", "unreachable": true}
to retry, use: --limit @/etc/ansible/create-cloud-storage-controller.retry
PLAY RECAP ************************************************************************************************************************************************************************
10.20.204.55 : ok=3 changed=1 unreachable=1 failed=0
I do have ssh access to the "pve_node" named pve within the remote Proxmox environment, but do we agree that pve_node expects the server name (the host in the Proxmox Datacenter)? Because when I say I have ssh access to it, it’s via the IP, as in:
ssh [email protected]
Does it make sense to you? Your role works fine for me as long as I comment out the 2 lines mentioned, but I would appreciate finding out what is wrong with these commands on my side. I’ll keep digging…
(By the way, this may be your first GitHub issue, but I’d like to thank you for this great work! It saves me a lot of hard work.)
Christophe
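One plausible reading of the "connect to host pve port 22: Connection timed out" error above (a sketch, not confirmed in the thread): delegate_to: "{{ pve_node }}" makes Ansible open an SSH connection to whatever name pve_node holds, so the bare name "pve" has to be resolvable from the Ansible controller. A minimal inventory mapping, assuming the node's IP is the 10.20.204.55 shown above:

```
# hosts.ini (hypothetical addition): map the node name to its IP so
# delegate_to: "{{ pve_node }}" can reach it over SSH
pve ansible_host=10.20.204.55 ansible_user=root
```

Alternatively, an equivalent Host/HostName entry in ~/.ssh/config would achieve the same thing.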
|
So for example, I have an ansible inventory file with the new container hostname:
# hosts
[testgroup]
new_containter_hostname
Then in the playbook, pve_node is a server that is part of your proxmox cluster. So I have 5 nodes in mine: pve1, pve2, pve3... I'll just use the first node "pve1". I think we're on the same page that it would be part of the same datacenter. Looking at this I guess pve_node and pve_api_host could be the same value, so maybe redundant. pve_hostname is the new container hostname, so should be the same as what's in your inventory file (ie "new_containter_hostname"). So the playbook would look like:
# playbook.yml
---
- hosts: testgroup
  connection: local
  user: root
  vars:
    pve_node: pve1 # proxmox server
    pve_apiuser: ***@***.***
    pve_apipass: myAPIpassword
    pve_api_host: pve1.domain.com
    pve_hostname: "newhostname"
    pve_template: local:vztmpl/debian-9.0-standard_9.5-1_amd64.tar.gz
  roles:
    - engonzal.proxmox
That said, I'll do some more testing and see if I can replicate your issue. Realized that I've made some local changes that I haven't pushed up, so maybe that's why I'm not seeing the same thing.. 🙃 |
I’m not sure I’m following you with the proposed inventory in your example…
# hosts
[testgroup] <- this would be my containers’ group
new_containter_hostname <- this would be the container I wish to create
And then for the playbook:
# playbook.yml
---
- hosts: testgroup <- No, here I would need to address the target pve, not the container (am I correct?)
  connection: local
  user: root
  vars:
So in my case:
# hosts
[testgroup]
new_container_hostname
[pve_nodes]
10.20.204.55 <- the IP for {{ pve_node }} (am I correct?)
And so:
# playbook.yml
---
- hosts: pve_nodes
  connection: local
  user: root
  vars:
    pve_node: pve
    pve_apiuser: root@pam
Do you agree with the above settings?
|
Ahha, that explains your problem maybe. So for my case I wanted the target to be the container, so I could add multiple plays in a playbook (ie first play to build the container, next play to install apache or plex or whatever app). So that might explain the errors you saw. The connection: local says run all the tasks locally (no ssh). But then I use delegate_to for the command: "pct set..." tasks to run only those tasks on the target pve. To break it down:
# each task from tasks/main.yml
- name: Provision ct {{ pve_hostname }}                 < localhost >
- name: Set vmid var                                    < localhost >
- name: Get CT..} config                                < pve_node >
- name: Manually add bind mounts to..                   < pve_node >
- name: Start CT{{ pve_vmid | default(pve_new_vmid) }}  < localhost >
playbook example:
# play 1
- hosts: all
  user: root
  connection: local
  roles:
    - name: engonzal.proxmox
      tags: pve
  post_tasks:
    - pause: minutes=1
      tags: pve
# play 2
- hosts: all
  user: root
  # removed connection: local because we want to go directly to the new container we created
  # connection: local
  roles:
    - name: engonzal.users
    - name: engonzal.package
    - name: engonzal.influxdb
|
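For illustration, one of the delegated tasks described above might look something like this (a hypothetical sketch, not the role's actual tasks/main.yml — the task name and registered variable are illustrative):

```yaml
# Hypothetical sketch: the play runs with connection: local, so most
# tasks execute on the controller, but this one is delegated, i.e.
# executed over SSH on the Proxmox node itself.
- name: Get CT config
  command: "pct config {{ pve_vmid }}"
  register: ct_config
  changed_when: false
  delegate_to: "{{ pve_node }}"  # requires root SSH access to the node
```

This is why the play as a whole succeeds without SSH, yet the delegated tasks time out when the name in pve_node is not reachable from the controller.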
Ok Noe, thanks for the clarification! I’ll change my playbooks accordingly. How can we delete/change my issue?
|
Running a playbook with the role engonzal.proxmox to create a container on a remote Proxmox node fails with a "cannot establish connection" message.
ansible --version
Commenting out line 38 and line 48 in ansible_role_proxmox/tasks/main.yml seems to fix this issue for me.
Maybe this is not a bug but a configuration issue on my side, but I have not yet found where...
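Rather than commenting the delegated tasks out, one workaround (a sketch, assuming the role reads pve_node as a plain variable, as the thread suggests) is to set pve_node to a name or IP the controller can actually SSH to as root:

```yaml
# playbook.yml (hypothetical): point delegate_to at a reachable address
- hosts: testgroup
  connection: local
  user: root
  vars:
    pve_node: 10.20.204.55  # reachable IP instead of the unresolvable name "pve"
  roles:
    - engonzal.proxmox
```

With this, the pct tasks are delegated to an address the controller can reach, and no tasks need to be disabled.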