ERROR while trying to make CERNVM-FS key directories

Hi,
I tried installing CVMFS on my local Galaxy server (release 22.01), which I set up with Ansible. For the installation, I followed the tutorial from the Galaxy Training Network (Reference Data with CVMFS).
While running the ansible-playbook galaxy.yml command, I got the following error:

fatal: [192.168.123.108]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: Unable to look up a name or access an attribute in template string ({{ item.path | dirname }}).\nMake sure your variable name does not contain invalid characters like '-': expected str, bytes or os.PathLike object, not AnsibleUndefined\n\nThe error appears to be in '/home/user/galaxy/roles/galaxyproject.cvmfs/tasks/keys.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Make CernVM-FS key directories\n  ^ here\n"}

It looked like the error was caused by the - in the task name, CernVM-FS. I changed the name to Make CernVMFS key directories, but this did not get rid of the error.

Edit: I looked at the files of a separate Galaxy server I created earlier this year and saw that the keys.yml file there is identical to the one I am using now, and it worked. I was wondering if this error might be caused by a recent update.
I was also wondering if it could be caused by the server having no SSH key. Do I need to create an SSH key on the server itself, or add keys from the external user who accesses it from a remote desktop (this is how it was done on the previously created server)?

Any help would be appreciated!
Thanks in advance!

Hi @cbass

Yes, the ansible_user needs to be associated with at least one SSH key as far as I know.
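For reference, a minimal sketch of one common way to set that up (the user name is a placeholder; the IP is the one from your error message):

# on the control machine: generate a key pair if you don't have one yet
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
# copy the public key into the remote user's authorized_keys on the target
ssh-copy-id -i ~/.ssh/id_ed25519.pub <ansible_user>@192.168.123.108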

@jennaj Thank you for the help! I added an SSH key, but the error didn’t change. Would you have any other input on how to possibly fix this error? Thanks in advance!

This looks like something wrong with the format of the cvmfs_keys variable; can you check that? It should be defined in your group_vars.
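For reference, a sketch of the shape the failing template implies (it loops over items and reads item.path); the path and key text below are only example values:

cvmfs_keys:
  - path: /etc/cvmfs/keys/example.org/example.org.pub
    key: |
      -----BEGIN PUBLIC KEY-----
      ...
      -----END PUBLIC KEY-----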

I’d also suggest updating to the latest version (0.2.14) of the role.

1 Like

@nate I have no cvmfs_keys variable in any file in the group_vars directory, since the tutorial didn't show that this needs to be added, and a previously created Galaxy server also didn't need it for CVMFS to be installed. Could you explain whether I missed a part of the tutorial and/or what needs to be added for it to run properly? I also updated the cvmfs role to the latest version (which didn't change anything). Thanks in advance!

Ok, if you’re using the tutorial exactly as written, you’d be using galaxy_cvmfs_repos_enabled, which sets cvmfs_keys for you. One change I would recommend is to use the value true for that option:

galaxy_cvmfs_repos_enabled: true

We’ll update the training accordingly in the future.

The failing task was rewritten in the latest version of galaxyproject.cvmfs, so the error message should be different. If you're seeing the same error, then it's probably not using the latest version for some reason?
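You can check which version is actually installed (assuming roles are installed into ./roles as in the tutorial):

ansible-galaxy list -p roles | grep cvmfs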

1 Like

I tried setting galaxy_cvmfs_repos_enabled: true and ran the ansible-galaxy install --force -p roles -r requirements.yml command once again, in case it had failed the first time, but nothing changed. I keep getting the exact same error message. I was wondering if you might have any other possible solution for fixing this problem. Thanks in advance!
Edit: I don’t know if this information is helpful, but I am using an Ubuntu 22.04 (Jammy Jellyfish) server. I thought that this version of Ubuntu was supported, but maybe this is causing the error.

Can you also try completely wiping out the CVMFS directories? Run apt purge cvmfs* and remove anything in /etc/cvmfs; sometimes I've had issues where, once it was misconfigured, running the role again wasn't sufficient.
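Something like this (a sketch of the cleanup just described; double-check the paths before deleting anything):

sudo apt purge 'cvmfs*'
sudo rm -rf /etc/cvmfs/*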

This unfortunately also didn’t change anything.
I tried running the sudo apt-get update command in case I missed an important update. This gave the following warning:

W: https://cvmrepo.web.cern.ch/cvmrepo/apt/dists/xenial-prod/Release.gpg: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.

According to this warning, it seems the error might actually be caused by using Ubuntu 22.04, since people on other forums reported coming across this error after updating to 22.04. I am thinking of running the following command (inspired by others' solutions to this problem), but was wondering if it is safe to run.

sudo apt-key export 8AE45CE7 | sudo gpg --dearmour -o /etc/apt/trusted.gpg.d/cernvm.gpg

(off-topic)
huh, did not know apt-key was getting deprecated, that’s unhelpful.

APT-KEY(8)                                                      

NAME
       apt-key - Deprecated APT key management utility
...
DEPRECATION
       Except for using apt-key del in maintainer scripts, the use of apt-key is deprecated. This section shows how to replace existing use of apt-key.

       If your existing use of apt-key add looks like this:

       wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -

       Then you can directly replace this with (though note the recommendation below):

       wget -qO- https://myrepo.example/myrepo.asc | sudo tee /etc/apt/trusted.gpg.d/myrepo.asc

       Make sure to use the "asc" extension for ASCII armored keys and the "gpg" extension for the binary OpenPGP format (also known as "GPG key public ring"). The
       binary OpenPGP format works for all apt versions, while the ASCII armored format works for apt version >= 1.4.

       Recommended: Instead of placing keys into the /etc/apt/trusted.gpg.d directory, you can place them anywhere on your filesystem by using the Signed-By option in
       your sources.list and pointing to the filename of the key. See sources.list(5) for details. Since APT 2.4, /etc/apt/keyrings is provided as the recommended
       location for keys not managed by packages. When using a deb822-style sources.list, and with apt version >= 2.4, the Signed-By option can also be used to include
       the full ASCII armored keyring directly in the sources.list without an additional file.
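Combining that recommendation with the command proposed above, a sketch that puts the key in the recommended keyrings location instead of trusted.gpg.d (the key ID is the one from the earlier post; the filename is arbitrary):

sudo mkdir -p /etc/apt/keyrings
sudo apt-key export 8AE45CE7 | sudo gpg --dearmor -o /etc/apt/keyrings/cernvm.gpg
# then add signed-by=/etc/apt/keyrings/cernvm.gpg to the corresponding
# deb line for https://cvmrepo.web.cern.ch/cvmrepo/apt in your sources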

I guess that'll be fixed in a newer version of Ansible that changes the behaviour around adding keys.
Guess not: add variables. apt_key is deprecated by mbocquet · Pull Request #37 · galaxyproject/ansible-postgresql · GitHub. We're just all going to have to update to write files directly, neat. Love it.


(unrelated) You can export ANSIBLE_STDOUT_CALLBACK=yaml if you want better formatted / easier to read error messages, fyi. I have that in my ~/.bashrc
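e.g.:

# in ~/.bashrc; same effect as stdout_callback = yaml in ansible.cfg
export ANSIBLE_STDOUT_CALLBACK=yaml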


Ok, so I might remember this: I've had something like this due to the way keys are merged in older versions of Ansible. Fixed the syntax for combining two lists by FokkeDijkstra · Pull Request #44 · galaxyproject/ansible-cvmfs · GitHub, which I rediscovered myself: Fix the other branch · galaxyproject/ansible-cvmfs@2646304 · GitHub
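For a sense of the fix (a generic illustration with hypothetical variable names, not the exact diff from that PR): combining two lists where one side may be undefined needs an explicit default, otherwise the template fails with an undefined-variable error like the one above.

# fails if cvmfs_extra_keys is not defined anywhere
cvmfs_keys: "{{ cvmfs_default_keys + cvmfs_extra_keys }}"
# safer: default each side to an empty list before combining
cvmfs_keys: "{{ (cvmfs_default_keys | default([])) + (cvmfs_extra_keys | default([])) }}"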

What version of ansible are you using?

I edited the main.yml file to contain the right syntax for combining lists, but, as expected, this alone didn't change anything. As for the Ansible version, I am using ansible 6.4.0 with ansible-core 2.13.5rc1.

That should be fine, I’m on 2.13.3.

Could you possibly share the complete playbook / group vars / everything that isn’t secret?

I can't find how to attach a yml file, so here are the galaxy playbook (galaxy.yml), the all.yml, and the galaxyservers.yml files:

galaxy.yml

---
- hosts: galaxyservers
  become: true
  become_user: root
  vars_files:
    - group_vars/secret.yml
  pre_tasks:
    - name: Install Dependencies
      package:
        name: ['acl', 'bzip2', 'git', 'make', 'python3-psycopg2', 'tar', 'virtualenv']
  roles:
    - galaxyproject.postgresql
    - role: natefoo.postgresql_objects
      become: true
      become_user: postgres
    - geerlingguy.pip
    - galaxyproject.galaxy
    - role: uchida.miniconda
      become: true
      become_user: "{{ galaxy_user.name }}"
    - galaxyproject.nginx
    - galaxyproject.tusd
    - galaxyproject.cvmfs

all.yml

# CVMFS vars
cvmfs_role: client
galaxy_cvmfs_repos_enabled: config-repo
cvmfs_quota_limit: "{{ 1024 * 5 }}"

galaxyservers.yml

---
# Python 3 support
pip_virtualenv_command: /usr/bin/python3 -m virtualenv # usegalaxy_eu.certbot, usegalaxy_eu.tiaas2, galaxyproject.galaxy
certbot_virtualenv_package_name: python3-virtualenv    # usegalaxy_eu.certbot
pip_package: python3-pip                               # geerlingguy.pip

# PostgreSQL
postgresql_objects_users:
  - name: galaxy
postgresql_objects_databases:
  - name: galaxy
    owner: galaxy
# PostgreSQL Backups
postgresql_backup_dir: /data/backups
postgresql_backup_local_dir: "{{ '~postgres' | expanduser }}/backups"

# Galaxy
galaxy_create_user: true
galaxy_separate_privileges: true
galaxy_manage_paths: true
galaxy_layout: root-dir
galaxy_root: /srv/galaxy
galaxy_user: {name: galaxy, shell: /bin/bash}
galaxy_commit_id: release_22.01
galaxy_force_checkout: true
miniconda_prefix: "{{ galaxy_tool_dependency_dir }}/_conda"
miniconda_version: 4.7.12
miniconda_manage_dependencies: false

galaxy_config:
  galaxy:
    brand: "xxx"
    admin_users: xxx
    database_connection: "postgresql:///galaxy?host=/var/run/postgresql"
    file_path: /data
    check_migrate_tools: false
    tool_data_path: "{{ galaxy_mutable_data_dir }}/tool-data"
    object_store_store_by: uuid
    id_secret: "{{ vault_id_secret }}"
    job_config_file: "{{ galaxy_config_dir }}/job_conf.xml"
    # SQL Performance
    database_engine_option_server_side_cursors: true
    slow_query_log_threshold: 5
    enable_per_request_sql_debugging: true
    # File serving Performance
    nginx_x_accel_redirect_base: /_x_accel_redirect
    # Automation / Ease of Use / User-facing features
    watch_job_rules: 'auto'
    allow_path_paste: true
    enable_quotas: true
    allow_user_deletion: true
    expose_user_name: true
    expose_dataset_path: true
    expose_potentially_sensitive_job_metrics: true
    # NFS workarounds
    retry_job_output_collection: 3
    # Debugging
    cleanup_job: onsuccess
    allow_user_impersonation: true
    # Tool security
    outputs_to_working_directory: true
    # TUS
    tus_upload_store: /data/tus
    # Additional settings
    allow_user_creation: false
    require_login: true
    allow_user_dataset_purge: true
    welcome_url: /static/welcome.html
  uwsgi:
    socket: 127.0.0.1:5000
    buffer-size: 16384
    processes: 1
    threads: 4
    offload-threads: 2
    static-map:
      - /static={{ galaxy_server_dir }}/static
      - /favicon.ico={{ galaxy_server_dir }}/static/favicon.ico
    static-safe: client/galaxy/images
    master: true
    virtualenv: "{{ galaxy_venv_dir }}"
    pythonpath: "{{ galaxy_server_dir }}/lib"
    module: galaxy.webapps.galaxy.buildapp:uwsgi_app()
    thunder-lock: true
    die-on-term: true
    hook-master-start:
      - unix_signal:2 gracefully_kill_them_all
      - unix_signal:15 gracefully_kill_them_all
    py-call-osafterfork: true
    enable-threads: true
    mule:
      - lib/galaxy/main.py
      - lib/galaxy/main.py
    farm: job-handlers:1,2

galaxy_config_templates:
  - src: templates/galaxy/config/job_conf.xml.j2
    dest: "{{ galaxy_config.galaxy.job_config_file }}"

# systemd
galaxy_manage_systemd: yes

# Certbot
certbot_auto_renew_hour: "{{ 23 |random(seed=inventory_hostname)  }}"
certbot_auto_renew_minute: "{{ 59 |random(seed=inventory_hostname)  }}"
certbot_auth_method: --webroot
certbot_install_method: virtualenv
certbot_auto_renew: yes
certbot_auto_renew_user: root
certbot_environment: staging
certbot_well_known_root: /srv/nginx/_well-known_root
certbot_share_key_users:
  - nginx
certbot_post_renewal: |
    systemctl restart nginx || true
certbot_domains:
  - "{{ inventory_hostname }}"
certbot_agree_tos: --agree-tos

# NGINX
nginx_selinux_allow_local_connections: true
nginx_servers:
  - galaxy
nginx_enable_default_server: false
nginx_conf_http:
  client_max_body_size: 1g

# TUS
galaxy_tusd_port: 1080
tusd_instances:
  - name: main
    user: "{{ galaxy_user.name }}"
    group: "galaxy"
    args:
      - "-host=localhost"
      - "-port={{ galaxy_tusd_port }}"
      - "-upload-dir={{ galaxy_config.galaxy.tus_upload_store }}"
      - "-hooks-http=https://{{ inventory_hostname }}/api/upload/hooks"
      - "-hooks-http-forward-headers=X-Api-Key,Cookie"

Other files (I don't know if these might help in figuring out the error):

ansible.cfg

[defaults]
interpreter_python = /usr/bin/python3
inventory = hosts
retry_files_enabled = false
vault_password_file = .vault-password.txt
# Use the YAML callback plugin.
stdout_callback = yaml
# Use the stdout_callback when running ad-hoc commands.
#bin_ansible_callbacks = True
[ssh_connection]
pipelining = true

requirements.yml

- src: galaxyproject.galaxy
  version: 0.9.16
- src: galaxyproject.nginx
  version: 0.7.0
- src: galaxyproject.postgresql
  version: 1.0.3
- src: natefoo.postgresql_objects
  version: 1.1
- src: geerlingguy.pip
  version: 2.0.0
- src: uchida.miniconda
  version: 0.3.0
- src: usegalaxy_eu.certbot
  version: 0.1.5
- src: galaxyproject.tusd
  version: 0.0.1
- src: galaxyproject.cvmfs
  version: 0.2.14

(For anyone else reading as well) this is essentially identical to the current version of git-gat at step-3; basically the only differences are things we added for gravity, plus this:

--- req	2022-10-17 14:26:43.511686236 +0200
+++ requirements.yml	2022-10-17 14:24:26.276917863 +0200
@@ -1,4 +1,4 @@
 - src: galaxyproject.galaxy
-  version: 0.9.16
+  version: 0.10.4
 - src: galaxyproject.nginx
   version: 0.7.0
@@ -13,6 +13,6 @@
 - src: usegalaxy_eu.certbot
   version: 0.1.5
-- src: galaxyproject.tusd
+- name: galaxyproject.tusd
   version: 0.0.1
 - src: galaxyproject.cvmfs
-  version: 0.2.14
+  version: 0.2.13

which looks pretty minor, maybe the cvmfs version though? @cbass can you change that to 0.2.14?

I made the changes to the requirements file and ran the playbook, but this unfortunately also didn't change anything; it keeps giving the same error about invalid characters.

Then I guess we need to specifically debug the error.

You can use the debug module of Ansible to print out information within a playbook. Can you please modify the role and, right before where it fails, print out the relevant variables?

- debug:
    msg: "{{ my_var }}"

That will print out the variable; just write a lot of them and try to figure out which variables look wrong, maybe?
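For example, in keys.yml right before the failing task (assuming cvmfs_keys is what that task loops over):

- debug:
    var: cvmfs_keys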

Thanks everyone for all the help! I got rid of the error by once more wiping out the cvmfs directories with apt purge cvmfs* and removing everything in /etc/cvmfs. I then also ran the ansible-galaxy install -p roles -r requirements.yml --force command and changed the main.yml file to contain the right syntax for combining lists before running the playbook again, and the error was gone! Unfortunately a new error occurred immediately, which I might open another question for and link here.
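For reference, the full sequence that resolved it (as described above, plus the list-syntax edit from the PR linked earlier):

sudo apt purge 'cvmfs*'
sudo rm -rf /etc/cvmfs/*
ansible-galaxy install -p roles -r requirements.yml --force
ansible-playbook galaxy.yml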
For anyone else who ran into this error and ran into another error regarding the setup of cvmfs after solving it, check out the next question: ERROR while checking CERNVM-FS for setup.

1 Like