Trinity error /cvmfs: No such file or directory

I have a small set of paired data that I ran Trinity on, but it returned the following error:
/ocean/projects/mcb140028p/xcgalaxy/main/staging/51323425/.cvmfsexec/mountrepo: line 70: cd: /ocean/projects/mcb140028p/xcgalaxy/main/staging/51323425/.cvmfsexec/dist/cvmfs/cvmfs-config.cern.ch/etc/cvmfs: No such file or directory

You can find my history here:


Hi @ehaggard

This happened when using this version of the tool at UseGalaxy.org, correct?

  • Trinity de novo assembly of RNA-Seq data (Galaxy Version 2.15.1+galaxy0)

We’ve seen this a few times, and are still working to correct the problem.

For now, please try using one of these versions instead.

  • Trinity de novo assembly of RNA-Seq data (Galaxy Version 2.9.1+galaxy2)
  • Trinity de novo assembly of RNA-Seq data (Galaxy Version 2.9.1+galaxy1)

How to switch tool versions → Changing the tool version

Thanks for reporting the problem, and our apologies for the current inconvenience!

Thanks Jenna,

I should note that I ran a test dataset on the same version of the tool (Trinity de novo assembly of RNA-Seq data, Galaxy Version 2.15.1+galaxy0)
and it completed without error. The history for that run is here:

https://usegalaxy.org/datasets/f9cad7b01a4721358c089e8d379f2256/details

It was a smaller dataset, but no problems. I’ve switched the version to Galaxy Version 2.9.1+galaxy1 and am re-running my personal data. If it works I’ll let you know.

Best,


Interesting. We did do some cluster changes recently, and this tool was involved. Maybe this is a resource issue.

Alternatives include the SPAdes suite of tools. Personally, I find Shovill with the SPAdes option very useful: when a job fails, the error logs detail exactly what the tool was doing for data reduction before assembly. That information can help you tune the inputs before running the assembly, reducing the direct load on the tool – and it can lead to a better overall scientific result, e.g. by providing an estimated assembly size, pre-removing redundant reads (using your own criteria), and the like.

Hope one of these works out for you! And we’ll look into the resource limits … maybe more can be allocated. Later on though – after the GCC conference at least, so late July or early August at the soonest. If you or others want to ask for an update then, we’ll know more.

Quick update: I switched versions of the tool and the run failed almost immediately with what looks to be the same error; history below:

https://usegalaxy.org/datasets/f9cad7b01a4721359a497438a2746129/error

I’ll try the next version, but I don’t have high hopes.

Update: it also failed rather quickly with Galaxy Version 2.9.1+galaxy2.

Hi @ehaggard

Could you share a link back to your history? Or, you can copy/paste the entire contents of the Job Information view (the “i” icon within one of the error datasets; that is different from the bug icon, which only reports some of the info). But I’d really rather review, and potentially test, with your exact data – that will be faster for diagnosing the root issue.

How to do either is included here: Troubleshooting errors

Right. I got so caught up in sharing the error that I didn’t make my actual history available. This was a collection that should have contained only paired trimmed data, but it looks like there may be some unpaired data in there. You can view the information to verify, but I don’t want to waste your time. I am going to upload paired trimmed data from my hard drive and try again, and I will update you. My history is in the link below:

https://usegalaxy.org/u/haggard/h/t-pileolus-pilot

Hi Jennifer,

My latest run failed, history is linked below. There are two failed runs, but the data I’m really interested in is 19 and 20, which are an assembly of dataset 16 (TP_Paired_Trimmed). This was on version 2.15.1+galaxy0. I’m trying 2.9.1+galaxy1, but I’m guessing the same error is going to show up since it has shown up in every run I’ve done.

https://usegalaxy.org/u/haggard/h/subset

If you want to review or test my data, dataset 16 would be the set to use.

Best,

Hi @ehaggard

Thanks – and I set up some tests in a copy of your history. These will take some time to complete. https://usegalaxy.org/u/jen-galaxyproject/h/copy-of-subset-httpshelpgalaxyprojectorgttrinity-error-cvmfs-no-such-file-or-directory104038

If all else fails – you could try at UseGalaxy.eu or UseGalaxy.org.au instead for now. If this is an actual cluster issue with this tool at ORG (and, for the most current version, that seemed true already), it will be about a week at least until fixed, possibly even two.


The most current version in your run presented a different error (not a cluster problem – the job simply ran out of resources). Improving read quality before assembling should help.

So far:

  1. Read pairs are intact – see Fastq info reports
  2. Reads need 5’ trimming – see FastQC reports
  3. Overrepresented sequences present – see FastQC reports (not identified; BLAT intron hits versus mammals)
  4. Sequence duplication – a bit high in all samples. Subsampling may help, or try SPAdes/Shovill
  5. Genome Guided mode is another alternative – map reads, then use Advanced settings
  6. The reads as-is will assemble with Shovill/SPAdes. See the full stderr log for some details about the read profiles. Trinity failed on those, and is still executing on the post-QA reads I created for the test. Everything is labeled, and you can get a copy now, then again later once everything is done.
  7. Note: Trimmomatic with the defaults might not have been enough trimming in my test. The 5’ ends still have ~15 bases that could be clipped. You’ll need to review. Once you have good enough FastQC results, you can try Trinity again. How to prep reads: https://training.galaxyproject.org/training-material/search2?query=quality
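
The pairing check (point 1) and the 5’ clipping (point 7) above can be sketched in plain Python. This is a minimal illustration of what the Galaxy QC steps are doing conceptually, not the actual Fastq info or Trimmomatic code; the function names and the clip length are assumptions for the example:

```python
# Sketch: verify two FASTQ files are still properly paired (read IDs match
# record-by-record) and clip a fixed number of bases from the 5' end of each
# read, similar in spirit to Trimmomatic's HEADCROP step. Illustrative only.
from itertools import zip_longest

def fastq_records(handle):
    """Yield (header, sequence, plus, quality) tuples from a FASTQ handle."""
    while True:
        header = handle.readline().rstrip()
        if not header:
            return
        seq = handle.readline().rstrip()
        plus = handle.readline().rstrip()
        qual = handle.readline().rstrip()
        yield header, seq, plus, qual

def read_id(header):
    """Strip the '@', any description, and the /1 or /2 mate suffix."""
    return header[1:].split()[0].split("/")[0]

def check_pairing(r1_handle, r2_handle):
    """True if both files have the same read IDs, in the same order."""
    for rec1, rec2 in zip_longest(fastq_records(r1_handle),
                                  fastq_records(r2_handle)):
        if rec1 is None or rec2 is None:      # one file ran out early
            return False
        if read_id(rec1[0]) != read_id(rec2[0]):
            return False
    return True

def headcrop(records, n):
    """Clip n bases (and their quality values) from the 5' end of each read."""
    for header, seq, plus, qual in records:
        yield header, seq[n:], plus, qual[n:]
```

For real data, a dedicated tool (Trimmomatic, fastp, seqkit) is the right choice; this sketch just makes the two checks concrete.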