Great, thanks for sharing the history @SSL!!
The job is still running and I see some clues in the FastQC reports for that sample.
Notice the rate of unique reads remaining if deduplicated? This sample is under 20%, while the other samples are hovering around 60%. This sample also has an unknown sequence flagged as overrepresented, and that is likely driving the reported GC content issue. What that sequence is could be explored, although letting this job run will likely filter it out of the mapping result. Once you have the mapping results, you can explore those and decide whether trimming is needed (these reads do have adapter sequence remaining, which, again, mapping tools can sometimes handle, but mapping runs faster and more predictably when it is removed; maybe all samples would benefit, or maybe it doesn’t matter that much).
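If you’re curious what that “unique reads if deduplicated” number actually measures, here is a minimal Python sketch of the idea. This is not FastQC’s exact algorithm (FastQC samples a subset of reads and truncates long sequences before counting); the file name `sample.fastq.gz` is a placeholder:

```python
# Rough illustration of "percent unique if deduplicated":
# the fraction of reads that would survive exact-sequence dedup.
import gzip

def percent_unique(fastq_path, max_reads=100_000):
    """Percent of the first max_reads reads with a unique sequence."""
    opener = gzip.open if fastq_path.endswith(".gz") else open
    seen = set()
    total = 0
    with opener(fastq_path, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:              # line 2 of each 4-line FASTQ record is the sequence
                seen.add(line.strip())
                total += 1
                if total >= max_reads:
                    break
    return 100.0 * len(seen) / total if total else 0.0

# Under 20% unique (like the flagged sample) points to heavy duplication
# or a dominant overrepresented sequence; ~60% matches the other samples.
print(f"{percent_unique('sample.fastq.gz'):.1f}% unique")
```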
I would let this job run. Meanwhile, you can explore the QC reports if curious – the Help on the FastQC tool form has links to our tutorials that explain how to review them, plus links to the original author’s guidance. You may also be interested in this topic → Quality Control Start Here! multQC issue and guidance?. The shared workflow in that topic would be appropriate for your reads from what I can tell: it runs the FastQC steps, applies trimming, then runs FastQC again (to make sure QA did what you expected!), and puts all of that into a nice visual summary. It may help your later samples process more smoothly.
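The workflow itself runs inside Galaxy, but for intuition, here is a minimal command-line sketch of the same loop. It assumes Cutadapt as the trimmer, the common Illumina adapter prefix, and placeholder file names; your workflow’s actual tools and settings may differ:

```python
# Sketch of the QC loop: FastQC, trim, FastQC again, one MultiQC summary.
# Assumes fastqc, cutadapt, and multiqc are on PATH; names are placeholders.
import os
import subprocess

reads = "sample.fastq.gz"          # hypothetical input file
adapter = "AGATCGGAAGAGC"          # common Illumina adapter prefix (assumption)
for d in ("qc_raw", "qc_trimmed", "report"):
    os.makedirs(d, exist_ok=True)

# 1) FastQC on the raw reads
subprocess.run(["fastqc", reads, "-o", "qc_raw"], check=True)

# 2) Remove remaining adapter sequence with Cutadapt
subprocess.run(
    ["cutadapt", "-a", adapter, "-o", "trimmed.fastq.gz", reads],
    check=True,
)

# 3) FastQC again, to confirm trimming did what you expected
subprocess.run(["fastqc", "trimmed.fastq.gz", "-o", "qc_trimmed"], check=True)

# 4) One visual summary across the before/after reports
subprocess.run(["multiqc", "qc_raw", "qc_trimmed", "-o", "report"], check=True)
```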
The alternative is to kill the lingering jobs, filter the collection to “remove failed jobs”, and drop the outlier sample from your analysis. This is a judgement call: you may be able to rescue the sample with some QC, but not while it is already running in this particular job, and I wouldn’t suggest applying QC “rules” to some samples selectively and not to others, or you may introduce bias. Set the criteria, apply them to all samples, then launch the downstream steps.
Hope this provides some insight into what is going on, and some options!