Hi @SSL
What happens if you completely refresh the view? Do this by clicking on the “Galaxy” HOME icon in the very top left of the masthead bar.
If that is not enough, then you probably still have a job pending completion. Each input fastq file (or pair of files) gets its own RNA-STAR job, and each of those jobs produces a log of the mapping statistics. The log in your screenshot is from one of the completed jobs. Since you have 24 jobs, there will eventually be 24 of these logs, one per fastq input or pair of inputs.
To view the status of each job, you can click on the collection folder 532 in the history. This will drill down into the listing of the elements inside that collection (in this case, the BAM results from the RNA-STAR jobs).
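If you ever want to check job states outside the browser, here is a minimal sketch using the BioBlend Python client (the server URL, API key, and history ID are placeholders you would swap in for your own account):

```python
# Minimal sketch with BioBlend (pip install bioblend).
# The URL, API key, and history ID below are placeholders, not real values.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

# List the datasets in a history and count them by state
# (e.g. "queued", "running", "ok", "error").
history_id = "YOUR_HISTORY_ID"
contents = gi.histories.show_history(history_id, contents=True)

states = {}
for item in contents:
    state = item.get("state", "unknown")
    states[state] = states.get(state, 0) + 1

print(states)
```

This is optional, of course; the collection view in the history shows the same state information.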
Maybe there is something special about that sample? You may be able to see that in the FastQC results, but if the issue is about the scientific content (where the reads are mapping, and how they are mapping, rather than the read quality itself), the basic QA stats may not tell you much. Letting the job complete is the only way to get the logs from RNA-STAR itself. Multimapping/non-specific reads can lead to extended runtimes, but so can other cases, some of which you can see called out in the statistics in your screenshot.
Since this has been 10 days (of queuing time plus execution time), it might mean there is a problem, but my immediate guess is that these last jobs were simply queued last and are now finally running. How “fast” a batch of jobs runs depends on the tool/parameter choices, how many jobs were queued, how many other jobs you have running at the same time (in any history) during that time frame, and, to a smaller degree, which server you are working at and how busy its clusters happen to be. The clusters behind Galaxy work the same as any other cluster: resources are managed to be fair, so some of your jobs run, some of other people’s run, then more of yours, and so on until everything is done. Later on, workflows can speed this up, since all jobs in a pipeline are queued at the same time and there are fewer delays waiting for the next tool to be launched.
From here, you are welcome to share the history and we can help to check the timestamps and confirm this is all running as expected. That would also allow us to follow up with the server administrators if needed, though I don’t think there is a problem so far. In the meantime, I would strongly suggest leaving the work queued and avoiding reruns, since restarting just puts those jobs back at the end of the cluster queues again!
I’ll watch for your reply! 
XRef