I am trying to understand the proper workflow for updating my IT so I can test it on test.galaxyproject.org. The short story: I update and push my Docker image (with a new tag) for the IT to Quay, then do a “shed_update”:
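Concretely, the steps I have been following look roughly like this (a sketch only — the image name, tag, and shed target are placeholders, and the script just echoes each command as a dry run rather than executing it):

```shell
# Dry-run sketch of my release loop. IMAGE and VERSION are hypothetical;
# "planemo shed_update" is what I mean by "shed_update" above.
VERSION="0.6"
IMAGE="quay.io/example/my-interactive-tool"

for cmd in \
    "docker build -t ${IMAGE}:${VERSION} ." \
    "docker push ${IMAGE}:${VERSION}" \
    "planemo shed_update --shed_target testtoolshed --check_diff ."
do
    echo "+ ${cmd}"    # print each step instead of running it
done
```

If this ordering or the shed target is wrong, that may well be my problem.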
Run actions/configure-pages@v5
Error: Get Pages site failed. Please verify that the repository has Pages enabled and configured to build using GitHub Actions, or consider exploring the `enablement` parameter for this action. Error: Not Found - https://docs.github.com/rest/pages/pages#get-a-apiname-pages-site
Error: HttpError: Not Found - https://docs.github.com/rest/pages/pages#get-a-apiname-pages-site
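In case it helps anyone hitting the same error: the message itself points at the `enablement` input of `actions/configure-pages`. A minimal sketch of that step, assuming the repository should build Pages via GitHub Actions (the step name is mine, and the workflow token may also need sufficient permissions to enable Pages):

```yaml
# Hedged sketch: ask the action to enable the Pages site itself if it is
# not configured yet, instead of failing with "Get Pages site failed".
- name: Setup Pages
  uses: actions/configure-pages@v5
  with:
    enablement: true
```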
Thanks for your reply and for pinging the Dev chat! But I’m confused about why .gitignore would be relevant, given that it hasn’t been touched in seven years, if I’m interpreting the repo correctly.
I simply forked galaxyproject/usegalaxy-tools (usegalaxy.* common tools) and am trying to follow a workflow another dev shared some time back. I’d welcome a pointer to whatever official documentation exists for updating an IT, to see if I’m doing something obviously wrong.
Given that the IT updates happen just once a week (on Saturdays, so I’ve read), I’ve managed to lose several weeks of testing.
The .gitignore comment was about your pcstudio_shed repository, but now I think that was wrong too!
Do I see a duplicated block in the YAML? What happens if you reduce it down to just one?
My guess: the duplicated block caused the Pages build at the primary repo to be triggered twice. Since both runs were for the exact same content, the error was thrown.
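To illustrate what I mean by a duplicated block (a hypothetical sketch, not your actual file — I’m assuming a standard `actions/deploy-pages` step here):

```yaml
# Hypothetical "before": the same deploy step listed twice in one job,
# so the same Pages content is pushed twice per run.
- name: Deploy to GitHub Pages
  uses: actions/deploy-pages@v4
- name: Deploy to GitHub Pages
  uses: actions/deploy-pages@v4

# "After": keep a single deploy step.
- name: Deploy to GitHub Pages
  uses: actions/deploy-pages@v4
```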
Earlier this morning I was able to wrangle in one of our ToolShed experts, and they let me know that the process appears to have worked this morning around 8am EST. However, I am not able to access the tool in the ToolShed via the UI – though this may just be cosmetic, the same as the tool panel items. What do you think?
I sincerely appreciate your effort on this! Yes, I did notice that my latest “0.6” was installed and I was able to test. (I have no idea why it was installed successfully this time and not before.) Unfortunately, my test showed that the new functionality in my IT fails. This workflow of waiting a week for a new install (if lucky) from the ToolShed in order to test is simply not practical for my timeline. I plan to revert to testing locally, i.e., running a Galaxy server on Ubuntu and testing my IT there, even though in the past I encountered issues related to using the “get” function from galaxy_ie_helpers when testing locally. That function is critical for what I need to do.
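For the local route, the main switch I know of lives in `config/galaxy.yml`. A sketch of the relevant settings, with option names taken from the Galaxy sample config (the map path and URL are example values for a default localhost setup):

```yaml
# Hedged sketch of config/galaxy.yml for local InteractiveTool testing;
# values below are examples, not the only valid choices.
galaxy:
  interactivetools_enable: true
  interactivetools_map: database/interactivetools_map.sqlite
  galaxy_infrastructure_url: http://localhost:8080
```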
Another week, another failed attempt to update my IT’s version for testing. I wanted to push version 0.7 to be used by https://test.galaxyproject.org etc., but apparently I still do not understand the proper workflow.
Your ToolShed repos look good to me, and the others with the prior metadata problems also seem to be repaired! The series of PRs this morning likely cleaned things up a bit!
Fwiw: I think your workflow looks good! The issues you were having were likely on our side. The Test server and ToolShed used to have a banner message that said “Test is for Breaking”. And, while all of this is now much more standardized, the sentiment is still a bit true. Later, once you are publishing to the Main ToolShed, all will be much smoother.
The prior issues were likely just an accumulation of outdated metadata. During the release cycle, making changes is a bit tricky since so much is frozen during the first integration test cycle. Now that the release has progressed further, I would expect updates to resume in the Test environments at the normal pace; plus, the ToolShed backend itself had some stability updates that should help.
Lots of moving parts! Please give this a review and let me know your thoughts!
Thanks for following up! It does indeed seem to be running 0.7 now – both Test and usegalaxy. Related: am I allowed to run my tool from only one, not both simultaneously? It seemed to be stuck at “Waiting for InteractiveTool result view(s) to become available” on the latter for a very long time, until I stopped the one running on Test.
Guess I’ll see how my next attempted version update goes… may happen later this week.