I don’t know about you, but for me the upgrade was a disaster. Upgrading from v4.1.13, after following the instructions carefully, installing Ruby 3.3.5 and libvips, enabling corepack, providing the secrets, and running the migrations in two steps, I ended up with a white page and no obvious logs to look at. As I was tired and frustrated, I let it go. But if there are known ways to debug and fix that, I’m all ears.
You’ll want to upgrade to v4.2.12 first, then to v4.3.0, as stated in the documentation. If you want, I can jump on a call to help you figure it out. Most likely you’re hitting the change in cookie signing algorithms: from v4.2.12 to v4.3.0 that is a rolling upgrade, but going straight from v4.1.13 to v4.3.0 it incurs an outage.
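If it helps, here is a rough sketch of the intermediate hop for a source (non-Docker) install; the paths and service names are assumptions from my own setup, and you should still follow the release notes for each version, since the exact steps (two-stage migrations, corepack, etc.) vary:

# Rough sketch of the v4.1.13 -> v4.2.12 hop; paths and unit names assumed
cd /home/mastodon/live
git fetch --tags
git checkout v4.2.12
bundle install
yarn install --frozen-lockfile
# pre-deployment migrations can run while the old version is still serving
SKIP_POST_DEPLOYMENT_MIGRATIONS=true RAILS_ENV=production bundle exec rails db:migrate
RAILS_ENV=production bundle exec rails assets:precompile
systemctl restart mastodon-web mastodon-sidekiq mastodon-streaming
# then the post-deployment migrations
RAILS_ENV=production bundle exec rails db:migrate
# repeat the same dance for v4.3.0 once you're stable on v4.2.12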
So I need to downgrade to 4.2.12 and start over? Strange that this isn’t mentioned in the release notes, given that 4.2.13 had already been released! Thank you for the helpful notice!
But as mentioned, if you want, I can jump on a call and help; just know that my time is very limited and unpredictable at the moment due to my heart condition.
Okay, in which case upgrading should be fairly straightforward. libvips is also optional (it needs to be enabled via an environment variable).
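If I recall correctly, that variable goes in .env.production and looks like the line below; treat the name as an assumption and double-check the v4.3.0 release notes before relying on it:

# Assumed variable name from the v4.3.0 release notes; verify before using
MASTODON_USE_LIBVIPS=true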
I’d guess a white page is something to do with asset compilation failing; we’ve seen a few cases where rails assets:precompile has failed for people. The advice that’s worked so far has been:
Either remove or rename public/packs, and rm -r tmp/cache/webpacker.
Then re-run RAILS_ENV=production bundle exec rails assets:precompile; echo $? (the echo $? prints the exit code, so 0 means the precompile succeeded). The full sequence is sketched below.
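Putting that together, a rough sketch, assuming a source install living in /home/mastodon/live (adjust the path to your setup):

cd /home/mastodon/live
mv public/packs public/packs.bak    # or remove it outright instead of renaming
rm -r tmp/cache/webpacker
RAILS_ENV=production bundle exec rails assets:precompile; echo $?
# echo $? printing 0 means the precompile exited successfully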
If your server is low on memory, then you might need to shut the services down completely, since asset compilation takes a significant amount of memory at the moment. The project is in the process of moving to a new asset pipeline that’s significantly more performant, but this is a complex change (work has been ongoing for about a year now).
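For example, a minimal sketch assuming the systemd unit names from the standard non-Docker install (yours may differ):

# Stop everything to free memory, precompile, then bring it back up
systemctl stop mastodon-web mastodon-sidekiq mastodon-streaming
RAILS_ENV=production bundle exec rails assets:precompile; echo $?
systemctl start mastodon-web mastodon-sidekiq mastodon-streaming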
In the end I slept on it, went through the upgrade process again, repeating the same commands from my shell history, and it worked. The only thing I did differently was to upgrade with the services down.
Yeah, that can be wise; I suspect it was maybe related to the cookie rollover thing somehow. I usually do upgrades by going server node to server node, stopping services as I cut over to the new version (like blue/green deployments).
E.g., I have /home/mastodon/versions/4.2.12 symlinked as /home/mastodon/live. I start building out /home/mastodon/versions/4.3.0, and then, when I stop the services, I swap the /home/mastodon/live symlink to point at /home/mastodon/versions/4.3.0.
This means the existing services can keep running and restarting as usual whilst I install dependencies for the new version and compile assets, which typically reduces the amount of potential downtime. A rough sketch of the cut-over is below.
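As a sketch, assuming that directory layout and the systemd unit names from the standard install (adjust both to your own setup):

# Build the new version alongside the live one, while the old one keeps serving
cd /home/mastodon/versions/4.3.0
bundle install && yarn install    # corepack enable first if the new version needs it
RAILS_ENV=production bundle exec rails assets:precompile

# Cut over: stop services, swap the symlink, run migrations, start again
systemctl stop mastodon-web mastodon-sidekiq mastodon-streaming
ln -sfn /home/mastodon/versions/4.3.0 /home/mastodon/live
cd /home/mastodon/live
RAILS_ENV=production bundle exec rails db:migrate
systemctl start mastodon-web mastodon-sidekiq mastodon-streaming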