Send me the script and I'll take a look at it.
I had set it up to run on my webserver since I didn't have access to the TNX hosting; it was a quick hack to get it all working. A zip file with all the source code is at
https://meisanerd.com/pb.zip
dl_pb.php is what does the actual downloading. Give it a PB URL and it will attempt to grab the full-res image file behind it. I found it on another forum, posted by someone else doing the same thing; I think I had to make a couple of tweaks, as PB changed a few things when they first did the ransom stuff to try to prevent scraping.
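For anyone curious how that kind of downloader works, here's a rough Python sketch of the general idea (this is not the actual dl_pb.php, and the URL pattern and headers are my assumptions about what PB's hotlink block keyed off at the time):

```python
# Illustrative sketch only -- not the real dl_pb.php. One trick that
# circulated was to request the direct image URL while sending a
# browser-like User-Agent and a photobucket.com Referer, since the
# ransom/hotlink block appeared to key off those headers.
import re
import urllib.request

def full_res_url(pb_url):
    """Guess the direct-image URL from a PB thumbnail URL.

    Assumes thumbnails lived under a th_ filename prefix; strip it
    to get the original. This pattern is a guess and may be stale.
    """
    return re.sub(r"/th_([^/]+)$", r"/\1", pb_url)

def download(pb_url, dest):
    req = urllib.request.Request(
        full_res_url(pb_url),
        headers={
            "User-Agent": "Mozilla/5.0",           # look like a browser
            "Referer": "http://photobucket.com/",  # satisfy the hotlink check
        },
    )
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        out.write(resp.read())
```

The actual script may have done something smarter; this just shows the shape of it.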
photobucket.php is what the guys would use. It was a basic web interface that asked for a TNX URL and a password (to make sure it was only being used by us, rather than the general internet). It executes photobucket.py in the background.
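The gist of that front end, sketched in Python rather than PHP (the password value and the background-exec details here are made-up placeholders, not what the real photobucket.php used):

```python
# Hypothetical sketch of the photobucket.php flow: check a shared
# password, then kick off the scraper in the background. Form handling
# is omitted; SECRET and the command are placeholders.
import hmac
import subprocess

SECRET = "changeme"  # shared with the volunteers, not a real credential

def handle_request(url, password, cmd=("python3", "photobucket.py")):
    # Constant-time compare so the gate doesn't leak the password.
    if not hmac.compare_digest(password, SECRET):
        return "forbidden"
    # Fire-and-forget, like PHP exec'ing the script in the background.
    subprocess.Popen([*cmd, url])
    return "started"
```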
photobucket.py would visit the TNX URL (automatically going through the pages if there were more than one for a topic) and grab all the PB links. It would then execute dl_pb.php to attempt to grab each image file.
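The scraping side looked roughly like this (again a sketch, not the real photobucket.py; the `?page=N` pagination scheme and the link regex are guesses at the markup, not TNX's actual URLs):

```python
# Rough sketch of the photobucket.py idea: fetch each page of a topic,
# pull out every Photobucket link, and hand them off for downloading.
import re
import urllib.request

PB_LINK = re.compile(r'https?://[a-z0-9.]*photobucket\.com/[^\s"\'<>]+', re.I)

def extract_pb_links(html):
    """Pull every Photobucket URL out of a page of forum HTML."""
    return sorted(set(PB_LINK.findall(html)))

def scrape_topic(base_url, pages):
    """Walk a topic's pages (assumed ?page=N scheme) and collect links."""
    links = set()
    for n in range(1, pages + 1):
        with urllib.request.urlopen(f"{base_url}?page={n}") as resp:
            links.update(extract_pb_links(resp.read().decode("utf-8", "replace")))
    return sorted(links)
```

Each collected link would then get fed to the downloader.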
photobucket.sh was for me to manually test the script on the server side.
Ideally this would be set up on the server somewhere, at which point you could in theory just do a DB query for all PB links, run those through dl_pb.php, and do automated replacements. I didn't have that level of access to the server, though, so the script was set up so that an army of volunteers could go through the topics and paste the links in; they would then share the resulting zip file with the admins via Dropbox or Google Drive or something. The admins could use the file to know which posts the images related to, and would manually do the upload/change-link process.

Since you guys seem to be coder types, I'm guessing the automated system wouldn't be out of your reach, but if you want to just use it the way TNX did, I can whitelist XN links and see about getting the automated paging working. I'm not sure my current script would even work on TNX now that they've redone all the URLs anyway.
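That fully automated version, if you do have DB access, could be as simple as something like this (the table/column names, the query, and the rehost step are all invented for illustration; you'd swap in your actual schema and your download-and-reupload code):

```python
# Sketch of the automated server-side approach: find PB links in each
# post body and rewrite them to their rehosted equivalents. The rehost
# callback stands in for the download/upload step (e.g. via dl_pb.php).
import re

PB_LINK = re.compile(r'https?://[a-z0-9.]*photobucket\.com/[^\s"\'<>]+', re.I)

def rewrite_post(body, rehost):
    """Replace every PB link in a post with its rehosted equivalent."""
    return PB_LINK.sub(lambda m: rehost(m.group(0)), body)

# Hypothetical usage against the forum DB (schema names are made up):
# for post_id, body in db.execute(
#         "SELECT id, body FROM posts WHERE body LIKE '%photobucket.com%'"):
#     db.execute("UPDATE posts SET body = ? WHERE id = ?",
#                (rewrite_post(body, rehost), post_id))
```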