What would be best practice for running an endless PHP script as a systemd service so that it doesn't get stuck?
Previously the MySQL connection kept getting lost, but I fixed that with a reconnect function.
Now I suspect that something is going wrong with cURL after the script has been running for a couple of days. No errors, so I have no idea what the issue is.
Therefore, I believe it might be related to memory/GC, because PHP wasn't originally built for long-running processes.
I'd like to solve this without rewriting the entire codebase from scratch. Ideas?
As you said, PHP really isn't built for this kind of task. That being said, I actually have something similar running at the day job. I fixed it by creating a lockfile in the PHP script; if the script fails, the file gets 'unlocked' automatically. The first line of the script checks whether the lockfile is locked: if it is, the script quits; otherwise it starts processing. Using cron I scheduled the script to start every minute, so it only actually runs when the lockfile isn't locked. No high-tech solution in any sense, but it's been running for years without issue now.
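A minimal sketch of that lockfile-guard pattern, using a plain marker file. The path and function names are illustrative, not taken from the original post:

```php
<?php
// Lockfile guard: quit immediately if a previous run is still going.
// Note: if the script crashes before the final unlink, the marker file
// is left behind -- the edge case discussed in the next reply.

function acquireLock(string $lockPath): bool {
    if (file_exists($lockPath)) {
        return false;                 // previous run still active (or crashed)
    }
    file_put_contents($lockPath, (string) time());  // mark the run as started
    return true;
}

function releaseLock(string $lockPath): void {
    @unlink($lockPath);               // last line of the worker script
}

$lockPath = sys_get_temp_dir() . '/worker.lock';   // example path

if (!acquireLock($lockPath)) {
    exit(0);   // another instance is running; quit quietly
}

// ... the actual processing goes here ...

releaseLock($lockPath);
```

With a crontab entry like `* * * * * php /path/to/worker.php`, a crashed or finished run is replaced at most a minute later.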
Just wanna add to what @sjoerdhuininga said. That's absolutely the way to go about it, provided your use case can afford the script halting abruptly and the next run continuing cleanly.
I do want to help you with an edge case you'll quickly encounter.
The lockfile here is simply a file created anywhere on disk (it could even be empty), but let's write a timestamp into it as its content. If your script fails midway, before it can delete this file at the end of execution, your script will never run again because the lockfile still exists. With the timestamp, you can read the file's contents, compare the stored timestamp with the current time, and check whether the difference exceeds X minutes (X being the maximum time a run should take); if it does, update the lockfile and just let it run.
Hope that made sense.
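The staleness check above could be sketched like this. The 10-minute threshold is just an example of the "X minutes" worst-case runtime, and the path is illustrative:

```php
<?php
// Stale-lockfile check: a leftover lockfile older than the worst-case
// runtime is assumed to come from a crashed run, so we take over.

function lockIsStale(string $lockPath, int $maxRuntimeSeconds): bool {
    $stamp = (int) file_get_contents($lockPath);   // timestamp written at start
    return (time() - $stamp) > $maxRuntimeSeconds;
}

$lockPath   = sys_get_temp_dir() . '/worker.lock';
$maxRuntime = 10 * 60;   // assumed upper bound on one run, in seconds

if (file_exists($lockPath) && !lockIsStale($lockPath, $maxRuntime)) {
    exit(0);   // a recent run is (probably) still going; quit
}

// Either no lockfile, or it is older than the worst-case runtime:
// refresh the timestamp and run anyway.
file_put_contents($lockPath, (string) time());

// ... processing ...

unlink($lockPath);
```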
To add to that: you can just use the flock() function. It will release the lock automatically if the script crashes.
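A sketch of the flock() variant, which sidesteps the stale-lockfile edge case entirely because the OS releases the lock when the process exits for any reason (the path is again just an example):

```php
<?php
// flock()-based guard: the lock dies with the process, so a crash can
// never leave a stale lock behind.

$lockPath = sys_get_temp_dir() . '/worker.lock';

$fp = fopen($lockPath, 'c');   // 'c' creates the file if it doesn't exist
if ($fp === false) {
    exit(1);                   // couldn't even open the lockfile
}

// LOCK_NB makes the call non-blocking: if another instance holds the
// exclusive lock, flock() returns false immediately and we quit.
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit(0);                   // previous run still active
}

// ... processing ...

// Released automatically on exit or crash; explicit release is just tidy.
flock($fp, LOCK_UN);
fclose($fp);
```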