Attaching Logs

Ping endpoints accept HTTP HEAD, GET, and POST request methods.

When using HTTP POST, you can include an arbitrary payload in the request body. If the request body looks like a UTF-8 string, the service will log the first 10 kilobytes (10 000 bytes) of the request body, so you can inspect it later.
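As a minimal sketch of the size limit described above (the example.com URL is a placeholder, and the 10 000-byte cutoff happens server-side; here we only build a payload and measure it against that limit):

payload=$(seq 1 5000 | head -c 10000)   # truncate client-side to the logged size
echo "payload size: ${#payload} bytes"  # prints: payload size: 10000 bytes
# Replace the placeholder URL with your check's ping URL before running:
# curl -fsS -m 10 --retry 5 --data-raw "$payload" https://example.com/your-uuid-here

Truncating client-side with head -c avoids sending data that the server would discard anyway.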

Logging Command Output

In this example, we run certbot renew, capture its output, and submit the captured output to the check's ping URL:


url=https://example.com/your-uuid-here   # replace with your check's ping URL

m=$(/usr/bin/certbot renew 2>&1)
curl -fsS -m 10 --retry 5 --data-raw "$m" "$url"

In Combination with the /fail Endpoint

We can extend the previous example and signal either success or failure depending on the exit code:



url=https://example.com/your-uuid-here   # replace with your check's ping URL

m=$(/usr/bin/certbot renew 2>&1)
if [ $? -ne 0 ]; then url=$url/fail; fi
curl -fsS -m 10 --retry 5 --data-raw "$m" "$url"

The above script can be packaged into a single line. The one-line version sacrifices some readability, but it can be used directly in a crontab, without a wrapper script. Note that $? inside the command substitution below still refers to certbot's exit status: a plain variable assignment passes through the exit status of its command substitution.

m=$(/usr/bin/certbot renew 2>&1); curl -fsS -m 10 --retry 5 --data-raw "$m" "https://example.com/your-uuid-here$([ $? -ne 0 ] && echo -n /fail)"

(Replace the placeholder URL with your check's ping URL.)

Using Runitor

Runitor is a third-party utility that runs the supplied command, captures its output, and reports the result to the ping endpoint. It also measures the execution time and retries HTTP requests on transient errors. Best of all, the syntax is simple and clean:

runitor -uuid your-uuid-here -- /usr/bin/certbot renew

Handling More Than 10KB of Logs

While the service can store a small amount of logs in a pinch, it is not specifically designed for that. If you run into the issue of logs getting cut off, consider the following options:

  • See if the logs can be made less verbose. For example, if you have a batch job that outputs a line of text per item processed, perhaps it can output a short summary with the totals instead.
  • If the important content is usually at the end, submit the last 10KB instead of the first. Here is an example that submits the last 10KB of dmesg output:

url=https://example.com/your-uuid-here   # replace with your check's ping URL

m=$(dmesg | tail --bytes=10000)
curl -fsS -m 10 --retry 5 --data-raw "$m" "$url"
  • Finally, if capturing the entire log output is critical for your use case, consider using a dedicated log aggregation service.
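The first option above can be sketched as follows. Instead of one log line per item, the job counts successes and failures and submits only the totals (process_item here is a made-up stand-in for your real per-item work, and the URL is a placeholder):

# Stub worker: pretend items that are multiples of 3 fail.
process_item() { [ $(( $1 % 3 )) -ne 0 ]; }

ok=0; failed=0
for i in $(seq 1 10); do
  if process_item "$i"; then ok=$((ok+1)); else failed=$((failed+1)); fi
done

summary="processed: $((ok+failed)) ok: $ok failed: $failed"
echo "$summary"   # prints: processed: 10 ok: 7 failed: 3
# Submit the short summary instead of the full per-item log:
# curl -fsS -m 10 --retry 5 --data-raw "$summary" https://example.com/your-uuid-here

A three-number summary always fits well under the 10 KB limit, no matter how many items the batch processes.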