Maurice Renck - Development (team) https://maurice-renck.de/en en Fri, 01 Aug 2025 10:45:00 +0200 Using Signal to create notes https://maurice-renck.de/en/learn/built-with-kirby/notizen-mit-signal https://maurice-renck.de/en/@/page/psssna9zltgcwepz Fri, 01 Aug 2025 10:45:00 +0200 Maurice Renck kirby-cms In this article, we will write a script that allows us to create a note in Kirby using the Signal messenger.

This is an experiment. Everything I describe in this article should be taken with a grain of salt and understood as a proof of concept.

In this post I already explained how we can create and publish a new note using Kirby and Raycast. We'll now use this code as a basis and take it to the next level.

The goal of our experiment: We want to send ourselves a message in Signal, and this message should be posted online as a note on the Kirby blog. To do this, we'll use the POSSE endpoint described above.

Signal CLI

In order to capture messages and do something with them, we need some kind of access. The Signal CLI project helps us here, as it thankfully offers a wrapper that provides us with a REST API.

Once launched, we can register this tool as a new device on our account. Hence the warning again: This is an experiment! If configured incorrectly, this setup could, in the worst case, give potential attackers access to all your messages!

We'll run Signal CLI in a Docker container. To achieve this, we'll create a docker-compose.yaml:

services:
  signal:
    image: bbernhard/signal-cli-rest-api:latest
    container_name: signal-api
    environment:
      - MODE=native #supported modes: json-rpc, native, normal
    ports:
      - "8080:8080" #map docker port 8080 to host port 8080.
    volumes:
      - "./signal-cli-config:/home/.local/share/signal-cli"

We'll use the Signal-CLI-Rest-Api Docker image as our basis. We don't have to do much besides configure a few basic settings. The volume is important; this is where metadata is stored, such as the account name. It's critical to keep this directory intact so we don't have to constantly add a new device.

We can test the setup with the following command:

docker-compose up

We can then access the following URL in the browser:

http://localhost:8080/v1/qrcodelink?device_name=signal-api

We now need to add our running CLI instance to the trusted devices. In the Signal app, we go to our avatar and then "Paired Devices" → "Add New Device." Once we've scanned the QR code in the browser, we're ready to go.

Signal CLI can now access all messages. But we can't do anything with that yet. We still need a link between Signal CLI and our POSSE endpoint. To do this, we'll write a small TypeScript script.

The Script

To do this, we create a directory called bot and add a package.json there so that all our dependencies are ready:

{
  "name": "signal-bot",
  "version": "1.0.0",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "axios": "^1.6.0",
    "dotenv": "^16.3.1"
  },
  "devDependencies": {
    "typescript": "^5.4.0",
    "@types/node": "^22.0.0"
  }
}

As you can see, the scope is limited. In the next step, we create a tsconfig.json file.

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true,
    "types": ["node"]
  },
  "include": ["index.ts"]
}

First, we'll read the most important configuration values. We'll use environment variables.

import axios from "axios";

const SIGNAL_API = process.env.SIGNAL_API!;
const SIGNAL_NUMBER = process.env.SIGNAL_NUMBER!;
const TARGET_API = process.env.TARGET_API!;
const KIRBY_SECRET = process.env.KIRBY_SECRET!;

We need the API endpoint provided by our Docker container, our own phone number, the Kirby POSSE endpoint, and the Kirby secret.
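The `!` non-null assertions above silence the compiler but let an unset variable slip through as `undefined`. A small alternative, sketched here under the same four variable names (the helper `requireEnv` is my own suggestion, not part of the original script), is to fail fast at startup:

```typescript
// Hypothetical helper: read an environment variable or fail fast at startup,
// instead of silencing the compiler with the `!` assertion.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Usage would mirror the constants above, e.g.:
// const SIGNAL_API = requireEnv("SIGNAL_API");
```

This way a misconfigured container crashes immediately with a clear message instead of failing later on its first API call.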

async function poll() {
  try {
    const res = await axios.get(
      `${SIGNAL_API}/v1/receive/${encodeURIComponent(SIGNAL_NUMBER)}`,
    );
    const messages = res.data;

    for (const msg of messages) {
      const sender = msg.envelope?.source;
      const destination =
        msg.envelope?.syncMessage?.sentMessage?.destinationNumber;
      const text =
        msg.envelope?.dataMessage?.message ||
        msg.envelope?.syncMessage?.sentMessage?.message;

      if (sender === SIGNAL_NUMBER && destination === SIGNAL_NUMBER && text) {
        console.log(`Forwarding message: ${text}`);
      }
    }
  } catch (error) {
    if (error instanceof Error) {
      console.error("Error polling messages:", error.message);
    } else {
      console.error("Unknown error:", error);
    }
  }
}

We start by calling the Signal endpoint. It returns a list of unread messages, which we then iterate through. We need to retrieve three pieces of data:

  • The sender of the message
  • The recipient of the message
  • The body of the message

There's a small peculiarity with the body of the message. Since we'll be sending the text to ourselves (note to self), this text will immediately be marked as read, and we'll no longer find the text under envelope?.dataMessage?.message as it would normally, but under envelope?.syncMessage?.sentMessage?.message. To be absolutely sure, we query both values. If the first is empty, we fall back to the sent data.

Since we respond to every message, and thus every message in every chat would create a note in Kirby, we first need to filter the matching messages:

if (sender === SIGNAL_NUMBER && destination === SIGNAL_NUMBER && text)

This query ensures that we only respond to messages sent from our own number to our own number that contain a text. This gives us some protection.

Now we fill in the if block. We want to send the API request to Kirby in it:

let skipUrl = false;
let ext = false;
let autopublish = true;

// Find and remove hashtags, set flags
let parsedText = text
  .replace(/#skipUrl\b/gi, (_match: any) => {
    skipUrl = true;
    return "";
  })
  .replace(/#ext\b/gi, (_match: any) => {
    ext = true;
    return "";
  })
  .replace(/#draft\b/gi, (_match: any) => {
    autopublish = false;
    return "";
  })
  .trim();

await axios.post(
  TARGET_API,
  {
    posttext: parsedText,
    externalPost: ext,
    skipUrl,
    autopublish,
  },
  {
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${KIRBY_SECRET}`,
    },
  },
);

First, we set a few default values. We defined them in the POSSE post mentioned above. They allow us to control Kirby.

  • skipUrl determines whether a link to the note should be attached when posting to Mastodon.
  • ext determines whether to post to Mastodon or Bluesky at all.
  • autopublish determines whether the note should be published immediately or saved as a draft.

Since we don't have suitable buttons in the Signal chat, we use hashtags: #skipUrl, #ext, and #draft.

We search for these hashtags in the text, ignoring case, and set the corresponding variables. Finally, we remove the hashtags from the text so they no longer appear in the note.

I have autopublishing enabled and deactivate it specifically with the hashtag #draft. This can, of course, be reversed if necessary.

Finally, we send the API request to Kirby. As you can see, it contains all the necessary fields and our secret in the header.

We're almost done. We just need to call our method regularly to check for new messages:

setInterval(poll, 10_000);
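Note that setInterval fires every ten seconds regardless of whether the previous poll has finished, so a slow Signal API response could lead to overlapping requests. A sketch of an alternative (not from the original script) is to chain setTimeout calls so the next poll is only scheduled after the current one completes:

```typescript
// Sketch: schedule the next poll only after the current one has finished,
// so a slow response never causes overlapping requests.
const POLL_INTERVAL_MS = 10_000;

async function pollLoop(poll: () => Promise<void>): Promise<void> {
  await poll();
  setTimeout(() => pollLoop(poll), POLL_INTERVAL_MS);
}

// pollLoop(poll); // instead of setInterval(poll, 10_000)
```

For this experiment the fixed interval is perfectly fine; the chained variant only matters if the API or network becomes slow.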

Docker Compose

Before we finish, we need to adjust our docker-compose.yaml. We want to run our bot together with Signal CLI. To do this, our bot first needs a Dockerfile:

FROM node:22-alpine

WORKDIR /app
COPY . .
RUN npm install
RUN npm run build

CMD ["node", "dist/index.js"]

We'll use the node-alpine image as a base, install all the necessary packages, build our script, and run it.

There's certainly room to optimize and save space here, but it's fine for our experiment.

Now we need to extend our docker-compose.yaml. Below the existing signal code, we'll add:

services:
  signal:
    # see code above
  bot:
    build: ./bot
    container_name: signal-bot
    environment:
      - SIGNAL_API=http://signal:8080
      - SIGNAL_NUMBER=+491234567890
      - TARGET_API=https://my-blog.tld/posse
      - KIRBY_SECRET=abc-def-ghi
    depends_on:
      - signal
    restart: unless-stopped

Here we set our environment variables. We use the container name for the Signal endpoint. We also need to specify our phone number with the country code and, of course, our Kirby POSSE endpoint and secret.

Our bot waits for the Signal CLI to be available before booting.

To build our bot, we need to call this command whenever we make code changes:

docker compose up --build

After that, simply calling docker-compose is enough:

docker-compose up

If the containers should run in the background, we can simply add -d:

docker-compose up -d

Done!

Now we can send a message to ourselves. Our script picks up the message, sends it to Kirby, and publishes the note. And because we've enabled external posting, the IndieConnector plugin forwards it to Mastodon.

Now we can post to our notes from our phone, Twitter-style, and share them as we please.

Finally, it should be noted that every message we send to ourselves will be published. Anyone using Signal's Note to Self feature for other purposes should extend the logic a bit and add another hashtag that triggers posting. This way, #posse could start the process while all other messages are ignored, which would also help avoid accidental notes.
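A minimal sketch of that opt-in idea (the trigger hashtag and helper names are my own suggestion, not part of the script above):

```typescript
// Hypothetical opt-in filter: only messages carrying #posse are forwarded.
const TRIGGER = /#posse\b/i;

function shouldForward(text: string): boolean {
  return TRIGGER.test(text);
}

// Remove the trigger hashtag before posting, as with the other hashtags.
function stripTrigger(text: string): string {
  return text.replace(TRIGGER, "").trim();
}
```

Inside the poll loop, `shouldForward(text)` would join the existing sender/destination check, and `stripTrigger` would run before the other hashtag replacements.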

It should be mentioned again that this is an experiment intended to demonstrate what we can do with our blogs. Anyone who wants to run the script in this format should be aware of the security risks associated with running Signal CLI, especially if the API endpoint is made publicly available.

Nevertheless: Have fun trying it out!

Using Raycast to post to Kirby and Mastodon https://maurice-renck.de/en/learn/built-with-kirby/raycast-kirby-posse https://maurice-renck.de/en/@/page/Qa2bgqLrSVXBNZks Tue, 10 Jun 2025 09:00:00 +0200 Maurice Renck kirby-cms, indieweb In this guide, we will develop a Raycast plugin that allows us to create a note that will be published not only on our page, but also on Mastodon and/or Bluesky.

With just one shortcut, we will always have access to the form in Raycast. It will be sufficient to enter text and send it to create a note or bookmark in Kirby-CMS, and distribute them from Kirby to other platforms (POSSE).

POSSE

In this case, POSSE is not a farce (which is what the German word Posse means), but stands for: Publish (on your) Own Site, Syndicate Elsewhere. The idea is to always publish on your own site first and distribute from there.

On the Mac, we will use Raycast. Raycast lets us quickly execute commands and assign shortcuts and aliases to them.
I want to use the keyboard shortcut SUPER+N on my Mac to open this window and create a new note:

Here I can enter a short text, a URL when I'd like to create a bookmark, and an optional page title. Finally, I can also set whether the post should be published immediately, whether it should also be posted to Mastodon, and whether a URL should be attached to the Mastodon post.

What do we need?

  • A Kirby Route plugin that provides a route to create a new note
  • A Raycast Plugin that allows for quick input and calls the Kirby route
  • The IndieConnector plugin, which handles the POSSE part.

Basics

On my website, I distinguish between two types of notes:

  1. Simple text notes
  2. Bookmarks

Both use different templates and behave differently on the page.

To automatically post to Mastodon, I use the IndieConnector plugin. The installation is done in our example via Composer. In the main directory of Kirby, you run:

composer require mauricerenck/indieconnector

This installs the IndieConnector plugin, so we can now configure it. To do so, we open the file site/config/config.php and make the following changes:

'mauricerenck.indieConnector' => [
    'secret' => 'my-secret',
    'sqlitePath' => './content/.db/',
    'stats' => [
        'enabled' => true,
    ],

    'post' => [
        'prefereLanguage' => 'en',
        'allowedTemplates' => ['bookmark', 'note'],
        'textfields' => ['text'],
    ],

    'mastodon' => [
        'enabled' => true,
        'instance-url' => 'https://example.com',
        'bearer' => 'my_bearer',
    ],
],

First, we set a secret that is used in various places within the plugin to secure routes and webhooks.

We want to utilize the panel statistics and, if necessary, a few other functions that require a database. Therefore, we need to configure a path where the database can be stored.

Next, we configure the general settings to be able to post to Mastodon and other services. I set the preferred language to English because I write in both German and English, and although German is the default language in Kirby, I post in English.
As described above, I have two templates for my notes, but only these two templates should trigger a post to Mastodon. Therefore, I set allowedTemplates to these two template names. Finally, I tell the plugin in which field my text is stored. In our example, text.

Let's quickly configure Mastodon. To do this, we set the instance and an API token, which can be created at https://example.com/settings/applications. The new app requires at least the following scopes:

  • read
  • write:media
  • write:statuses

The generated token should be stored somewhere safe.

Starting now, the IndieConnector plugin will create a Mastodon post whenever we publish a new note in Kirby.

Raycast Plugin

Raycast extensions are written in TypeScript and use React for UI components. The extension we'll write mixes both: we need a form where we can enter the text, and obviously we must send this data to Kirby.

I won't explain here how to create a new extension; that's covered in the Raycast documentation and is relatively easy. I assume a dummy plugin already exists. I'm also publishing my code on GitHub, so you can use it.

Once we start the development with npm run dev, the new plugin appears in Raycast and can be tested.

We’ll begin with the simple part, the form. We’ll create a new command if it hasn’t already been done, and output the form:

export default function Command() {
  return (
    <Form
      actions={
        <ActionPanel>
          <Action.SubmitForm onSubmit={handleSubmit} />
        </ActionPanel>
      }
    >
      <Form.TextArea id="posttext" title="Post Text" placeholder="Enter your text" enableMarkdown={true} />
      <Form.TextField id="url" title="Bookmark" placeholder="https://" value={url} onChange={setUrl} />
      <Form.TextField id="title" title="Title" placeholder="Title" />
      <Form.Separator />
      <Form.Checkbox id="autopublish" title="Autopublish" label="Autopublish" defaultValue={preferences.autopublish} />
      <Form.Checkbox id="skipUrl" title="Skip URL" label="Skip URL" defaultValue={preferences.skipurl} />
      <Form.Checkbox id="externalPost" title="External Post" label="External Post" defaultValue={preferences.posse} />
    </Form>
  );
}

This creates a Form like the one in the screenshot. I’ll explain where the defaultValues come from shortly. In the form, you can see that handleSubmit() is called when submitting. This is where the actual API call takes place:

  async function handleSubmit(values: Values) {
    try {
      const baseUrlNoTrailingSlash = preferences.baseurl.replace(/\/$/, "");
      const response = await fetch(`${baseUrlNoTrailingSlash}/posse`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${preferences.secret}`,
        },
        body: JSON.stringify(values),
      });

      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const responseData = (await response.json()) as ApiResponse;
      await Clipboard.copy(responseData.url);

      const status = responseData.status == "draft" ? "Posted as draft" : "Published your post";

      showToast({ title: status, message: "The URL has been copied to your clipboard" });
      await delay(1000);
      await closeMainWindow();
    } catch (error) {
      console.error(error);
      showToast({ style: Toast.Style.Failure, title: "Error", message: "Failed to submit form" });
    }
  }

We call the API with fetch, accessing two values that can later be set in the extension settings: the base URL and a secret. This way we avoid hardcoding private data in the source code.

We are making a POST call and directly inserting the values we receive from the form.

If something goes wrong, we end up in the catch block and show an error message below the form.

If everything runs smoothly, we read the response from the API, which contains a URL. We immediately copy it to the clipboard, inform the user, and close the window after one second.

I’ve added a small bonus that occurs when calling the extension in the form of a hook.

  const [url, setUrl] = useState("");

  // Read clipboard when the form is opened
  useEffect(() => {
    async function fetchClipboard() {
      const text = await Clipboard.readText();

      if (text) {
        if (text.startsWith("http")) {
          setUrl(text);
        }
      }
    }

    fetchClipboard();
  }, []);

As soon as the form appears, I read the clipboard and check if it contains a link. If it does, I set the local state url. A look at the form's code shows that this URL is then set as the value for the bookmark field. Because I often copy a URL in the browser to share it directly, this small refinement saves me a few keyboard shortcuts.
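The startsWith("http") check above is deliberately loose; it would also accept clipboard text like "http interview notes". If you want to be stricter, a sketch using the standard URL constructor (an assumption on my part, not part of the original extension) could look like this:

```typescript
// Hedged refinement: accept only strings that actually parse as http(s) URLs,
// instead of anything that merely starts with "http".
function looksLikeUrl(text: string): boolean {
  try {
    const parsed = new URL(text);
    return parsed.protocol === "http:" || parsed.protocol === "https:";
  } catch {
    // new URL() throws on anything that is not a valid absolute URL.
    return false;
  }
}
```

The hook would then call `looksLikeUrl(text)` instead of `text.startsWith("http")`.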

React Hooks

React hooks run at specific points in a component's lifecycle. In this case, the effect hook runs when the form is first rendered and executes the contained code.

React States

A React state is always defined with a variable and a setter; from then on, the variable is only updated through the setter. Wherever the variable is used, a change to its value automatically triggers a re-render.

With that, we could already get to work, but we're still missing the values from the settings, i.e. everything stored under preferences. Here is the corresponding type for clarity:

interface Preferences {
  secret: string;
  baseurl: string;
  autopublish: boolean;
  posse: boolean;
  skipurl: boolean;
}

Before we do anything else in the command, we get the settings:

  const preferences = getPreferenceValues<Preferences>();
  const markdown = "You need to provide a secret in the extension preferences.";
  if (!preferences.secret) {
    return (
      <Detail
        markdown={markdown}
        actions={
          <ActionPanel>
            <Action title="Open Extension Preferences" onAction={openExtensionPreferences} />
          </ActionPanel>
        }
      />
    );
  }

Settings

Settings can be stored on two levels; we'll use both:

  1. Global for all Commands
  2. For a specific Command

The preferences are defined in the package.json.

baseurl and secret will be stored globally. We may add more API calls later and will need them there as well. All other settings are only needed in the form and are therefore stored per command.

At the highest level of the package.json file, we add a new entry and define the values:

  "preferences": [
    {
      "name": "secret",
      "title": "Secret key",
      "description": "Enter your API token",
      "type": "password",
      "required": true
    },
    {
      "name": "baseurl",
      "title": "Base URL",
      "description": "Enter your API endpoint",
      "type": "textfield",
      "required": true
    }
  ],

When we activate the extension for the first time, we must enter values for these global settings. This ensures they are set when we use the command.

Now we define settings that apply to our command and are not useful for possible other commands:

"commands": [
    {
      "name": "new-note",
      "title": "New Note",
      "description": "Creates a new note",
      "mode": "view",
      "preferences": [
        {
          "name": "autopublish",
          "title": "Autopublish",
          "description": "Automatically publish the note",
          "type": "checkbox",
          "default": false
        },
        {
          "name": "skipurl",
          "title": "Skip URL",
          "description": "Skip URL",
          "type": "checkbox",
          "default": true
        },
        {
          "name": "posse",
          "title": "POSSE",
          "description": "Post to other platforms",
          "type": "checkbox",
          "default": true
        }
      ]
    }
],

This time we're not quite as strict. All settings have a default value, so they don't have to be configured. We use these settings for the three checkboxes in the form; for example, I don't have to click every time to automatically publish my post.

We already access the values with defaultValue={preferences.posse} in the form.

Once you stop npm run dev, the plugin remains available in Raycast like any other plugin.

That means we’re ready to go. However, we still need the receiving side in the form of a Kirby route.

The Kirby Route

We have several ways to set up the Kirby route:

  1. Directly in the config.php
  2. As a plugin

Since the code is identical, I’m leaving this decision up to you. I have a plugin that collects these special routes so that my config.php stays clean. However, the code is the same in both cases.

'routes' => [
    [
        'pattern' => 'posse',
        'method' => 'POST',
        'action' => function () {
            // FOLLOWING CODE
        }
    ]
]

The route is available at /posse and accepts POST requests. Before we do anything, we check whether the token is correct. We retrieve the data sent with the POST request using Kirby's Request methods and check whether an Authorization header with our token is present:

$request = kirby()->request();
$requestData = $request->data();

$authHeader = $request->header('Authorization');
$token = $authHeader ? explode(' ', $authHeader)[1] : null;

if ($token !== option('mauricerenck.posse.token', '')) {
    return new Response('Unauthorized', 'text/plain', 401);
}

In the config.php file, we now need to set our token. If the token from the request does not match, we abort immediately and return a corresponding response. To be on the safe side, we set the token's default value to an empty string. If the token matches, the script continues.

To create a new note, we first need a page to do so on. In my case, there is the notes page, which is further divided by years. We attempt to retrieve this page, and if it is not found, we immediately stop.

$year = date('Y');
$parent = kirby()->page('notes/' . $year);

if (is_null($parent)) {
    return new Response('Not Found', 'text/plain', 404);
}

Now we can be sure that all requirements are met.

In the next step, we will take the submitted values and set necessary variables, also catching any potentially missing values:

$template = 'note';
$autoPublish = isset($requestData['autopublish']) && filter_var($requestData['autopublish'], FILTER_VALIDATE_BOOLEAN);

$newData = [
    'title' => empty($requestData['title']) ? 'Bookmark ' . date('Y-m-d') : trim($requestData['title']),
    'text' => !empty($requestData['posttext']) ? trim($requestData['posttext']) : '',
    'icSkipUrl' => isset($requestData['skipUrl']) && filter_var($requestData['skipUrl'], FILTER_VALIDATE_BOOLEAN),
    'enableExternalPosting' => isset($requestData['externalPost']) && filter_var($requestData['externalPost'], FILTER_VALIDATE_BOOLEAN)
];

In my example, we distinguish between two templates: note for a simple text note and bookmark for a bookmark. First, we set the default template, note.

We receive all data as a string, including Boolean values. With $autoPublish = isset($requestData['autopublish']) && filter_var($requestData['autopublish'], FILTER_VALIDATE_BOOLEAN); we can safely convert the value into a true Boolean variable. We do this for all suitable values.

Our Kirby page needs a title. If no title is entered in Raycast, we will use a generated title.

The actual text will usually be present, but we still ensure that at least an empty text is provided.

We also set two values of the IndieConnector, namely whether the URL should be attached to the Mastodon post and whether we want to post to Mastodon (or Bluesky) at all.

Then we check if a valid URL was submitted. If so, I differentiate here between a plain text note and a bookmark. This is reflected in the title, the link, and the template.

$url = trim($requestData['url'] ?? '');

if (!empty($url) && V::url($url)) {
    $newData['title'] = 'Link: ' . $newData['title'];
    $newData['link'] = $url;
    $template = 'bookmark';
}

Before we create the page, we need to avoid duplicates. If we create a note titled test and then do this a second time, Kirby will throw an error and refuse to create the second page, because it would overwrite the first one. We need to handle this.

$slug = Str::slug($newData['title']);
$unusedSlug = false;
while ($unusedSlug === false) {
    $unusedSlug = is_null($parent->childrenAndDrafts()->find($slug));
    if (!$unusedSlug) {
        $slug = $slug . '-' . uniqid();
    }
}

We generate the slug for the new page, which will correspond to the folder name. Then we loop through a list of all child pages and check if a page with that slug already exists. If it does, we add an ID to our slug. In the second loop iteration, we should no longer find any page, and we’ll be on the safe side. If, unexpectedly, the new slug also exists, the loop continues until no page is found.
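The same dedup idea, sketched in TypeScript purely for illustration (the Set stands in for Kirby's childrenAndDrafts() lookup, and the suffix mimics PHP's uniqid()):

```typescript
// Illustration of the slug loop: append a unique suffix until the slug is free.
function uniqueSlug(slug: string, taken: Set<string>): string {
  let candidate = slug;
  while (taken.has(candidate)) {
    // uniqid()-style suffix: timestamp plus a short random part.
    candidate = `${slug}-${Date.now().toString(36)}${Math.random().toString(36).slice(2, 6)}`;
  }
  return candidate;
}
```

In practice the loop almost always terminates after one extra iteration, since a timestamp-plus-random suffix is very unlikely to collide twice.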

Now we can finally create the new page. To do this, we need the appropriate permissions which we obtain using impersonate():

kirby()->impersonate('kirby');
$newPage = Page::create([
    'parent'  => $parent,
    'slug'     => $slug,
    'template' => $template,
    'content' => $newData
]);

Initially, new pages are always drafts. If we specified in Raycast that the page should be published immediately, we do that now:

if ($autoPublish === true) {
    $newPage->changeStatus('listed');
}

We are almost finished. We want to tell Raycast the result:

$response = [
    'url' => $newPage->url(),
    'status' => $autoPublish ? 'published' : 'draft',
];

return new Response(json_encode($response), 'application/json', 201);

In Raycast, this response shows a success or error message. And the URL is copied to the clipboard so we can open the note.

Refinements

We can now quickly create new notes and publish them directly as Mastodon or Bluesky posts, and this with a manageable codebase.

Those who wish can now continue to refine the flow. We could, for example, retrieve answers from Bluesky back into the notes. I will soon publish a post about this, as the IndieConnector can do so.

You can find the complete Kirby route code here:

Kirby POSSE Route

You can find the Raycast plugin source code here:

GitHub - mauricerenck/raycast-posse: Create a new Kirby page directly from Raycast

Have fun posting!

I decided against officially publishing both plugins because customization can be very individual on the Kirby side. However, if there is a need, please write me a comment!

Quicktip: Custom Panel CSS https://maurice-renck.de/en/learn/built-with-kirby/quicktip-custom-panel-css https://maurice-renck.de/en/@/page/l0klxANMizi613wg Wed, 19 Feb 2025 13:30:00 +0100 Maurice Renck kirby-cms

With a simple trick, I hide fields that cannot be translated in the Kirby panel.

In the Kirby Panel, I use numerous fields that don’t need to be translated or shouldn’t be. These include fields like categories or tags, as well as certain settings that apply equally to all languages.

My main language is German, where I set the values. When I switch to English, I can’t edit these fields, and the values come from the German page:

Since these fields aren’t relevant for translation, I often wondered why they even needed to be visible.

Unfortunately, hiding non-translatable fields in the Kirby Panel isn’t possible, so I had to find another solution.

Custom Panel CSS

If I can’t hide the fields via configuration, I can at least do it with CSS. Fortunately, it’s possible to load a custom CSS file into the Panel and override styles.

This is useful, for example, if you want to brand the Panel for a client — or, in this case, hide fields.

First, the CSS file needs to be loaded. This is done in the config.php file:

'panel' => [
    'css' => '/assets/css/panel.css'
]

Once the file is loaded into the Panel, we can start tweaking things.

Inspecting the elements reveals that each field is wrapped in several div elements. Unfortunately, the information that a field is disabled is only written to the tree at a deeper level:

To hide the fields properly, we need to target .k-column, so that not only the field itself disappears, but also the space it occupies. If we only hide the field, whitespace equal to the field’s height remains.

However, we can only determine at a lower level whether a field is disabled. This information is stored in the data-disabled="true" attribute.

Thanks to modern CSS features and the glorious has() selector, we can indirectly target the parent element of the disabled field:

div:has(> [data-disabled='true']) {
    display: none;
}

Hurray! It's that easy nowadays. This selector matches div elements that have a direct child with the data-disabled="true" attribute and hides them.

The result looks like this:

Now, only the relevant fields remain visible — everything else is hidden. It’s not 100% perfect because the remaining fields don’t automatically take up the freed-up space, but that’s a minor issue. The more important part is that only relevant fields are displayed.

With this approach, I can make further adjustments if needed, whether I notice something new or want to tweak the Panel’s design.

Have fun experimenting!

Obsidian Kirby Sync https://maurice-renck.de/en/learn/built-with-kirby/obsidian-kirby-sync https://maurice-renck.de/en/@/page/A9d1cEyaKP9XaIqV Thu, 30 Jan 2025 10:00:00 +0100 Maurice Renck kirby-cms

After more than ten years with Kirby as a CMS for blogging, I finally have a workflow that I really like.

Kirby is a great CMS to run a blog. Data is stored in text files and texts can be written in Markdown. The panel - Kirby's admin interface - can be customized down to the last detail, and each page can have its own set of fields.

That makes it possible to customize your own blog to your needs and design things to make blogging especially easy.

But as much as I like the interface and have built cool features and plugins for it, there was one big catch with the whole setup all these years: I don't write my texts directly in the CMS, but in a Markdown editor.

That was once VS Code, then iA Writer, then Ulysses, back to iA Writer, and in between often Obsidian. They all shared one problem: as soon as a text was in Kirby, the Kirby version and the version in the editor diverged.

Who doesn't know this: a text is online, and only when you look at the live page do you notice an error. A link isn't set correctly, or there's a typo somewhere. So I quickly fix the error in the panel so the change goes live immediately.
At first, while motivation was still high, I also corrected these errors in my document in the Markdown editor. But honestly, at some point I no longer felt like it, kept putting it off, and eventually stopped altogether.
So the texts on the website exist in a different form than in the editor.

That wouldn't be a problem if I didn't also like to have the texts in the editor, which is also a kind of database for me.

I am a big fan of the Zettelkasten.

I also think it's a good idea to always have texts as files locally instead of just online.

The first problem is therefore the synchronicity between the two worlds.

The second problem is the state of both documents.
In addition to the text, there is plenty of metadata, such as tags, a date, and the current status, i.e. whether a document is a draft, unlisted, or published. Some editors have solutions for this.

The Markdown editors

I used VS Code to write Markdown at first, but eventually switched away from it. VS Code is a code editor, after all, and it lacks many features I expect from a text editor. They could perhaps be retrofitted, but conversely, they don't really belong in a code editor anyway. So I quickly switched back to other apps.

iA Writer

iA Writer is one of the best editors out there. It's minimally designed and still offers all the features you could want when writing. Besides the pretty (and minimal) interface, I especially liked the grammar helpers. I've been using iA Writer since the first version and this feature was one of my most used. I can have different word types displayed, and that way quickly find filler words and unnecessary adjectives.

Here you see iA Writer's style check:

And that's how colorful it can look when you highlight all parts of the text:

Everything can be switched on and off in detail.

But especially great is the possibility of publishing texts via different services/APIs directly from iA Writer:

As you can see, the most popular interfaces are represented here. Particularly interesting is Micropub, because the standard is not tailored to a specific CMS. For my case, though, it was only of limited use.

Sebastian has written a great plugin for it, which also allowed me to publish short posts as notes via my smartphone – and my texts from iA Writer.

The plugin is unfortunately no longer being developed. Although it still worked with the latest Kirby 4 versions, it became too risky for me; an unmaintained interface to my own CMS is not a good idea in the long term.

There is also a fundamental problem here: communication only goes in one direction, from the editor to the blog, but not back. It's only meant for publishing.

When it comes to managing my texts, though, I unfortunately never really warmed up to iA Writer. The file/folder view never felt clear enough to me, especially when documents are connected and sorted into different "categories".

Ulysses

Ulysses is a bit better positioned here. Besides folders, you can also create projects, so different content can be grouped well. That helps me a lot because I don't just write texts for my own blog and can create a separate project for each blog.

Ulysses also comes with its own style and grammar check, which works very well, although it's not as prominent as in iA Writer. The recommendations here are often more sophisticated and extensive; usually, several suggestions are made on how to phrase something more elegantly.

Naturally, the interface is also quite minimalistic, although not as minimal as in iA Writer. I generally find iA Writer to be a bit "rounder," and I especially like the iA Writer font.

In Ulysses, you can also publish texts directly. As you can see, all the common channels are represented here as well. Unfortunately, there is no Micropub, but there is a direct interface to Micro.blog. A feature request from a few years ago unfortunately hasn't changed anything here so far.

Here the problem of the missing feedback channel remains. Texts can be published but not retrieved back into the editor.

Obsidian

The all-in-one-show.

Obsidian is open source and free. Accordingly, it is widely used. There are numerous themes and at least as many plugins to extend the editor.

Some users' favorite pastime is to tune Obsidian with plugins and self-built contraptions until it works like a project management system, then produce endless video series about it on YouTube, and finally complain about how complicated Obsidian is and announce that they are now switching to Notion.

Don't panic! Obsidian can be like that, but you have to make an effort. After installation, Obsidian is first and foremost a text editor with a few basic, helpful functions.

I still opted for a theme because I absolutely wanted something minimal. I use Obsidian Boom with some additional settings so that everything looks clean:

Because I like the font from iA Writer so much, I downloaded and installed it. It is freely available but limited in its usage.

Out of the box, Obsidian doesn't have any special style or grammar features. This can be achieved with a plugin. It then works similarly well to Ulysses:

Here I am using the LanguageTool Integration Plug-in. Unfortunately, it is no longer actively developed. I hope it keeps working for a while longer, or that someone picks up development again.

I also have one of the Focus plugins in use, which dims all UI elements and thus creates a similarly minimal representation as iA Writer and Ulysses.

There are some plugins for syncing text and settings. Most of them are more like backup syncs or for synchronizing between different computers and mobile devices. Here, I decided on Obsidian Sync a while ago. It seems to run the most stably for me, and this way I can support the developers. The version history already saved my butt once when I was experimenting with a plugin of my own, so it was worth the purchase.

That brings us to the main topic: publishing texts. There are also some plugins here, but nothing that would work with Kirby. However, I was pleasantly surprised to find out that Obsidian plugins are written in TypeScript, something I earn my money with and therefore know well.

I couldn't resist any longer and started writing my own plugin.

Connect Obsidian and Kirby

My problem has been mentioned often enough: I want to be able to keep texts synchronized between Kirby and my editor. This affects both the text itself and its metadata.

To achieve this, I need a plugin for both Kirby and Obsidian. The Kirby plugin needs to provide an interface that allows me to read from and write to Kirby. The Obsidian plugin must execute these requests and work with the results, such as updating my local files.

I approached the whole thing experimentally in an extra Obsidian vault for that purpose. I wanted to play around a bit without having to fear data loss.

It turned out that I need four endpoints performing the following tasks:

  • Update data in Kirby
  • Update data in Obsidian
  • Create a new page in Kirby
  • Create a new page in Obsidian

In essence, this covers three cases:

  1. I created a new text in Obsidian and now want to publish it in Kirby.
  2. I have a text in Kirby that I don't have locally yet and want to "download" it into Obsidian.
  3. I have the text in both Obsidian and Kirby and want to update one of them.

The Obsidian plugin

As mentioned before, Obsidian plugins are written in TypeScript, which I like very much because I earn my living with it. So I got started and first looked at how everything works in Obsidian. Fortunately, everything is quite straightforward and with Ollama in my IDE, I made quick progress without much basic knowledge of the Obsidian API.

First, I had to figure out what I want to synchronize. The text itself, of course, but also a handful of metadata. The problem here is the different formats in which metadata is stored. If they were identical, I wouldn't need any plugins and could probably just work directly with the files.

Obsidian stores metadata in the Frontmatter format. That's what it looks like right now when writing this article:

---
type: text
aliases: 
date: 2024-12-05
channel: blog
status: draft
sync: false
slug: 
title: 
intro: 
tags:
---

The representation in Obsidian looks like this:

Kirby, however, stores metadata in a slightly different format:

Title: 

----

Intro: 

----

Text:

----

Date: 

----

Tags: 

As you can see, the two differ slightly in their format and content. While metadata such as date and tags are written directly into the text file for both, Kirby identifies the page status (draft, unlisted, listed) via the folder, thus not storing this information directly in the file.
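To make the difference concrete, here is a small TypeScript sketch (purely illustrative, not part of either plugin) of how a flat metadata object could be serialized into Kirby's field format:

```typescript
// Purely illustrative: serialize a flat metadata object into Kirby's
// content-file format, where fields are separated by "----" lines.
function toKirbyContent(fields: Record<string, string>): string {
    return Object.entries(fields)
        .map(([key, value]) => {
            // Kirby capitalizes field names in the text file
            const name = key.charAt(0).toUpperCase() + key.slice(1);
            return `${name}: ${value}`;
        })
        .join("\n\n----\n\n");
}

// toKirbyContent({ title: "Hello", tags: "kirby, obsidian" })
// yields "Title: Hello" and "Tags: kirby, obsidian" separated by "----"
```

In practice, no such function is needed, as the next paragraph explains.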

Kirby does most of the translation work for me. Since I can use all Kirby classes and methods in the API, I don't have to worry about the format. I simply send all metadata and text as separate data sets.

I already have a Kirby plugin called "Internal API" in which I provide various endpoints to do small things or display information on the Mac. So I could simply extend the plugin. Ultimately, it's a plugin that provides a few routes.

Thanks to the relatively simple use case, I could go the classic CRUD1 way here. Actually, only CRU because I deliberately omitted deleting. Here's what the endpoints look like:

[
    'pattern' => 'ENDPOINT/(:any)/(:any)/(:any)',
    'method' => 'VERB',
    'action' => function ($channel, $folder, $slug) {
        $request = kirby()->request();
        $requestData = $request->data();
        $requestHeaders = $request->headers();

        if ($requestHeaders['Authorization'] !== 'Bearer ' . option('mauricerenck.obsidian.token')) {
            return new Response('Unauthorized', 'text/plain', 401);
        }

        // DO STUFF

        return new Response(json_encode($data), 'application/json');
    },
],

This route exists three times, with each differing in the verb:

  • Create -> 'method' => 'POST'
  • Read -> 'method' => 'GET'
  • Update -> 'method' => 'PUT'

The ENDPOINT is actually named something else, of course.

I could have chosen a route here that catches all three cases, but I decided against it and opted for some code duplication to make everything clearer and ultimately more organized.

As you can see, each request is secured by a token. Every request must send the correct token in the header; otherwise it will be rejected. This way, I secure the endpoints; otherwise anyone could simply send and read data.

The structure

Before we continue, a few words about the underlying structure. On my website, I have various sections that I use to separate content somewhat. Essentially, these are Blog, Hub, Notes and soon a fourth section. I'm calling them channels.

So that I don't have endlessly long directory lists, the respective channels are further subdivided: the blog and notes by year, and the hub by topic.

So the structure is:

/blog/2025/article-slug/template.en.md

or

/hub/built-with-kirby/article-slug/template.en.md

With a look at the metadata in Obsidian, you've probably already seen something similar:

channel: blog
slug: article-slug

One level is missing: the year or topic. Here, I decided to rebuild the structure in Obsidian. If you look at the Obsidian directory tree, it is identical to the Kirby structure:

The ATTACHMENTS directory can be ignored; this is where Obsidian stores linked files, such as images.
As you can see, there are yearly directories here, just like in Kirby.

This is how I then build the API endpoint URL in the Obsidian plugin:

const options = {
    url: `${this.settings.apiBaseUrl}/${channel}/${folder}/${slug}`,
    method: 'POST|PUT|GET',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.settings.apiToken}`
    },
    body: JSON.stringify(data)
};

We have a base URL, which is the URL of my website plus the aforementioned ENDPOINT, followed by the channel, e.g. blog, then the folder, which for this post is 2025, and finally the slug. The slug is ultimately the filename of the Markdown file in Obsidian; in Kirby, it's the directory name of the post.
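Filled in with concrete values (the base URL and "ENDPOINT" here are placeholders, not my real endpoint), the pieces combine like this:

```typescript
// Illustrative only: how the URL parts combine for this very post.
// The base URL and "ENDPOINT" are placeholders, not the real endpoint.
const apiBaseUrl = "https://example.com/ENDPOINT";
const channel = "blog";               // the section of the site
const folder = "2025";                // year (blog/notes) or topic (hub)
const slug = "obsidian-kirby-sync";   // file name in Obsidian, folder name in Kirby

const url = `${apiBaseUrl}/${channel}/${folder}/${slug}`;
// -> https://example.com/ENDPOINT/blog/2025/obsidian-kirby-sync
```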

Case 1: Create a new page

One use case is creating a new page in Kirby. Let's assume I've already written a text in Obsidian. Sticking with our example, it should now be published on the blog.

To avoid synchronizing what shouldn't be synchronized, I built a sync checkbox. Only when this is activated can a file be synchronized at all. You can see this further up in the screenshot.

Overall: The question of what, when, and how is synchronized, and whether it has already been synchronized, came up quite early.

Here, a mixture of different statuses and the sync checkbox are at work.

"how-to-burn-money" is unsynchronized and filed as an idea. This means there are usually only a few bullet points and I don't yet know if it will ever become a text.

"obsidian-kirby-sync" is already in progress, but has not been synchronized. It exists as a new file, but hasn't found its way to Kirby yet.

"example 2" is synchronized and is a draft in Kirby.

"example" is synchronized and published, but not listed.

"sociabli" is synchronized, published, and visible on the blog.

In our example, we have an article in progress, which has not yet been synchronized. Therefore, in the plugin, we must first collect some data locally so that we can send it to Kirby:

const data = await this.readFileData(file);
const channel = data.frontmatter.channel;
const currentYear = new Date().getFullYear();
const folder = file.parent?.name || currentYear.toString();

try {
    // REQUEST CODE FROM ABOVE WITH THIS CHANGES
    // url: `${this.settings.apiBaseUrl}/${channel}/${folder}`,
    // method: 'POST',

    const response = await requestUrl(options);

    if (response.status !== 200) {
        console.error(`Failed to fetch API data from ${channel}`);
        return {};
    }

    this.syncFile(file, response.json);
}

First we read the current Markdown file, retrieve the channel from the Frontmatter data and get the current year. Finally, we define how the folder should be named. Does the file have a parent directory? Then we take its name, otherwise the year. The last case shouldn't actually ever happen, but I want to be safe here.

I've already shown above how the actual API call works. Because we are creating a new article, it is a POST request.

If the call fails for any reason, the error will be logged and the process aborted. If everything went well, we get the stored and enriched data back from the API.

If something goes utterly wrong, the catch block kicks in and raises the alarm.

catch (error) {
    console.error(`Error: Could not create page`);
    new Notice(`Error: Could not create page`);
    this.updateStatus('Error: Could not create page');
    return {};
}

First, the error is logged, I also show a notification, and at the bottom of the Obsidian status bar there's also a message. This should be hard to overlook.

Kirby plugin

As shown in the route above, Kirby receives the data and checks the token. Channel, folder, and slug are passed via the URL directly into the route; the text is in the POST body and must be read out.

Next, I check if the folder even exists. If not, I stop immediately:

$parent = kirby()->page($channel . '/' . $folder);

if (is_null($parent)) {
    return new Response('Not Found', 'text/plain', 404);
}

Now it's time to bring the data into the Kirby format and save it.

$newData = [
    'title' => $requestData['frontmatter']['title'],
    'intro' => $requestData['frontmatter']['intro'],
    'text' => $requestData['content'],
    'tags' => $requestData['frontmatter']['tags'],
];

kirby()->impersonate('kirby');
$page = Page::create([
    'parent' => $parent,
    'slug' => Str::slug($requestData['frontmatter']['title']),
    'template' => 'post',
    'content' => $newData
]);

I'm building an array here that I can pass to the Page::create call. In addition, I'm also generating a slug from the title here. My standard post template serves as the template.
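On the Kirby side, Str::slug does the work. A much-simplified TypeScript equivalent might look like this (a sketch only; the real Str::slug also transliterates special characters such as umlauts):

```typescript
// Much-simplified sketch of a slug function: lowercase, replace runs of
// non-alphanumeric characters with hyphens, trim leading/trailing hyphens.
// Kirby's real Str::slug also transliterates characters like umlauts.
function slugify(title: string): string {
    return title
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, "-")
        .replace(/^-+|-+$/g, "");
}

// slugify("Obsidian Kirby Sync!") -> "obsidian-kirby-sync"
```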

This creates a new page in Kirby and returns it directly to me, so I can immediately access it via $page.

First, though, I'm checking what status the page should have. I could set it up in Obsidian so that it's published directly:

if (in_array($requestData['frontmatter']['status'], ['listed', 'unlisted'])) {
    $page->changeStatus($requestData['frontmatter']['status']);
}

The initial state is always draft, so I only need to check for listed and unlisted here. If one of them is set, the page is published accordingly.

Now we are enriching the data further and sending it back to Obsidian:

$data = [
    'frontmatter' => [
        'tags' => $page->tags()->split(','),
        'date' => $page->date()->toDate('c'),
        'status' => $page->status(),
        'title' => $page->title()->value(),
        'intro' => $page->intro()->value(),
        'sync' => true,
        'slug' => $page->slug(),
        'channel' => $channel,
    ],
    'modified' => $page->modified('c'),
    'content' => $page->text()->value(),
    'folder' => $page->parent()->uid(),
    'headers' => $requestHeaders,
];

return new Response(json_encode($data), 'application/json');

It's important that I return the date and slug because Obsidian didn't have this information before. There is a method in the Obsidian plugin to update the local file which receives this data.

Receiving data in Obsidian

To prevent unnecessary updates and infinite loops, I first check if there are any changes at all:

const originalContent = await this.app.vault.read(file);
const frontmatter = this.app.metadataCache.getFileCache(file)?.frontmatter;

let updatedContent = originalContent;

I read out the current local state and write it to updatedContent, which seems a bit odd, but will be explained soon.

First, let's get to the metadata:

if (frontmatter) {
    const frontmatterEndIndex = originalContent.indexOf("---", 3) + 3;
    const existingFrontmatter = parseYaml(originalContent.slice(3, frontmatterEndIndex - 3));
    const updatedFrontmatter = { ...existingFrontmatter, ...apiData.frontmatter };
    const newFrontmatterBlock = `---\n${stringifyYaml(updatedFrontmatter)}---`;

    updatedContent = newFrontmatterBlock + originalContent.slice(frontmatterEndIndex);
} else {
    apiData.frontmatter.sync = true;
    updatedContent = `---\n${stringifyYaml(apiData.frontmatter)}---\n${originalContent}`;
}

I first create an object out of the Frontmatter text block. Then I simply overwrite it with the data from the API. I don't care about individual fields here; I'm taking quite a risk. In the end, I write the data to updatedContent and append the text below.

If the file doesn't have any Frontmatter at all, I take the API data 1:1. This comes into play later when we create a new file in Obsidian with the data from Kirby.

Now I'm finally updating the actual content:

if (apiData.content) {
    const frontmatterEndIndex = updatedContent.indexOf("---", 3) + 3;
    updatedContent = updatedContent.slice(0, frontmatterEndIndex) + "\n" + apiData.content;
}

Here, I just take what the API delivers.

Now to the tricky part. The local filename should always correspond to the remote slug, so we have to work on that:

const newSlug = apiData.frontmatter.slug;
const currentTitle = file.name.replace(/\.md$/, ""); 
console.log(`Checking slug ${file.path} vs ${newSlug}.md`);

if(apiData.folder !== file.parent?.name) {
    console.log(`Moving file ${file.path} to ${apiData.folder}`);
    await this.moveFile(file, apiData.folder, newSlug);
} else if (newSlug && newSlug !== currentTitle) {
    console.log(`Renaming file ${file.path} to ${newSlug}.md`);
    await this.renameFile(file, newSlug);
}

I grab the slug from the API, take the current file name, and remove the extension. If the folder has changed, I move the file and rename it to the new slug. If only the slug has changed, I rename the file.

This already anticipates a bit what we need for other syncs.

Finally, we write the file, if necessary:

if (originalContent !== updatedContent) {
    this.pluginStatus = "pulled";
    await this.app.vault.modify(file, updatedContent);
}

this.updateStatus('Sync Complete');

Case 2: Downloading a page

The second case deals with the opposite direction. There is already a page in Kirby, and I now want it in Obsidian as well. This is usually the case for older articles that I wrote before I had the plugins running.

For the Kirby part, this is the simplest case. The route listens to the channel, folder, and slug. It's a GET request. As always, the token is checked first, and then I retrieve the page:

$page = kirby()->page($channel . '/' . $folder . '/' . $slug);

if (is_null($page)) {
    return new Response('Not Found', 'text/plain', 404);
}

If it doesn't exist, it is acknowledged accordingly and the process ends. If the page exists, the same array as in Create is generated and returned to Obsidian.

To make this work, I need a way to specify the channel, folder, and slug in Obsidian; there is a dialog for that:

Very spartan, I know; for now it just has to work. I need Kirby's slug here, so I have to look it up in the panel. That's not optimal, but since this case is very rare, it's okay for me.

If I click submit, the GET request is sent:

const options = {
    url: `${this.settings.apiBaseUrl}/${channel}/${folder}/${slug}`,
    method: 'GET',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${this.settings.apiToken}`
    },
};

Because I get back the same data structure as with create, on success I can simply call the same method from the end of case 1. The logic remains the same. Kirby sends me all the data back, especially the slug, which is now used to name the file accordingly and possibly move it to the correct directory.

Case 3: Update Kirby with local data

In this case, both ends are already syncing. I now want to send targeted local changes to Kirby. A PUT request. On the Kirby side, this works almost exactly like a POST request.

I fetch the affected page again, create an array with the new data, and then call an update instead of a create:

$page->update($newData);

After that, I return the page data like during the create and Obsidian can update the local file if necessary.

Case 4: Update the local file

The last case. Here, both ends are already syncing, and changes I made in the panel should now be applied locally as well.

Locally, we already have all the data we need to make the GET request. A piece of cake for Kirby. I get the page, read the data and deliver it again in the familiar format back to Obsidian.

Well, and Obsidian can then simply use the method from before to update the file again.

Fine-tuning

Boldly, I initially decided to always perform a sync and retrieve data from Kirby when opening a file locally (if the checkbox was set, of course).

I thought I could take the risk because I had Obsidian listening for changes and synchronizing towards Kirby whenever Obsidian saved the file.

This was a bit too bold and in production it led to data loss. I had written a lot locally, remotely there was still a "test" in the text, and somehow I managed to open the file without sending it to Kirby beforehand. Result: My text was gone, a cheerful "test" laughed at me. Thanks to Obsidian Sync, I could then restore the last state.

I then decided against automatic syncing...

From the beginning, I had the option to start a sync with a command. To do this, I open the command palette with cmd+p and type kirby. Then I can choose what I want to do:

As you can see, all four cases are listed here. For "Create page from remote", the dialog shown above for entering the slug is opened. Otherwise, the described processes are triggered.

Such a command can be created in Obsidian as follows:

this.addCommand({
    id: 'kirby-pull-remote',
    name: 'Update local file',
    checkCallback: (checking: boolean) => {
        const activeFile = this.app.workspace.getActiveFile();

        if (activeFile && activeFile.extension === "md") {
            if (!checking) {
                this.pluginStatus = "pulling";
                this.handleFileModify(activeFile, true);
            }

            return true;
        }

        return false;
    }
});

A few checks make sure all prerequisites are met, then the sync is called. The other commands work similarly.

Finally, I'm also using a plugin that lets me give files icons. I use this plugin to set a suitable icon based on the file status, as you can see in the screenshot above.

Conclusion

I'm very satisfied with this setup so far. My texts are always at the same state on both ends. Thanks to Obsidian Sync, my plugin runs on all devices.

Of course, a little more is going on. Above, you already saw the pluginStatus, which ensures that there are no loops and that nothing overlaps. In Kirby, I convert the Obsidian syntax for images into a Kirbytag. From ! [ [obsidian-8.png ] ] it becomes ( image: obsidian-8.png ) when the page is rendered in Kirby (I added the spaces so that the tags are not replaced).
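Sketched in TypeScript (the actual conversion happens in PHP inside the Kirby plugin at render time; this regex is just an illustration), the replacement boils down to:

```typescript
// Illustration only: the actual conversion happens in PHP in the Kirby
// plugin when the page is rendered. Obsidian's image embed becomes a Kirbytag.
function obsidianImageToKirbytag(text: string): string {
    // ![[file.png]] -> (image: file.png)
    return text.replace(/!\[\[([^\]]+)\]\]/g, "(image: $1)");
}
```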

Speaking of pictures! I don't synchronize them yet, so they have to be uploaded separately. That would be a nice feature for the future.

So I can merrily synchronize, my Obsidian looks great thanks to the theme and font, and I can write very pleasantly there. Thanks to Obsidian Sync, I'm not afraid of data loss and can access my Vault on all devices. I have good style and grammar checking via the plugin, and I can see the status of my files at a glance in the directory tree.

There is still massive potential for improvement. All of this is hacked together wildly because I've tried a lot and learned a lot. That's why both plugins are still a bit far from being publishable. And I'm not even sure how much need there is for that at all. You could write me a comment.


  1. CRUD stands for Create, Read, Update, Delete. 

]]>
Generating an Open Graph Image https://maurice-renck.de/en/learn/built-with-kirby/open-graph-image-plugin https://maurice-renck.de/en/@/page/i7dCu6VODO2yxXWK Mon, 30 Dec 2024 16:00:00 +0100 Maurice Renck kirby-cms

When a link to our website is shared, an automatically generated image should be displayed in the preview. We use my Kirby plugin for this.

OpenGraph Image - Maurice Renck
The OG image plugin automatically generates a preview image for each of your pages, which can then be displayed when the page is shared or embedded.

When we share our blog posts, notes, or other pages on Mastodon, Bluesky, or elsewhere, these posts should look good. It’s nice to have an image displayed in the preview. This is possible using specific meta tags. However, we don’t always have an image ready for every page we want to share. In such cases, we can automatically generate an image.

The plugin allows us to implement various scenarios. Essentially, we want to cover three cases:

  1. We already have a graphic to use when sharing.
  2. We don’t have a graphic, so a new one should be created.
  3. We have a graphic, but it should be incorporated into a newly created image.

The first case is straightforward: we already have an image and want to use it without changes.

The other two cases are slightly more complex. If no graphic is available, a new image should be created and used as-is. In the third case, we have a post image, but we don’t want to use it directly as the sharing image—instead, it should be enhanced with additional information.

All these cases will be handled using the OG Image plugin.

Installing the Plugin

The OG Image plugin can be installed like any other Kirby plugin. I recommend using Composer for installation because it makes updates easy without requiring us to check for them manually. To install the plugin, navigate to your website directory in the command line and run the following command:

composer require mauricerenck/ogimage

If you don’t want to use Composer, you can download a ZIP file here and extract it into your website directory under site/plugins1, creating an ogimage folder.

The plugin is now ready to use and can be configured. However, we’ll need a few things first.

Preparations

The plugin requires two files to function:

  1. A graphic template in PNG format
  2. A TrueType font

The template can theoretically be any size, but an image size of 1600x930 pixels is recommended. This is also the plugin's default setting. So first, create an image of this size. Here’s what mine looks like:

My Template

As you can see, I opted for a simple colored background. At the bottom, I show the website URL, and on the right side, I’ve cut out a transparent circle. I’ll explain why I did this later.

I also grabbed the font I use on my website.

Both files are placed in the assets/og-image directory. We’ll reference them in the configuration shortly.

Using the Template

Now let’s configure the plugin, which currently doesn’t know about our template or font.

Open Kirby’s configuration file located at site/config/config.php2.

Add a new configuration for the plugin and specify the path to the graphic template:

"mauricerenck.ogimage" => [
    "image.template" => "assets/og-image/template.png",
],

As you can see, this is a relative path, always starting from the document root of your Kirby site.

All future options will go within "mauricerenck.ogimage" => [ … ].

Setting the Title

Next, we configure the text. The OG image should display the page title in the chosen font. Start by specifying the path to the font file, just like we did for the template:

"font.path" => "assets/my-webfont.ttf",

At this point, you can already generate the image. Access a post in your browser and append /og-image to the URL to see the new image:

First Result

As you can see, the plugin used our template and inserted the title in the specified font. The transparent area is now filled with a color—we’ll fix that later.

First, let’s focus on the title. It’s too crammed into the top-left corner and is barely readable due to the color. We’ll now:

  1. Change the color
  2. Adjust the position

"font.color" => [255, 255, 255],
"title.position" => [70, 215],

We set the font color using font.color to an RGB value, in this case, white. The title position is set with title.position, where the array contains the X and Y values in pixels.

Reloading the image in the browser … nothing changes. The plugin doesn’t just display the image; it creates a file. This prevents the server from generating a new image every time someone views it, which could overwhelm the server if a link is shared widely.

To see the changes, delete the existing image first.

The filename depends on your Kirby configuration, especially if your site is multilingual. The pattern is generated-og-image.LANGUAGE.png. If no language is set, LANGUAGE defaults to default; otherwise, it uses the language code.
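As a hypothetical helper (my own sketch, simply following the pattern just described), predicting the filename could look like this:

```typescript
// Hypothetical helper following the pattern described above:
// generated-og-image.LANGUAGE.png, with "default" for single-language sites.
function ogImageFilename(languageCode?: string): string {
    return `generated-og-image.${languageCode ?? "default"}.png`;
}

// ogImageFilename("en") -> "generated-og-image.en.png"
// ogImageFilename()     -> "generated-og-image.default.png"
```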

After deletion and reloading, the new image looks like this:

Second Result

This looks better. The text is readable and no longer stuck in the corner. However, it extends into the colored area, and the line spacing feels too large. Let’s fix that:

"font.lineheight" => 1.5,
"title.charactersPerLine" => 16,

We set the line height with font.lineheight. With title.charactersPerLine, we define the maximum number of characters per line to prevent the text from spilling into the colored area.
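To get a feel for what charactersPerLine implies, here is a rough sketch of greedy word wrapping (hypothetical; the plugin's actual wrapping logic may differ):

```typescript
// Hypothetical sketch of greedy word wrapping; the plugin's actual logic
// may differ. Words are added to a line until the limit would be exceeded.
function wrapTitle(title: string, charactersPerLine: number): string[] {
    const lines: string[] = [];
    let current = "";

    for (const word of title.split(" ")) {
        const candidate = current === "" ? word : `${current} ${word}`;
        if (candidate.length <= charactersPerLine) {
            current = candidate;
        } else {
            if (current !== "") lines.push(current);
            current = word;
        }
    }
    if (current !== "") lines.push(current);
    return lines;
}

// wrapTitle("Generating an Open Graph Image", 16)
// -> ["Generating an", "Open Graph Image"]
```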

These values depend heavily on your template and font. Experiment with a long title to find the best settings.

Now the image looks like this:

Third Result

This is already much better. Now let’s tackle the three scenarios described earlier.

Three Variations

Pre-Made Graphics

If you already have a graphic to use when sharing, no new image needs to be created. The plugin can check for a custom OG image field. By default, it looks for a field named ogImage, but this can be changed in the configuration using the field option. In our case, we’ll keep the default and create the field in the panel:

ogImage:
    type: files
    label: OG Image
    layout: cards
    multiple: false
    image:
      cover: true

We configure the field as a file field that allows selecting only one image.

The plugin does the rest. When the OG image is requested, it checks the configured field. If an image exists, it’s used; if not, a new one is generated.

Embedding an Image

If a post image exists, it should be embedded into our OG image, which is why we created the transparent area on the right.

First, we tell the plugin where to find the post image. Then, we crop it to the appropriate size and position it:

"heroImage.field" => 'hero',
"heroImage.cropsize" => [610, 610],
"heroImage.position" => [900, 163],

Our circle is about 600 pixels, so we add some tolerance. The post image is in the hero field, cropped to 610x610 pixels. It’s positioned 900 pixels to the right and 163 pixels down.

Now set a post image and request the OG image:

Fourth Result

The plugin uses Kirby’s crop function. You can set a focal point in the panel to ensure the relevant part of the image is retained after cropping.

Filling the Circle

If no post image is available, the circle is currently filled with a color. We can change this color. In this variant, we’ll match the circle’s color to the template’s background, making it invisible:

"heroImage.fallbackColor" => [40, 46, 59],

Set an RGB value matching the background. Now the OG image looks like this:

Fifth Result

Simple but functional.

This feels too plain for me, so I’d rather display a logo. Instead of using a fallback color, we can configure a fallback image.

I created a new image containing the logo in the appropriate position:

The Fallback Image

Place this file in the assets/og-image directory as fallback.png. Update the configuration:

"heroImage.fallbackImage" => "assets/og-image/fallback.png",

Now the OG image looks like this:

OG Image with Fallback

This looks much better. You can further refine it by experimenting with font sizes and positions, but I’ll leave that to you.

To wrap up, here’s the complete configuration:

"mauricerenck.ogimage" => [
    "image.template" => "assets/og-image/template.png",

    "font.path" => "assets/spectral-regular-webfont.ttf",
    "font.color" => [255, 255, 255],
    "font.lineheight" => 1.5,

    "title.charactersPerLine" => 16,
    "title.position" => [70, 215],

    "heroImage.field" => 'hero',
    "heroImage.cropsize" => [610, 610],
    "heroImage.position" => [900, 163],
    "heroImage.fallbackColor" => [40, 46, 59],
    "heroImage.fallbackImage" => "assets/og-image/fallback.png",
],

We define the template, font path, color, and line height. We set the title position and line break rule. Then we configure the post image field, specify its crop and position, and set fallback options with a color or logo.

With this setup, every page will have an OG image, either manually defined in the panel or automatically generated with a fallback.

Enjoy experimenting!


  1. If you haven’t installed any plugins yet, the plugins folder might not exist. Simply create an empty directory. 

  2. If the configuration file doesn’t exist, you’ll need to create it. Check the Kirby documentation for guidance. 

]]>
Related Pages https://maurice-renck.de/en/learn/built-with-kirby/related-pages https://maurice-renck.de/en/@/page/ygeLYsF5TybOtlfP Fri, 16 Feb 2024 09:15:00 +0100 Maurice Renck kirby-cms

This is how I show related pages under every post on my site.

Below the posts on my website, I would like to recommend additional articles that fit the topic. In the initial versions of my website, I made these selections myself. Of course, over time, this became somewhat cumbersome, as new posts are constantly being added, and the relevance among them is constantly changing. So, I would have to repeatedly edit old articles to ensure that the connections remain accurate.

Naturally, I was not the only one who had something like this on their website. That's why corresponding Kirby plugins and this Cookbook article quickly emerged.

After some time, however, it became apparent to me that the plugins and examples offered did not deliver the results I had envisioned. Therefore, I decided to delve deeper into the topic and build my own solution.

Even beyond "Related Posts," I find the topic of "discoverability" extremely fascinating. We all probably have content somewhere on the web that we don't want to disappear into the crowd unnoticed. On a large scale, we want to be found in search engines; on a smaller scale, it might be recommendations to other posts on our own website.

For Kirby, there is currently a plugin that addresses the problem: Similar.

I had been using Similar for a while, but in some places, it wasn't precise enough for me, and I couldn't get it under control even by refining settings. So, I looked at how Sonja approached the issue, stole a few ideas, and adapted them to my site.

To find similar posts, I will rely on three content fields:

  1. Tags
  2. Text
  3. Title

Tags

I assign more or fewer tags to every post I publish. I use Kirby's Tag Field for this. My tags are global, meaning I fetch all tags from all pages and offer them as autocomplete suggestions.

In the panel, it looks like this:

The corresponding blueprint is as follows:

label: Tags
type: tags
options: query
query: site.index.pluck("tags", ",", true)
translate: false

Via query, I retrieve the contents of all tag fields to have them as autocomplete. Translation should not be possible; the tags apply to every language.

Using these tags as a source for relevant pages is straightforward. First, I fetch the tags of the current page. Then I look in all other published pages for the same tags:

$tags = $this->tags()->split(',');
$pagesWithTags = site()->index()->published()->filterBy('tags', 'in', $tags, ',')->not($this);

Since I call the search for related pages in a Page Method Plugin, $this refers to the current page in all these examples. After obtaining all tags of the current page, I filter all other pages by these tags. As a result, I get a list of pages that all share at least one tag with my current page.

Since the current page would always be included in the result, I exclude it; I certainly don't want to refer to the same page.

Now I could already list all pages with the same tags, but that's not precise enough for me. Instead, I want to assign a weighting to each of the three fields mentioned above. I stole that idea from Sonja.

To do this, I go through all found pages again, retrieve the individual tags of each page, and only return the tags that are also stored in the current page:

If my current page has the tags kirby, cms, and plugin, and a similar page has kirby, cms, and theme, I get the two identical tags kirby and cms as a result in this way.

To remember all similar pages, I fill an array. I don't want to store the entire page at this point, but only its ID, which I can then use to access it later.

Additionally, I want to know how relevant each page really is, so I count the identical tags and calculate a weighting from them. I can change how strongly each field should be weighted later in my config.php:

foreach($pagesWithTags as $page) {
    $pageTags = $page->tags()->split(',');
    $similarTags = array_intersect($pageTags, $tags);
    $uuid = $page->uuid()->toString();

    $similarPages[] = [
        'page' => $uuid,
        'weight' => count($similarTags) * option('mauricerenck.similar-related.tags.weight', 1),
    ];
}

As a result, I now have a list of pages and their respective weights.
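The weighting factor is read from config.php via option(); the title and text comparisons below use the same pattern with their own keys. Collected in one place, the configuration might look like this sketch (keys taken from the option() calls, values matching the defaults in the code):

```php
// config.php – per-field weights, read by option() in the page method.
// The values shown are the defaults used in the code.
"mauricerenck.similar-related" => [
    "tags.weight" => 1,
    "title.weight" => 0.5,
    "text.weight" => 0.95,
],
```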

Title

In the next step, I grab the title to recognize similarities here as well. This is not as simple as it was with the tags, but still relatively straightforward to implement.

To find similarities, I compare all titles word for word. To do this, I get the title of the current page and split it into its components. Then I go back to filtering all published pages, this time with my own filter.

For each of these pages, I get the title and split it into individual words. Now I have an array of words from both titles and can compare them. As with the tags, I now check if there is at least one match. If there is, I get the page as a result. And of course, I also want to exclude the current page here:

$wordsFromTitle = $this->title()->split(' ');
$pagesWithTitle = site()->index()->published()->filter(function($child) use($wordsFromTitle) {
    $wordsFromChildTitle = $child->title()->split(' ');
    return count(array_intersect($wordsFromTitle, $wordsFromChildTitle)) > 0;
})->not($this);

Next, I fill the result array again. I go through all found pages and weight them as before. This works just like with the tags:

foreach($pagesWithTitle as $page) {
    $wordsFromPageTitle = $page->title()->split(' ');
    $similarWords = array_intersect($wordsFromTitle, $wordsFromPageTitle);

    $uuid = $page->uuid()->toString();

    $similarPages[] = [
        'page' => $uuid,
        'weight' => count($similarWords) * option('mauricerenck.similar-related.title.weight', 0.5),
    ];
}

Text

Finally, we come to the most complex field type. I want to go so far as to compare every single word of the text with each other. This is a bit tricky because I also have to split the entire text into its individual words here. This can result in a pretty long word list for a long text and certainly does not contribute to the page loading faster. But more on that later.

I also use the field as a source here and split it into individual words. In this case, I only want words that are longer than one character. This excludes some filler words like I or a in English:

$wordsFromText = $this->text()->split(' ');
$wordsFromText = array_filter($wordsFromText, function($word) {
    return strlen($word) > 1;
});

Now it's time to clean up; I only want "real" words:

$wordsFromText = array_map(function($word) {
    return preg_replace('/[^A-Za-z0-9\-]/', '', $word);
}, $wordsFromText);

Now it gets interesting. To get the most accurate result possible, I will exclude certain words. These are words that don't really have anything to do with the content. Hard to describe. Here's an example:

My page consists of this text:

I want to create a website with the CMS Kirby, for that I write myself a blueprint and a template

For my comparison, I'm not interested in certain words at all, like I, a, with, the, to, etc. These words occur in almost every text and would dilute the result. The really interesting words here are only words like website, CMS, Kirby, blueprint, and template.

If I were to write exclusively in German, I could make my life quite easy and just fetch all capitalized words. However, this doesn't work in English.

So, what to do? I have a very long list of so-called stopwords. These are filler words, as described above. Fortunately, I'm not the only one facing such a problem, and there are some well-maintained lists out there on the web. I opted for the ISO Stopwords, which come in several languages.

The data is available to me as a JSON file. I have to load the file. First, I get the language of the current page. In my case, it's either German or English:

$pageLanguage = kirby()->language()?->code() ?? 'en';
$stopWordsForLanguage = [];

$languagesJson = file_get_contents(__DIR__ . '/stopwords-iso.json');

if($languagesJson !== false) {
    $stopWords = json_decode($languagesJson);
    $stopWordsForLanguage = (isset($stopWords->$pageLanguage)) ? $stopWords->$pageLanguage : $stopWords->en;
}

As a precaution, I check if the file could be loaded. If not, I just have an empty list. Otherwise, I check if the current language is present in the data. If not, my fallback kicks in, and I use English.

Now it's time to filter again. I remove all stopwords from the word list of the current page:

$wordsFromText = array_filter($wordsFromText, function($word) use($stopWordsForLanguage) {
    return !in_array(strtolower($word), $stopWordsForLanguage);
});

And now I go the usual route with a custom filter over all pages again. This time I compare the words of the text. Since I no longer have any stopwords in my source data, I don't have to filter them out for each individual page:

$pagesWithText = site()->index()->published()->filter(function($child) use($wordsFromText) {

    if($child->text()->isEmpty()) return false;
    $wordsFromChildText = $child->text()->split(' ');

    return count(array_intersect($wordsFromText, $wordsFromChildText)) > 0;

})->not($this);

I also check if the page really has a text. There could be pages that are just a listing or use blocks or layouts. I don't want them in my result and exclude them directly. I check the rest again for at least one match and include them accordingly in the list.

Now it's back to looping through all the results and filling the result arrays:

foreach($pagesWithText as $page) {

    $wordsFromPageText = $page->text()->split(' ');
    $similarWords = array_intersect($wordsFromText, $wordsFromPageText);
    $uuid = $page->uuid()->toString();

    $similarPages[] = [
        'page' => $uuid,
        'weight' => count($similarWords) * option('mauricerenck.similar-related.text.weight', 0.95),
    ];

}

Now I have, at best, a very long list of similar pages. Some of these pages have the same tags, some have a similar title or text. It is possible that pages occur multiple times, which is even very likely, because if the tags are already similar, then text fragments will probably also be similar. If a page has the tag kirby, the likelihood is quite high that the word Kirby appears in the text, and therefore the page is found twice in the list.

There are two ways to deal with this:

I could exclude duplicate pages from the beginning. If I already have a list of pages with similar tags, I could exclude these when querying similar titles. With the text, I could then already exclude pages that have both similar tags and titles.

But I want to be a bit smarter about it. I assume that a page that has similar tags and a similar title and a similar text is much more relevant than a page that only shares one tag or in which a few words are the same.

Therefore, the next step is to merge the data and weight the respective page, taking into account various occurrences:

$result = [];
foreach($similarPages as $page) {
    $uuid = $page['page'];

    if(!isset($result[$uuid])) {
        $result[$uuid] = [
            'page' => $uuid,
            'weight' => $page['weight'],
        ];
    } else {
        $result[$uuid]['weight'] += $page['weight'];
    }
}

First, I create an empty array for my result; then I go through all similar pages and get their UUID. If a page does not yet appear in my result list, I add it, using its UUID as the array key and storing the UUID together with its weighting.

If the page is already in the list, I don't add it again, but I add the weighting. So, if a page is in the list three times, namely with tags, title, and text, then all three weightings are added.

Finally, I have a list of all pages without duplicates, with the sum of their respective weightings. Now I want to sort them by weight:

usort($result, fn($a, $b) => $a['weight'] <=> $b['weight']);

Since I can't do much with an array of UUIDs in my template, my result is converted into a page collection in the last step:

$pages = array_map(function($page) {
    return page($page['page']);
}, array_reverse($result));

As a result, I now have a collection of all matching pages, with which I can now work in the template. So, I can output the title of each page with $pageFromCollection->title();, for example, or filter my collection again.

I still have to return my collection to the template:

return new Collection($pages);

I wrapped the whole construct into a plugin and provided it as a page method:

Kirby::plugin('mauricerenck/related-pages', [
    'pageMethods' => [
        'relatedPages' => function () {
            // CODE
        }
    ]
]);

In my template, I can simply call this method, get a collection of pages back, and do something with it. I limit it to three entries and then display them in a loop:

$related = $page->relatedPages()->limit(3);

// Render related pages
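The rendering itself is left open above; a minimal template sketch could look like this (markup and class names are illustrative, not part of the plugin):

```php
<ul class="related-pages">
    <?php foreach ($related as $relatedPage): ?>
        <li>
            <a href="<?= $relatedPage->url() ?>"><?= $relatedPage->title() ?></a>
        </li>
    <?php endforeach; ?>
</ul>
```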

A Word of Caution

I'm very satisfied with the result. The related pages displayed on the site usually fit quite well. I'm considering whether to introduce the time aspect, i.e., weighting newer pages higher than older pages.

However, it must be said: On a website with a lot of pages and/or long texts, the approach could lead to problems. A lot happens, especially in the text comparison, and this can sometimes take a long time, in the worst case leading to timeouts or memory overflows.

It would probably be smarter, therefore, not to perform the whole process with every page request, but via cron job or hook. The result could then be stored in the respective page. Nothing needs to be calculated when the page is called up.

For me, this would be the next step for my little plugin. Currently, it theoretically still runs with every page request. I don't see this as so critical for my site because I cache all pages. The cache is only cleared when I update the code of the page or the content changes. The procedure then runs once on the next page request; after that, only static HTML is served. I haven't noticed any slowdowns on my site so far.

I'm considering sorting my code a bit more, maybe incorporating the above comments, and then publishing it. However, only if there is interest; after all, Sonja's plugin works excellently.

Let me know if you would use the plugin!

]]>
Blogroll https://maurice-renck.de/en/learn/built-with-kirby/blogroll https://maurice-renck.de/en/@/page/8aAmuSGf0ZCLYQRr Tue, 30 Jan 2024 20:00:00 +0100 Maurice Renck kirby-cms

This is how I realized the blogroll on my website

In my blog, there has been a blogroll for quite some time now. I have already described what it's all about here. At this point, I would like to explain how I implemented my blogroll using Kirby.

My blogroll is quite simple: In my blog blueprint, I have a structure field that consists of two fields:

  1. The URL of the blog
  2. The title of the blog

Previously, the blogroll was displayed in icon form under my blog listing. I want to expand it so that the blogroll is also accessible under /blog/blogroll, creating a deep link.

Since I don't want the blogroll to be a standalone page in the CMS, I will create a route for the blogroll and serve a virtual page there.

The Route

In the site/config/config.php file, I create a new route. I want the blogroll to be accessible under blog/blogroll for all languages:

[
    'pattern' => 'blog/blogroll',
    'language' => '*',
    'action' => function ($language) {
      // […]
    }
]

This makes the page available under the corresponding URL. However, if we were to call this URL now, we would get a 404 error. A route must either return output or redirect to another route.

Therefore, in the next step, I create a virtual page that we can return:

    $data = [
        'slug' => 'blogroll',
        'parent' => page('blog'),
        'template' => 'listing-blogroll',
        'translations' => []
    ];

    $page = Page::factory($data);
    site()->visit($page, $language);

    return $page;

First, I set the slug to 'blogroll', then I specify the parent page, in my case, the blog. Finally, the page needs a template to be rendered. Here, I use a special blogroll template. I will talk about translations later.

Template

I won't describe the complete template here, just the relevant parts. These are essentially the page header with a short description and the actual listing.

The page header consists of the title and a short description:

<div class="page-intro">
    <h1><?= $page->title(); ?></h1>
    <?= $page->intro()->kt(); ?>
</div>

The listing uses a snippet that I already use in my notes:

<ul class="note-list">
    <?php foreach ($blogroll as $blog) : ?>
        <?php $site->organism('list-entry-note', ['note' => $blog]); ?>
    <?php endforeach; ?>
</ul>

Here, it's worth noting that I use two site methods for snippets, which I have written myself. In this case, organism(). Basically, they work the same as Kirby's snippet() function. I use my methods to add a bit more organization to my snippets. I'll probably talk more about this in another post.

So, my list-entry-note snippet receives a blog from my blogroll and displays it. It needs three pieces of information:

  1. A title
  2. A URL
  3. An icon

The data comes from the corresponding controller.

The Controller

In Kirby, controllers are used to separate data logic from templates. I like this approach and try to keep my templates as simple as possible. Anything related to data should ideally happen in the controller.

This is what the controller for the blogroll looks like:

return function ($page) {
    $blogroll = page('blog')->blogroll()->toStructure();
    $blogEntries = [];

    foreach ($blogroll as $blog) {
        $url = str_replace(['https://', 'http://', '/'], '', Url::stripPath($blog->url()->value()));
        $icon = $page->getSiteIcon($url);

        $blogEntries[] = new StructureObject([
            'content' => [
                'title' => $blog->title(),
                'url' => $blog->url(),
                'icon' => $icon,
                'intendedTemplate' => 'blogroll'
            ]
        ]);
    }

    $blogrollStructure = new Structure($blogEntries);

    return [
        'blogroll' => $blogrollStructure,
    ];
};

First, I get the blog because that's where the data is stored. I directly access the blogroll field and retrieve it as a structure. Then, I populate the $blogEntries array with data. It gets the title, the URL, and an icon.

For the icon, I use a custom method that tries to fetch the favicon of a page and provides a fallback if necessary. The result is a data:image/png;base64 string, not an image URL. This allows me to cache the favicons for a while, so I don't have to query numerous other sites on every page load (I'll talk more about this in another post).

Finally, I create a new structure from the array because my note entry snippet expects it that way (it usually receives a note, and that is a Kirby page).

Completing the Virtual Page

Now I'm almost there; a few pieces of information are still missing in the virtual page. It needs a title and a description, among other things. These are hidden in the translations. I decided not to maintain this data in the panel for now because I probably won't adjust it very often:

'translations' => [
    'en' => [
        'code' => 'en',
        'content' => [
            'title' => 'Blogroll',
            'date' => '2024-01-30',
            'intro' => 'Blogs I read regularly and can recommend.',
            ]
        ],
    'de' => [
        'code' => 'de',
        'content' => [
            'title' => 'Blogroll',
            'date' => '2024-01-30',
            'intro' => 'Blogs, die ich regelmäßig lese und die ich empfehlen kann.',
            'uuid' => Uuid::generate(),
        ]
    ]
],

As you can see, the necessary data is now in the virtual page. The German language variant additionally gets a UUID; it is my main language.

The Finished Blogroll

The page is now accessible under blog/blogroll. The template receives data from the controller and renders it using the snippet. You can see the result here.

I can continue to add new entries easily in the blog blueprint, and the virtual page takes care of the rest without having to create an extra page for it in the panel.

]]>
Crosspost from Mastodon to Bluesky https://maurice-renck.de/en/learn/tooling/crosspost-from-mastodon-to-bluesky https://maurice-renck.de/en/@/page/MrYmdXwbzDM0ajBX Thu, 11 Jan 2024 16:45:00 +0100 Maurice Renck indieweb

How to automatically send your Mastodon posts to Bluesky

Since there is so much interest in the tool, we decided to turn it into a service. You are welcome to register for the free beta phase if you do not want to host the script yourself.

Sociabli
Sociabli helps you to share your content online. Post once and let Sociabli sync to other services and platforms.

In addition to Mastodon, two other networks have emerged: Threads.net and Bluesky. While Threads is currently working slowly on integrating ActivityPub and does not yet provide any API, Bluesky already has a few interfaces available.

Thanks to the API, it is possible to post without having to use the app or website. We can use this to write a script that helps us with crossposting.

Crossposting

The goal: Write a post on Mastodon and then automatically, shortly afterward, post it on Bluesky as well. We will achieve this with a small Node.js script that can run continuously in the background, taking care of the process for us.

Preparations

To post on Bluesky, we need API access. It is advisable not to use your regular Bluesky login credentials, but to create an AppPassword. To do this, go to Settings and create a new AppPassword. First, we need to set a meaningful name, for example, MastodonToBluesky.

A new password will be generated, which should be securely stored, preferably in our preferred password manager. Afterward, we won't be able to view the password on Bluesky.

To publish posts on Bluesky, we will use the official atproto package. It makes our lives quite easy, takes care of logging in with our login credentials, and helps us later in creating the post.

On the other side, we need to tap into Mastodon to get our latest posts. We could use the Mastodon API, but Mastodon conveniently provides an Outbox for each account, entirely without an API. This Outbox responds with a JSON document containing all the data we need. So we don't need to bother with API access; we can simply call the following URL:

https://INSTANCE/users/USERNAME/outbox?page=true

Of course, INSTANCE and USERNAME need to be replaced with the correct data; in my case, it would be mastodon.online and mauricerenck. For a quick check, simply open the URL in your browser, and you should see a JSON text response. The page parameter allows us to navigate through the records forward and backward.

The Workflow

Now that we have access to data and interfaces on both sides, let's take a brief look at how the finished script will work:

  • Every few minutes, we retrieve the Mastodon Outbox and fetch the latest posts.
  • We go through all posts and filter out replies, as they are not relevant for Bluesky.
  • We send each remaining post as a new post to Bluesky using the API.

Saving the Status

However, we have a problem: if we go through the list of Mastodon posts every few minutes, we won't always find exclusively new posts there. If we go through the posts each time and post them 1:1 on Bluesky, it will lead to a mass of duplicates. Therefore, we need to somehow remember which posts we have already forwarded to Bluesky. To accomplish this, we save the corresponding ID in a file. If our script crashes or is otherwise terminated or restarted, we can simply read this file again on the next start and know at which position we left off in the last run.

The Script

Let's start by creating two files:

  1. main.js
  2. lastProcessedPostId.txt

The file main.js contains our source code, and the second file is used to store the last processed Mastodon post ID. We initially enter a 0 in it. This is important!

When starting the script, we first read this last ID. To do this, we first set the path and filename for the file from which we can read this information. Then we read the file and store the ID in a variable:

// File to store the last processed Mastodon post ID
const lastProcessedPostIdFile = path.join(__dirname, 'lastProcessedPostId.txt');

// Variable to store the last processed Mastodon post ID
let lastProcessedPostId = loadLastProcessedPostId();

// Function to load the last processed post ID from the file
function loadLastProcessedPostId() {
  try {
    return fs.readFileSync(lastProcessedPostIdFile, 'utf8').trim();
  } catch (error) {
    console.error('Error loading last processed post ID:', error);
    return null;
  }
}

If the file lastProcessedPostId.txt cannot be read for any reason, we log an error message and fall back to null. If everything goes well, we have the last ID stored in the variable lastProcessedPostId and can work with it.

Retrieving Posts from Mastodon

Now we can fetch new posts from Mastodon. For this, we need two pieces of information:

  • The Mastodon instance
  • The Mastodon username

Since this information is not security-relevant, we could write it directly into the code, but I don't recommend that and suggest storing it in a configuration file. Later, we will also store our Bluesky credentials there.

We use an environment file .env for this purpose. The advantage is that we can store sensitive information there without it accidentally ending up in the Git repository. We use the dotenv package for this.

So, we create a file named .env and save in it, in my case:

MASTODON_INSTANCE="https://mastodon.online"
MASTODON_USER="mauricerenck"

In our JavaScript file, after importing the dotenv package, we can access this information:

require('dotenv').config()
const mastodonInstance = process.env.MASTODON_INSTANCE;
const mastodonUser = process.env.MASTODON_USER;

Now we are ready and can retrieve the Outbox with a simple GET call. The response should contain a list of the latest posts. As we already know, we still need to filter this list a bit. We would rather not post replies because there will be no corresponding users on Bluesky, and we don't want to post duplicates, so we need to refer to our stored ID:

async function fetchNewPosts() {
  const response = await axios.get(`${mastodonInstance}/users/${mastodonUser}/outbox?page=true`);

  const reversed = response.data.orderedItems.filter(item => item.object.type === 'Note')
    .filter(item => item.object.inReplyTo === null)
    .reverse();

  let newTimestampId = 0;

  reversed.forEach(item => {
    const currentTimestampId = Date.parse(item.published);

    if(currentTimestampId > newTimestampId) {
      newTimestampId = currentTimestampId;
    }

   if(currentTimestampId > lastProcessedPostId && lastProcessedPostId != 0) {
      const text = removeHtmlTags(item.object.content);
      postToBluesky(text);
    }
  })

  if(newTimestampId > 0) {
    lastProcessedPostId = newTimestampId;
    saveLastProcessedPostId();
  }
}

We start with our GET request to retrieve the Mastodon JSON. Then we filter the posts so that we only get posts created by us and no replies. We save the result as an array, which we reverse so that the oldest post is at the top of the list, and the newest one is at the bottom. This way, we can simply go through the list from top to bottom and stay in the correct chronological order.

Each post has the publication time stored as a date. We use this information and create a JavaScript date from it, which we use as an ID. We could also use the ID generated by Mastodon, but since we want to stay chronological, we can assume that we don't need to look further once a date is older or equal to the last processed post.

If the timestamp is newer than the last recorded date, we send the post to Bluesky. Before that, we remove any possible HTML tags; we want pure text.
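removeHtmlTags() is one of the helper functions not shown in this article; a naive sketch, sufficient for simple Mastodon posts but deliberately not a full HTML parser, could look like this:

```javascript
// Naive HTML-to-text conversion: turn line breaks and paragraph ends
// into newlines, strip the remaining tags, and decode common entities.
function removeHtmlTags(html) {
  return html
    .replace(/<br\s*\/?>/gi, '\n')
    .replace(/<\/p>/gi, '\n')
    .replace(/<[^>]*>/g, '')
    .replace(/&amp;/g, '&')
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&quot;/g, '"')
    .replace(/&#39;/g, "'")
    .trim();
}
```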

Finally, we save the newest ID so that we have it accessible the next time the script is started.

Posting to Bluesky

To post on Bluesky, we use the official @atproto/api package. For this, we need a few pieces of information to log in to the API:

  • The endpoint
  • The Handle (Username)
  • The Password

We also store this data in our .env file:

BLUESKY_ENDPOINT="https://bsky.social"
BLUESKY_HANDLE="USERNAME.bsky.social"
BLUESKY_PASSWORD="PASSWORD"

Currently, there is only one Bluesky instance, so the endpoint should look the same everywhere. For the password, we insert the previously created AppPassword.

Now it's time to write the function for posting. For this, we also create the Bluesky agent:

const agent = new BskyAgent({ service: process.env.BLUESKY_ENDPOINT });

Now we can post:

async function postToBluesky(text) {
  await agent.login({
    identifier: process.env.BLUESKY_HANDLE,
    password: process.env.BLUESKY_PASSWORD,
  });

  const richText = new RichText({ text });
  await richText.detectFacets(agent);
  await agent.post({
    text: richText.text,
    facets: richText.facets,
  });
}

We first log in via the agent. Then we pass the text of our Mastodon post to the RichText class. We use it so that we can post not only plain text but, for example, also links, which will be recognized as such.

Now we just need to call this workflow regularly:

setInterval(fetchNewPosts, 2 * 60 * 1000);

This way, we fetch new posts every two minutes and potentially forward them to Bluesky.

I have used a few other functions in the above examples that I don't want to go into detail about here because they are just small helpers. You can view the entire script, including these functions, here:

GitHub - mauricerenck/mastodon-to-bluesky: A Node.js script for crossposting from mastodon to bluesky

Now we can simply let the script run continuously. For this, a Raspberry Pi is a suitable option. Alternatively, I often use Koyeb for such things.

The script now fetches new posts every two minutes, processes them, and forwards them to Bluesky if necessary. On the very first run, nothing has been forwarded yet, and forwarding the entire history at once would flood Bluesky with posts. The script therefore includes a mechanism that only forwards posts published after its first run.

With this, we have a well-functioning but quite rudimentary solution for crossposting. You can either let the script run as it is, or further expand and improve it. Please let me know what improvements you have made — feel free to share them here as a comment.

Good luck with the implementation!

]]>
Automatic social media images https://maurice-renck.de/en/learn/built-with-kirby/automatische-beitragsbilder https://maurice-renck.de/en/@/page/BMUWMci2L197IVG4 Fri, 10 Dec 2021 15:50:00 +0100 Maurice Renck kirby-cms

This way we can automatically generate post images for sharing on Twitter and other platforms.

Update: I put my current plugin online here: https://github.com/mauricerenck/og-image. It can be used as a Kirby plugin, but I won't maintain it, and you'll have to adapt the code to your needs. Maybe this will become an official plugin at some point; right now there is no time for that.

Whenever we share blog posts on Twitter or other platforms, those links draw more attention if they come with an image. This is possible by using a Twitter card or Open Graph tags. Those are meta tags that can be enriched with information like a title, a description, and an image.

In my CMS I have two fields for those meta tags. I can add some text and also upload an image. The problem is, I don't have images for every post, and I don't like searching for generic stock images for every single one. That's why I stole an idea from GitHub (and others): I generate my own images!

I like how GitHub does this. This is how it looks for one of my repositories:

GitHub's og:image

Those images are generated automatically, and that's what I want to achieve.

For that I created an Affinity Designer template. That's not strictly necessary, but it gives me more creative freedom. Which app you use doesn't matter, as long as you get a PNG out of it. This is what it looks like:

My template for the og:image

Creating images with PHP

Let's get started. The image has to be filled with information: I want the title and a description to be displayed. I use Kirby as my CMS and wrote a plugin for it, so I have everything in one place.

The first step is to create an image in PHP:

$canvas = imagecreatetruecolor(1200, 600);

Our template image will be used as a base, so let's load it and copy it into the newly created image:

$background = imagecreatefrompng(__DIR__ . '/assets/background.png');

imagecopyresampled(
    $canvas,
    $background,
    0,
    0,
    0,
    0,
    imagesx($background),
    imagesy($background),
    imagesx($background),
    imagesy($background)
);

Now we will add the page title to the image. For that we use the imagettftext() function. We need a font for that, so let's load it:

$fontRegular = __DIR__ . '/assets/GangsterGrotesk-Regular.ttf';
$fontBold = __DIR__ . '/assets/GangsterGrotesk-Bold.ttf';

Depending on where the font is located you may want to change the path accordingly. I am loading two fonts, a regular one for the description and a bold one for the headline. Let's add the headline to the image:

[$titleX, $titleY] = imagettftext(
  $canvas,
  58,
  0,
  $margin,
  120,
  $black,
  $fontBold,
  $text
);

We're creating text here and putting it onto our image ($canvas). The font size is 58, and there is a margin: I use the variable $margin, which is currently set to 60. The top margin is 120; since I only use it once, I don't store it in a variable. The headline should be black, so I defined a color:

$black = imagecolorallocate($canvas, 0, 0, 0);

I use the bold font and hand in the text, which is stored in the $text variable. This is how it looks:

This looks okay. Unfortunately, we have an invisible problem. Have a look:

If the title is too long, it flows out of the image, so we have to define boundaries. That's not as easy as in an image editor. What we can do is add a line break every n characters. After how many characters depends on the font and the font size; there is no general formula, we have to try. These values work for me:

$text = wordwrap($text, 30, "\n");

There will be a line break after 30 characters. This is how it looks:

Better. The width of the text is safe now. But it could still flow out of the bottom of the image. We'll ignore that; the title shouldn't be that long.

Now we want to show the description. The font size should be a bit smaller, and I also want it to be colored (purple), so I define another color and print the text like this:

$purple = imagecolorallocate($canvas, 139, 126, 164);
$text = wordwrap($text, 48, "\n");

imagettftext(
    $canvas,
    24,
    0,
    $margin,
    $titleY + 70,
    $purple,
    $fontRegular,
    $text
);

Because the text is smaller, line breaks will be added after 48 characters. We use the regular font and a size of 24. Have a look at $titleY + 70: it's not possible to position text relatively, so we have to do some calculations to prevent the description and the title from overlapping. For that we use the return value from drawing the title. imagettftext() returns the eight coordinates of the text's bounding box; the first two, which we destructured into [$titleX, $titleY], are the lower-left corner. We add 70 pixels to that y value, which becomes the y position of the description. This way they will never overlap.



But there is still a problem with long titles. If the title occupies too much space, there won't be enough space left for the description. Have a look:



In this case, I decided not to show the description; a title that long will have to be enough. So we only show the description if the y position is below a certain value. Again, there is no formula for this, so try it out on your own:

if ($titleY <= 315) {
    $text = wordwrap($text, 48, "\n");

    imagettftext(
        $canvas,
        24,
        0,
        $margin,
        $titleY + 70,
        $purple,
        $fontRegular,
        $text
    );
}


Okay, seems to work…

The last thing I want to do is add the URL to the bottom of the image. Again we use text for that, this time colored white ($white is allocated just like $black, with imagecolorallocate($canvas, 255, 255, 255)):

imagettftext(
    $canvas,
    24,
    0,
    120,
    570,
    $white,
    $fontRegular,
    $url
);

Cool, that's fine for me! But some remarks:

Because my CMS does a lot of the work for me, I didn't go into detail on some points. The description, for example, can also be very long, so I tell Kirby to trim it:

$description = $page->intro()->excerpt(200);

Kirby will trim the text after 200 characters and append an ellipsis (…). This could also be done directly in PHP using mb_substr (plain substr would risk cutting a multibyte character in half):

$description = mb_substr($description, 0, 200) . '…';

Again: Numbers depend on the font size.

We also somehow have to add the image to the page. We use meta tags for that:

<meta property="og:image" content="https://maurice-renck.de/de/your-website/automatische-beitragsbilder-im-blog/og-image">

<meta name="twitter:image" content="https://maurice-renck.de/de/your-website/automatische-beitragsbilder-im-blog/og-image">

I use two variants for that: the Open Graph tag and the Twitter card. Both have to be placed inside <head></head>.

Kirby also comes with routing, which means my plugin generates the image automatically whenever /og-image is appended to the page URL. This way I don't have to do anything.
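One step not shown above is how the finished canvas leaves the route as an HTTP response. Here is a minimal sketch; the function name renderOgImage is mine, not part of the plugin:

```php
// Encode the finished GD canvas as PNG bytes.
// $canvas is the image resource built in the steps above.
function renderOgImage($canvas): string
{
    ob_start();
    imagepng($canvas);       // imagepng() writes into the output buffer
    $png = ob_get_clean();
    imagedestroy($canvas);   // free the GD resource

    return $png;
}

// In the route you would then send it with the correct content type:
// header('Content-Type: image/png');
// echo renderOgImage($canvas);
```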

For my site I added the possibility to manually add an image in the Kirby Panel:

Whenever an image is set there, it will be used instead of the generated one. I can also set a custom title and description.
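That selection boils down to a simple fallback, which could be sketched like this (the function and parameter names are mine, for illustration):

```php
// Prefer a manually uploaded image; otherwise fall back to the
// generated /og-image route of the page.
function resolveOgImageUrl(?string $manualImageUrl, string $pageUrl): string
{
    return $manualImageUrl ?? $pageUrl . '/og-image';
}
```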

I am quite happy with that solution. Maybe this is helpful for you, too.

Do you have a question or some feedback? Please write a comment.

]]>