cleanup + now topper
This commit is contained in:
parent
6c81b89874
commit
82604bd42b
38 changed files with 123 additions and 8 deletions
@ -0,0 +1,146 @@

---
title: 'Adding client-side rendered webmentions to my blog'
date: '2023-02-09'
draft: false
tags: ['webmentions', 'development', 'javascript']
---
My blog is currently hosted on weblog.lol, which allows for a simple, configurable weblog managed in git, with posts formatted in markdown. I wanted to add webmentions to my blog which, as of now, doesn't include a build step. To accomplish this, I've added an intermediary API endpoint to the same next.js app that powers my [/now](https://coryd.dev/now) page.<!-- excerpt -->

Robb has [a handy write-up on adding webmentions to your website](https://rknight.me/adding-webmentions-to-your-site/), which I followed — first adding the appropriate Mastodon link to my blog template, registering for webmention.io and Bridgy, then adding the appropriate tags to my template document's `<head>` to record mentions.

Next it was simply a question of rendering the output from the webmentions endpoint.

My next.js API route looks like this:
```typescript
export default async function handler(req: any, res: any) {
  const KEY_CORYD = process.env.API_KEY_WEBMENTIONS_CORYD_DEV
  const KEY_BLOG = process.env.API_KEY_WEBMENTIONS_BLOG_CORYD_DEV
  const DOMAIN = req.query.domain
  const TARGET = req.query.target
  const data = await fetch(
    `https://webmention.io/api/mentions.jf2?token=${
      DOMAIN === 'coryd.dev' ? KEY_CORYD : KEY_BLOG
    }${TARGET ? `&target=${TARGET}` : ''}&per-page=1000`
  ).then((response) => response.json())
  res.json(data)
}
```

I have a pair of keys, selected based on which domain was mentioned, though this is only used on my blog at present. I also support passing through the `target` parameter but don't leverage it at the moment.
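The handler's URL construction can be factored into a small, testable helper. This is a sketch of my own, not code from the site; it additionally assumes `target` should be URL-encoded, which the handler above doesn't do:

```typescript
// Hypothetical helper mirroring the handler's webmention.io URL construction.
// The token argument stands in for whichever API key the domain selects.
const buildMentionsUrl = (token: string, target?: string): string =>
  `https://webmention.io/api/mentions.jf2?token=${token}${
    target ? `&target=${encodeURIComponent(target)}` : ''
  }&per-page=1000`

// → https://webmention.io/api/mentions.jf2?token=TOKEN&per-page=1000
console.log(buildMentionsUrl('TOKEN'))
```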
This is called on the client side as follows:
```javascript
document.addEventListener('DOMContentLoaded', () => {
  ;(function () {
    const formatDate = (date) => {
      const d = new Date(date)
      let month = '' + (d.getMonth() + 1)
      let day = '' + d.getDate()
      const year = d.getFullYear()

      if (month.length < 2) month = '0' + month
      if (day.length < 2) day = '0' + day

      return [month, day, year].join('-')
    }
    const webmentionsWrapper = document.getElementById('webmentions')
    const webmentionsLikesWrapper = document.getElementById('webmentions-likes-wrapper')
    const webmentionsBoostsWrapper = document.getElementById('webmentions-boosts-wrapper')
    const webmentionsCommentsWrapper = document.getElementById('webmentions-comments-wrapper')
    if (webmentionsWrapper && window) {
      try {
        fetch('https://utils.coryd.dev/api/webmentions?domain=blog.coryd.dev')
          .then((response) => response.json())
          .then((data) => {
            const mentions = data.children
            if (mentions.length === 0 || window.location.pathname === '/') {
              webmentionsWrapper.remove()
              return
            }

            let likes = ''
            let boosts = ''
            let comments = ''

            mentions.forEach((mention) => {
              if (
                mention['wm-property'] === 'like-of' &&
                mention['wm-target'].includes(window.location.href)
              ) {
                likes += `<a href="${mention.url}" rel="noopener noreferrer"><img class="avatar" src="${mention.author.photo}" alt="${mention.author.name}" /></a>`
              }

              if (
                mention['wm-property'] === 'repost-of' &&
                mention['wm-target'].includes(window.location.href)
              ) {
                boosts += `<a href="${mention.url}" rel="noopener noreferrer"><img class="avatar" src="${mention.author.photo}" alt="${mention.author.name}" /></a>`
              }

              if (
                mention['wm-property'] === 'in-reply-to' &&
                mention['wm-target'].includes(window.location.href)
              ) {
                comments += `<div class="webmention-comment"><a href="${
                  mention.url
                }" rel="noopener noreferrer"><div class="webmention-comment-top"><img class="avatar" src="${
                  mention.author.photo
                }" alt="${mention.author.name}" /><div class="time">${formatDate(
                  mention.published
                )}</div></div><div class="comment-body">${
                  mention.content.text
                }</div></a></div>`
              }
            })

            webmentionsLikesWrapper.innerHTML = ''
            webmentionsLikesWrapper.insertAdjacentHTML('beforeEnd', likes)
            webmentionsBoostsWrapper.innerHTML = ''
            webmentionsBoostsWrapper.insertAdjacentHTML('beforeEnd', boosts)
            webmentionsCommentsWrapper.innerHTML = ''
            webmentionsCommentsWrapper.insertAdjacentHTML('beforeEnd', comments)
            webmentionsWrapper.style.opacity = 1

            if (likes === '') document.getElementById('webmentions-likes').innerHTML = ''
            if (boosts === '') document.getElementById('webmentions-boosts').innerHTML = ''
            if (comments === '') document.getElementById('webmentions-comments').innerHTML = ''

            if (likes === '' && boosts === '' && comments === '') webmentionsWrapper.remove()
          })
      } catch (e) {
        webmentionsWrapper.remove()
      }
    }
  })()
})
```

This JavaScript is all quite imperative — it verifies the existence of the appropriate DOM nodes, concatenates templated HTML strings and then injects them into the targeted DOM elements. If there aren't mentions of a supported type, that type's container is emptied. If there are no mentions at all, the whole node is removed.

The webmentions HTML shell is as follows:
```html
<div id="webmentions" class="background-purple container">
  <div id="webmentions-likes">
    <h2><i class="fa-solid fa-fw fa-star"></i> Likes</h2>
    <div id="webmentions-likes-wrapper"></div>
  </div>
  <div id="webmentions-boosts">
    <h2><i class="fa-solid fa-fw fa-rocket"></i> Boosts</h2>
    <div id="webmentions-boosts-wrapper"></div>
  </div>
  <div id="webmentions-comments">
    <h2><i class="fa-solid fa-fw fa-comment"></i> Comments</h2>
    <div id="webmentions-comments-wrapper"></div>
  </div>
</div>
```

And there you have it — webmentions loaded client side and updated as they occur. There's an example visible on my post [Automating (and probably overengineering) my /now page](https://blog.coryd.dev/2023/02/automatingandprobablyoverengineeringmy-nowpage#webmentions).
390 src/posts/2023/automating-and-overengineering-my-now-page.md (Normal file)

@ -0,0 +1,390 @@

---
title: 'Automating (and probably overengineering) my /now page'
date: '2023-02-06'
draft: false
tags: ['automation', 'development', 'nextjs', 'javascript']
---
[omg.lol](https://home.omg.lol) (where I point my domain and host most of my site content) [recently launched support for /now pages](https://omglol.news/2023/01/16/now-pages-are-here).<!-- excerpt -->

**[nownownow.com](https://nownownow.com)**

> ...a link that says “**now**” goes to a page that tells you **what this person is focused on at this point in their life.** For short, we call it a “now page”.

This page can be updated manually but, as with just about everything offered by omg.lol, there's an API for submitting updates to the page. I blog infrequently and knew I would fail to update the page by hand, which presented an opportunity to automate it. My page is available at [coryd.dev/now](https://coryd.dev/now).

Borrowing from [Robb Knight](https://rknight.me), I started by creating a paste containing `yaml` with static text to fill out the top of my now page with brief details about family, work and hobbies (or lack thereof).

From there, I turned to the myriad content-based services I use to track what I'm listening to, what TV and movies I'm watching and what books I'm reading, and sourced updates from them.

I'm already exposing my most recently listened tracks and actively read books on my omg.lol home page/profile. This data is fetched from a [next.js](https://nextjs.org) application hosted over at [Vercel](https://vercel.com) that exposes a number of endpoints. For my music listening data, I'm using a route at `/api/music` that looks like this:
```typescript
export default async function handler(req: any, res: any) {
  const KEY = process.env.API_KEY_LASTFM
  const METHODS: { [key: string]: string } = {
    default: 'user.getrecenttracks',
    albums: 'user.gettopalbums',
    artists: 'user.gettopartists',
  }
  const METHOD = METHODS[req.query.type] || METHODS['default']
  const data = await fetch(
    `http://ws.audioscrobbler.com/2.0/?method=${METHOD}&user=cdme_&api_key=${KEY}&limit=${
      req.query.limit || 20
    }&format=${req.query.format || 'json'}&period=${req.query.period || 'overall'}`
  ).then((response) => response.json())
  res.json(data)
}
```

This API takes a `type` parameter and passes through several of Last.fm's stock parameters so it can be reused for both my now-listening display and the `/now` page.
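The method selection and parameter defaulting can be sketched as a small pure helper. The helper name is mine; the method strings and defaults mirror the handler above:

```typescript
// Sketch of the handler's Last.fm method lookup and defaults; `type`,
// `limit` and `period` stand in for the incoming query parameters.
const METHODS: { [key: string]: string } = {
  default: 'user.getrecenttracks',
  albums: 'user.gettopalbums',
  artists: 'user.gettopartists',
}

const musicQuery = (type?: string, limit?: number, period?: string): string =>
  `method=${METHODS[type ?? 'default'] || METHODS['default']}&limit=${limit ?? 20}&period=${
    period ?? 'overall'
  }`

// → method=user.gettopartists&limit=8&period=7day
console.log(musicQuery('artists', 8, '7day'))
```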
Last.fm's API returns album images, but no longer returns artist images. To solve this, I've created an `/api/media` endpoint that checks for an available static artist image and returns a placeholder if that check yields a 404. When that happens, I log the missing artist name to a paste at omg.lol's paste.lol service:
```typescript
import siteMetadata from '@/data/siteMetadata'

export default async function handler(req: any, res: any) {
  const env = process.env.NODE_ENV
  let host = siteMetadata.siteUrl
  if (env === 'development') host = 'http://localhost:3000'
  const ARTIST = req.query.artist
  const ALBUM = req.query.album
  const MEDIA = ARTIST ? 'artists' : 'albums'
  const MEDIA_VAL = ARTIST ? ARTIST : ALBUM

  const data = await fetch(`${host}/media/${MEDIA}/${MEDIA_VAL}.jpg`).then((response) => {
    if (response.status === 200) return `${host}/media/${MEDIA}/${MEDIA_VAL}.jpg`
    fetch(
      `${host}/api/omg/paste-edit?paste=404-images&editType=append&content=${MEDIA_VAL}`
    ).then((response) => response.json())
    return `${host}/media/404.jpg`
  })
  res.redirect(data)
}
```

For my reading data, Oku.club exposes an [RSS feed](https://en.wikipedia.org/wiki/RSS) for all collection views. I'm using [@extractus/feed-extractor](https://www.npmjs.com/package/@extractus/feed-extractor) to transform that RSS feed to JSON and expose it as follows:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'

export default async function handler(req: any, res: any) {
  const env = process.env.NODE_ENV
  let host = siteMetadata.siteUrl
  if (env === 'development') host = 'http://localhost:3000'
  const url = `${host}/feeds/books`
  const result = await extract(url)
  res.json(result)
}
```

For television watch data, Trakt offers an RSS feed of my watched history, which is served as an endpoint as follows:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'

export default async function handler(req: any, res: any) {
  const KEY = process.env.API_KEY_TRAKT
  const env = process.env.NODE_ENV
  let host = siteMetadata.siteUrl
  if (env === 'development') host = 'http://localhost:3000'
  const url = `${host}/feeds/tv?slurm=${KEY}`
  const result = await extract(url, {
    getExtraEntryFields: (feedEntry) => {
      return {
        image: feedEntry['media:content']['@_url'],
        thumbnail: feedEntry['media:thumbnail']['@_url'],
      }
    },
  })
  res.json(result)
}
```

For movie data from Letterboxd we are, again, transforming my profile's RSS feed:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'

export default async function handler(req: any, res: any) {
  const env = process.env.NODE_ENV
  let host = siteMetadata.siteUrl
  if (env === 'development') host = 'http://localhost:3000'
  const url = `${host}/feeds/movies`
  const result = await extract(url)
  res.json(result)
}
```

This all comes together in yet another, perhaps overwrought, endpoint at `/api/now`. Calls to this endpoint are authenticated with a bearer token and each endpoint response is configured to return JSON, Markdown and, in the case of sections with more complex layouts (music artists and albums), HTML. The contents of that endpoint are as follows:
```typescript
import jsYaml from 'js-yaml'
import siteMetadata from '@/data/siteMetadata'
import { listsToMarkdown } from '@/utils/transforms'
import { getRandomIcon } from '@/utils/icons'
import { nowResponseToMarkdown } from '@/utils/transforms'
import { ALBUM_DENYLIST } from '@/utils/constants'

export default async function handler(req: any, res: any) {
  const env = process.env.NODE_ENV
  const { APP_KEY_OMG, API_KEY_OMG } = process.env
  const ACTION_KEY = req.headers.authorization?.split(' ')[1]

  let host = siteMetadata.siteUrl
  if (env === 'development') host = 'http://localhost:3000'

  try {
    if (ACTION_KEY === APP_KEY_OMG) {
      const now = await fetch('https://api.omg.lol/address/cory/pastebin/now.yaml')
        .then((res) => res.json())
        .then((json) => {
          const now = jsYaml.load(json.response.paste.content)
          Object.keys(now).forEach((key) => {
            now[key] = listsToMarkdown(now[key])
          })

          return { now }
        })

      const books = await fetch(`${host}/api/books`)
        .then((res) => res.json())
        .then((json) => {
          const data = json.entries.slice(0, 5).map((book: { title: string; link: string }) => {
            return {
              title: book.title,
              link: book.link,
            }
          })
          return {
            json: data,
            md: data
              .map((d: any) => `- [${d.title}](${d.link}) {${getRandomIcon('books')}}`)
              .join('\n'),
          }
        })

      const movies = await fetch(`${host}/api/movies`)
        .then((res) => res.json())
        .then((json) => {
          const data = json.entries
            .slice(0, 5)
            .map((movie: { title: string; link: string; description: string }) => {
              return {
                title: movie.title,
                link: movie.link,
                desc: movie.description,
              }
            })
          return {
            json: data,
            md: data
              .map((d: any) => `- [${d.title}](${d.link}): ${d.desc} {${getRandomIcon('movies')}}`)
              .join('\n'),
          }
        })

      const tv = await fetch(`${host}/api/tv`)
        .then((res) => res.json())
        .then((json) => {
          const data = json.entries
            .slice(0, 5)
            .map((episode: { title: string; link: string; image: string; thumbnail: string }) => {
              return {
                title: episode.title,
                link: episode.link,
                image: episode.image,
                thumbnail: episode.thumbnail,
              }
            })
          return {
            json: data,
            html: data
              .map(
                (d: any) =>
                  `<div class="container"><a href=${d.link} title='${d.title}'><div class='cover'></div><div class='details'><div class='text-main'>${d.title}</div></div><img src='${d.thumbnail}' alt='${d.title}' /></a></div>`
              )
              .join('\n'),
            md: data
              .map((d: any) => `- [${d.title}](${d.link}) {${getRandomIcon('tv')}}`)
              .join('\n'),
          }
        })

      const musicArtists = await fetch(
        `https://utils.coryd.dev/api/music?type=artists&period=7day&limit=8`
      )
        .then((res) => res.json())
        .then((json) => {
          const data = json.topartists.artist.map((a: any) => {
            return {
              artist: a.name,
              link: `https://rateyourmusic.com/search?searchterm=${encodeURIComponent(a.name)}`,
              image: `${host}/api/media?artist=${a.name.replace(/\s+/g, '-').toLowerCase()}`,
            }
          })
          return {
            json: data,
            html: data
              .map(
                (d: any) =>
                  `<div class="container"><a href=${d.link} title='${d.artist}'><div class='cover'></div><div class='details'><div class='text-main'>${d.artist}</div></div><img src='${d.image}' alt='${d.artist}' /></a></div>`
              )
              .join('\n'),
            md: data
              .map((d: any) => `- [${d.artist}](${d.link}) {${getRandomIcon('music')}}`)
              .join('\n'),
          }
        })

      const musicAlbums = await fetch(
        `https://utils.coryd.dev/api/music?type=albums&period=7day&limit=8`
      )
        .then((res) => res.json())
        .then((json) => {
          const data = json.topalbums.album.map((a: any) => ({
            title: a.name,
            artist: a.artist.name,
            link: `https://rateyourmusic.com/search?searchterm=${encodeURIComponent(a.name)}`,
            image: !ALBUM_DENYLIST.includes(a.name.replace(/\s+/g, '-').toLowerCase())
              ? a.image[a.image.length - 1]['#text']
              : `${host}/api/media?album=${a.name.replace(/\s+/g, '-').toLowerCase()}`,
          }))
          return {
            json: data,
            html: data
              .map(
                (d: any) =>
                  `<div class="container"><a href=${d.link} title='${d.title} by ${d.artist}'><div class='cover'></div><div class='details'><div class='text-main'>${d.title}</div><div class='text-secondary'>${d.artist}</div></div><img src='${d.image}' alt='${d.title} by ${d.artist}' /></a></div>`
              )
              .join('\n'),
            md: data
              .map(
                (d: any) => `- [${d.title}](${d.link}) by ${d.artist} {${getRandomIcon('music')}}`
              )
              .join('\n'),
          }
        })

      await fetch('https://api.omg.lol/address/cory/now', {
        method: 'post',
        headers: {
          Authorization: `Bearer ${API_KEY_OMG}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          content: nowResponseToMarkdown({
            now,
            books,
            movies,
            tv,
            music: {
              artists: musicArtists,
              albums: musicAlbums,
            },
          }),
          listed: 1,
        }),
      })

      res.status(200).json({ success: true })
    } else {
      res.status(401).json({ success: false })
    }
  } catch (err) {
    res.status(500).json({ success: false })
  }
}
```

This endpoint also supports a denylist for albums returned from Last.fm that might not be appropriate to display in polite company — if an album is in the denylist we look for an alternate, statically hosted cover or serve our 404 placeholder if one isn't readily available.
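Distilled out of the handler above, the denylist check looks roughly like this (the denylist entry and image values here are placeholders, not my real data):

```typescript
// Sketch of the album-cover denylist fallback. ALBUM_DENYLIST holds
// slugified album names; anything on it is routed to /api/media
// instead of using the Last.fm-provided image.
const ALBUM_DENYLIST = ['some-album'] // hypothetical entry

const albumImage = (name: string, lastFmImage: string, host: string): string => {
  const slug = name.replace(/\s+/g, '-').toLowerCase()
  return ALBUM_DENYLIST.includes(slug) ? `${host}/api/media?album=${slug}` : lastFmImage
}
```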
For items displayed from Markdown I'm attaching a random FontAwesome icon (e.g. `getRandomIcon('music')`):
```typescript
export const getRandomIcon = (type: string) => {
  const icons: { [key: string]: string[] } = {
    books: ['book', 'book-bookmark', 'book-open', 'book-open-reader', 'bookmark'],
    music: ['music', 'headphones', 'record-vinyl', 'radio', 'guitar', 'compact-disc'],
    movies: ['film', 'display', 'video', 'ticket'],
    tv: ['tv', 'display', 'video'],
  }

  return icons[type][Math.floor(Math.random() * icons[type].length)]
}
```
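The index math matters here: `Math.floor(Math.random() * length)` yields a uniform pick over every index (multiplying by `length - 1` would never select the last icon). A generic helper makes the boundary behavior easy to verify; this helper is my illustration, not part of the post's codebase:

```typescript
// Generic uniform pick; rng defaults to Math.random but is injectable
// so the boundary behavior can be checked deterministically.
const randomItem = <T>(items: T[], rng: () => number = Math.random): T =>
  items[Math.floor(rng() * items.length)]

// → c (rng just under 1 selects the last item)
console.log(randomItem(['a', 'b', 'c'], () => 0.999))
```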

As the final step to wrap this up, calls to `/api/now` are made every 8 hours using a GitHub action:
```yaml
name: scheduled-cron-job
on:
  schedule:
    - cron: '0 */8 * * *'
jobs:
  cron:
    runs-on: ubuntu-latest
    steps:
      - name: scheduled-cron-job
        run: |
          curl -X POST 'https://utils.coryd.dev/api/now' \
          -H 'Authorization: Bearer ${{ secrets.ACTION_KEY }}'
```

This endpoint can also be called manually using another workflow:
```yaml
name: manual-job
on: [workflow_dispatch]
jobs:
  cron:
    runs-on: ubuntu-latest
    steps:
      - name: manual-job
        run: |
          curl -X POST 'https://utils.coryd.dev/api/now' \
          -H 'Authorization: Bearer ${{ secrets.ACTION_KEY }}'
```

So far this works seamlessly — if I want to update or add static content I can do so via my yaml paste at paste.lol and the change will roll out in due time.

Questions? Comments? Feel free to get in touch:

- [Email](mailto:hi@coryd.dev)
- [Mastodon](https://social.lol/@cory)

---

Robb Knight has a [great post](https://rknight.me/automating-my-now-page/) on his process for automating his `/now` page using [Eleventy](https://www.11ty.dev) and mirroring it to omg.lol.
254 src/posts/2023/automating-rss-syndication-with-nextjs-github.md (Normal file)

@ -0,0 +1,254 @@

---
title: 'Automating RSS syndication and sharing with Next.js and GitHub'
date: 2023-02-23
draft: false
tags: ['nextjs', 'rss', 'automation', 'github']
---
I wrote a basic syndication tool in Next.js to automate sharing items from configured RSS feeds to Mastodon. This tool works by leveraging a few basic configurations, the Mastodon API and a (reasonably) lightweight script that creates a JSON cache when initialized and posts new items on an hourly basis.<!-- excerpt -->

The script that handles this functionality lives at `lib/syndicate/index.ts`:
```typescript
import { toPascalCase } from '@/utils/formatters'
import { extract, FeedEntry } from '@extractus/feed-extractor'
import { SERVICES, TAGS } from './config'
import createMastoPost from './createMastoPost'

export default async function syndicate(init?: string) {
  const TOKEN_CORYDDEV_GISTS = process.env.TOKEN_CORYDDEV_GISTS
  const GIST_ID_SYNDICATION_CACHE = '406166f337b9ed2d494951757a70b9d1'
  const GIST_NAME_SYNDICATION_CACHE = 'syndication-cache.json'
  const CLEAN_OBJECT = () => {
    const INIT_OBJECT = {}
    Object.keys(SERVICES).forEach((service) => (INIT_OBJECT[service] = []))
    return INIT_OBJECT
  }

  async function hydrateCache() {
    const CACHE_DATA = CLEAN_OBJECT()
    for (const service in SERVICES) {
      const data = await extract(SERVICES[service])
      const entries = data?.entries
      entries.forEach((entry: FeedEntry) => CACHE_DATA[service].push(entry.id))
    }
    await fetch(`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`, {
      method: 'PATCH',
      headers: {
        Authorization: `Bearer ${TOKEN_CORYDDEV_GISTS}`,
        'Content-Type': 'application/vnd.github+json',
      },
      body: JSON.stringify({
        gist_id: GIST_ID_SYNDICATION_CACHE,
        files: {
          'syndication-cache.json': {
            content: JSON.stringify(CACHE_DATA),
          },
        },
      }),
    })
      .then((response) => response.json())
      .catch((err) => console.log(err))
  }

  const DATA = await fetch(`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`).then(
    (response) => response.json()
  )
  const CONTENT = DATA?.files[GIST_NAME_SYNDICATION_CACHE].content

  // rewrite the sync data if init is reset
  if (CONTENT === '' || init === 'true') await hydrateCache()

  if (CONTENT && CONTENT !== '' && !init) {
    const existingData = await fetch(
      `https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`
    ).then((response) => response.json())
    const existingContent = JSON.parse(existingData?.files[GIST_NAME_SYNDICATION_CACHE].content)

    for (const service in SERVICES) {
      const data = await extract(SERVICES[service], {
        getExtraEntryFields: (feedEntry) => {
          return {
            tags: feedEntry['cd:tags'],
          }
        },
      })
      const entries: (FeedEntry & { tags?: string })[] = data?.entries
      if (!existingContent[service].includes(entries[0].id)) {
        let tags = ''
        if (entries[0].tags) {
          entries[0].tags
            .split(',')
            .forEach((a, index) =>
              index === 0 ? (tags += `#${toPascalCase(a)}`) : (tags += ` #${toPascalCase(a)}`)
            )
          tags += ` ${TAGS[service]}`
        } else {
          tags = TAGS[service]
        }
        existingContent[service].push(entries[0].id)
        createMastoPost(`${entries[0].title} ${entries[0].link} ${tags}`)
        await fetch(`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`, {
          method: 'PATCH',
          headers: {
            Authorization: `Bearer ${TOKEN_CORYDDEV_GISTS}`,
            'Content-Type': 'application/vnd.github+json',
          },
          body: JSON.stringify({
            gist_id: GIST_ID_SYNDICATION_CACHE,
            files: {
              'syndication-cache.json': {
                content: JSON.stringify(existingContent),
              },
            },
          }),
        })
          .then((response) => response.json())
          .catch((err) => console.log(err))
      }
    }
  }
}
```

We start off with an optional `init` parameter that can be passed into our `syndicate` function to hydrate our syndication cache — the structure of this cache is essentially `SERVICE_KEY: string[]`, where `string[]` contains RSS post IDs. Now, given that Vercel is intended for front end hosting, I needed a reasonably simple and reliable way to store a small JSON object. I didn't want to involve a full-fledged database or storage solution and wasn't terribly interested in dealing with S3 or B2 for this purpose, so I instead went with a "secret" GitHub gist[^1] and leveraged the GitHub API for storage. At each step of the [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) process in this script we make a call to the GitHub API using a token for authentication, deal with the returned JSON and go on our merry way.
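The cache shape described above can be written down as a type plus a membership check. This is a sketch of mine; the IDs shown are placeholders:

```typescript
// Sketch of the syndication cache: service keys map to arrays of
// already-posted RSS entry IDs.
type SyndicationCache = Record<string, string[]>

const cache: SyndicationCache = {
  'coryd.dev': ['https://coryd.dev/blog/some-post'], // hypothetical entry ID
  glass: [],
}

const alreadyPosted = (c: SyndicationCache, service: string, id: string): boolean =>
  (c[service] ?? []).includes(id)

// → true
console.log(alreadyPosted(cache, 'coryd.dev', 'https://coryd.dev/blog/some-post'))
```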
Once the cache is hydrated, the script checks the feeds available in `lib/syndicate/config.ts`, posts the most recent item if it does not exist in the cache and then adds it to said cache. The configured services are simply:
```typescript
export const SERVICES = {
  'coryd.dev': 'https://coryd.dev/feed.xml',
  glass: 'https://glass.photo/coryd/rss',
  letterboxd: 'https://letterboxd.com/cdme/rss/',
}
```

As we iterate through this object we also attach tags specific to each service, using an object shaped exactly like `SERVICES` in `config.ts`:
```typescript
export const TAGS = {
  'coryd.dev': '#Blog',
  glass: '#Photo #Glass',
  letterboxd: '#Movie #Letterboxd',
}
```

This is partly for discovery and partly a consistent way for folks to filter my automated nonsense should they so choose. The formats of Glass and Letterboxd posts are consistent and the tags are as well — for posts from my site (like this one 👋🏻) I start with `#Blog` and have also modified the structure of my RSS feed to expose the tags I add to each post. The feed is generated by a script that runs at build time called `generate-rss.ts`, which looks like:
```typescript
import { escape } from '@/lib/utils/htmlEscaper'
import siteMetadata from '@/data/siteMetadata'
import { PostFrontMatter } from 'types/PostFrontMatter'

const generateRssItem = (post: PostFrontMatter) => `
  <item>
    <guid>${siteMetadata.siteUrl}/blog/${post.slug}</guid>
    <title>${escape(post.title)}</title>
    <link>${siteMetadata.siteUrl}/blog/${post.slug}</link>
    ${post.summary ? `<description>${escape(post.summary)}</description>` : ''}
    <pubDate>${new Date(post.date).toUTCString()}</pubDate>
    <author>${siteMetadata.email} (${siteMetadata.author})</author>
    ${post.tags ? post.tags.map((t) => `<category>${t}</category>`).join('') : ''}
    <cd:tags>${post.tags}</cd:tags>
  </item>
`

const generateRss = (posts: PostFrontMatter[], page = 'feed.xml') => `
  <rss version="2.0"
    xmlns:cd="https://coryd.dev/rss"
    xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>${escape(siteMetadata.title)}</title>
      <link>${siteMetadata.siteUrl}/blog</link>
      <description>${escape(siteMetadata.description.default)}</description>
      <language>${siteMetadata.language}</language>
      <managingEditor>${siteMetadata.email} (${siteMetadata.author})</managingEditor>
      <webMaster>${siteMetadata.email} (${siteMetadata.author})</webMaster>
      <lastBuildDate>${new Date(posts[0].date).toUTCString()}</lastBuildDate>
      <atom:link href="${siteMetadata.siteUrl}/${page}" rel="self" type="application/rss+xml"/>
      ${posts.map(generateRssItem).join('')}
    </channel>
  </rss>
`
export default generateRss
```

I've added a new namespace to the parent `<rss...>` tag called `cd`[^2] — the declaration points to a page at this site that (very) briefly explains its purpose. I then created a `<cd:tags>` field that exposes a comma-delimited list of post tags.

Back in `syndicate/index.ts`, this field is accessed when the RSS feed is parsed:
```typescript
const data = await extract(SERVICES[service], {
  getExtraEntryFields: (feedEntry) => {
    return {
      tags: feedEntry['cd:tags'],
    }
  },
})
...
let tags = ''
if (entries[0].tags) {
  entries[0].tags
    .split(',')
    .forEach((a, index) =>
      index === 0 ? (tags += `#${toPascalCase(a)}`) : (tags += ` #${toPascalCase(a)}`)
    )
  tags += ` ${TAGS[service]}`
} else {
  tags = TAGS[service]
}
```

Tags get transformed to Pascal case, prepended with `#` and sent off to be posted to Mastodon along with the static service-specific tags.
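That transform can be sketched as a pure function. `toPascalCase` lives in `@/utils/formatters`; the stand-in below is my assumption about its rough behavior, not the post's actual implementation:

```typescript
// Local stand-in for the toPascalCase helper imported in the script above.
const toPascalCase = (s: string): string =>
  s
    .trim()
    .split(/\s+/)
    .map((w) => w[0].toUpperCase() + w.slice(1))
    .join('')

// Builds the hashtag string: one #PascalCase tag per comma-delimited
// entry, followed by the static service tag.
const toHashtags = (tags: string, serviceTag: string): string =>
  tags
    ? tags
        .split(',')
        .map((t) => `#${toPascalCase(t)}`)
        .join(' ') + ` ${serviceTag}`
    : serviceTag

// → #WebDevelopment #Javascript #Blog
console.log(toHashtags('web development,javascript', '#Blog'))
```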
The function that posts content to Mastodon is as simple as the following:
```typescript
import { MASTODON_INSTANCE } from './config'
const KEY = process.env.API_KEY_MASTODON

const createMastoPost = async (content: string) => {
  const formData = new FormData()
  formData.append('status', content)

  const res = await fetch(`${MASTODON_INSTANCE}/api/v1/statuses`, {
    method: 'POST',
    headers: {
      Accept: 'application/json',
      Authorization: `Bearer ${KEY}`,
    },
    body: formData,
  })
  return res.json()
}

export default createMastoPost
```
|
||||
|
||||
Back at GitHub, this is all kicked off every hour on the hour using the following workflow:
|
||||
|
||||
```yaml
|
||||
name: scheduled-cron-job
|
||||
on:
|
||||
schedule:
|
||||
- cron: '0 * * * *'
|
||||
jobs:
|
||||
cron:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: scheduled-cron-job
|
||||
run: |
|
||||
curl -X POST 'https://coryd.dev/api/syndicate' \
|
||||
-H 'Authorization: Bearer ${{ secrets.VERCEL_SYNDICATE_KEY }}'
|
||||
```
|
||||
|
||||
Now, as I post things elsewhere, they'll make their way back to Mastodon with a simple title, link and tag set. Read them if you'd like, or filter them out altogether.
|
||||
|
||||
[^1]: It's secret inasmuch as it's obscured and, hence, not secured (which is also why `syndicate.ts` includes the gist ID directly) — it's all public post IDs, so peruse as one sees fit.
|
||||
[^2]: Not very creative, I know.
|
|
---
|
||||
title: 'Building a now page using Next.js and social APIs'
|
||||
date: 2023-02-20
|
||||
draft: false
|
||||
tags: ['nextjs', 'web development', 'react', 'api']
|
||||
---
|
||||
|
||||
With my personal site now sitting at Vercel and written in Next.js, I decided to rework my [now](https://coryd.dev/now) page by leveraging a variety of social APIs. I kicked things off by looking through various platforms I use regularly and tracking down those that provide either API access or RSS feeds. For those with APIs I wrote code to access my data directly; for those with feeds only, I've leveraged [@extractus/feed-extractor](https://www.npmjs.com/package/@extractus/feed-extractor) to transform them into JSON responses.<!-- excerpt -->
|
||||
|
||||
The `/now` template in my `pages` directory looks like the following:
|
||||
|
||||
```jsx
|
||||
import siteMetadata from '@/data/siteMetadata'
|
||||
import loadNowData from '@/lib/now'
|
||||
import { useJson } from '@/hooks/useJson'
|
||||
import Link from 'next/link'
|
||||
import { PageSEO } from '@/components/SEO'
|
||||
import { Spin } from '@/components/Loading'
|
||||
import {
|
||||
MapPinIcon,
|
||||
CodeBracketIcon,
|
||||
MegaphoneIcon,
|
||||
CommandLineIcon,
|
||||
} from '@heroicons/react/24/solid'
|
||||
import Status from '@/components/Status'
|
||||
import Albums from '@/components/media/Albums'
|
||||
import Artists from '@/components/media/Artists'
|
||||
import Reading from '@/components/media/Reading'
|
||||
import Movies from '@/components/media/Movies'
|
||||
import TV from '@/components/media/TV'
|
||||
|
||||
const env = process.env.NODE_ENV
|
||||
let host = siteMetadata.siteUrl
|
||||
if (env === 'development') host = 'http://localhost:3000'
|
||||
|
||||
export async function getStaticProps() {
|
||||
return {
|
||||
props: await loadNowData('status,artists,albums,books,movies,tv'),
|
||||
revalidate: 3600,
|
||||
}
|
||||
}
|
||||
|
||||
export default function Now(props) {
|
||||
const { response, error } = useJson(`${host}/api/now`, props)
|
||||
const { status, artists, albums, books, movies, tv } = response
|
||||
|
||||
if (error) return null
|
||||
if (!response) return <Spin className="my-2 flex justify-center" />
|
||||
|
||||
return (
|
||||
<>
|
||||
<PageSEO
|
||||
title={`Now - ${siteMetadata.author}`}
|
||||
description={siteMetadata.description.now}
|
||||
/>
|
||||
<div className="divide-y divide-gray-200 dark:divide-gray-700">
|
||||
<div className="space-y-2 pt-6 pb-8 md:space-y-5">
|
||||
<h1 className="text-3xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-4xl sm:leading-10 md:text-6xl md:leading-14">
|
||||
Now
|
||||
</h1>
|
||||
</div>
|
||||
<div className="pt-12">
|
||||
<h3 className="text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-2xl sm:leading-10 md:text-4xl md:leading-14">
|
||||
Currently
|
||||
</h3>
|
||||
<div className="pl-5 md:pl-10">
|
||||
<Status status={status} />
|
||||
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
|
||||
<MapPinIcon className="mr-1 inline h-6 w-6" />
|
||||
Living in Camarillo, California with my beautiful family, 4 rescue dogs and
|
||||
a guinea pig.
|
||||
</p>
|
||||
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
|
||||
<CodeBracketIcon className="mr-1 inline h-6 w-6" />
|
||||
Working at <Link
|
||||
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
|
||||
href="https://hashicorp.com"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
HashiCorp
|
||||
</Link>
|
||||
</p>
|
||||
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
|
||||
<MegaphoneIcon className="mr-1 inline h-6 w-6" />
|
||||
Rooting for the{` `}
|
||||
<Link
|
||||
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
|
||||
href="https://lakers.com"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
Lakers
|
||||
</Link>
|
||||
, for better or worse.
|
||||
</p>
|
||||
</div>
|
||||
<h3 className="pt-6 text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-2xl sm:leading-10 md:text-4xl md:leading-14">
|
||||
Making
|
||||
</h3>
|
||||
<div className="pl-5 md:pl-10">
|
||||
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
|
||||
<CommandLineIcon className="mr-1 inline h-6 w-6" />
|
||||
Hacking away on random projects like this page, my <Link
|
||||
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
|
||||
href="/blog"
|
||||
passHref
|
||||
>
|
||||
blog
|
||||
</Link> and whatever else I can find time for.
|
||||
</p>
|
||||
</div>
|
||||
<Artists artists={artists} />
|
||||
<Albums albums={albums} />
|
||||
<Reading books={books} />
|
||||
<Movies movies={movies} />
|
||||
<TV tv={tv} />
|
||||
<p className="pt-8 text-center text-xs text-gray-900 dark:text-gray-100">
|
||||
(This is a{' '}
|
||||
<Link
|
||||
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
|
||||
href="https://nownownow.com/about"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
now page
|
||||
</Link>
|
||||
, and if you have your own site, <Link
|
||||
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
|
||||
href="https://nownownow.com/about"
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
you should make one, too
|
||||
</Link>
|
||||
.)
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
</>
|
||||
)
|
||||
}
|
||||
```
|
||||
|
||||
You'll see that the top section is largely static, with text styled using Tailwind and associated icons from the [Hero Icons](https://heroicons.com) package. We're also exporting an instance of `getStaticProps` that's revalidated every hour and makes a call to a method exposed from my `lib` directory called `loadNowData`. `loadNowData` takes a comma-delimited string as an argument to indicate which properties I want returned in the composed object it returns[^1]. The method looks like this[^2]:
|
||||
|
||||
```typescript
|
||||
import { extract } from '@extractus/feed-extractor'
|
||||
import siteMetadata from '@/data/siteMetadata'
|
||||
import { Albums, Artists, Status, TransformedRss } from '@/types/api'
|
||||
import { Tracks } from '@/types/api/tracks'
|
||||
|
||||
export default async function loadNowData(endpoints?: string) {
|
||||
const selectedEndpoints = endpoints?.split(',') || null
|
||||
const TV_KEY = process.env.API_KEY_TRAKT
|
||||
const MUSIC_KEY = process.env.API_KEY_LASTFM
|
||||
const env = process.env.NODE_ENV
|
||||
let host = siteMetadata.siteUrl
|
||||
if (env === 'development') host = 'http://localhost:3000'
|
||||
|
||||
let statusJson = null
|
||||
let artistsJson = null
|
||||
let albumsJson = null
|
||||
let booksJson = null
|
||||
let moviesJson = null
|
||||
let tvJson = null
|
||||
let currentTrackJson = null
|
||||
|
||||
// status
|
||||
if ((endpoints && selectedEndpoints.includes('status')) || !endpoints) {
|
||||
const statusUrl = 'https://api.omg.lol/address/cory/statuses/'
|
||||
statusJson = await fetch(statusUrl)
|
||||
.then((response) => response.json())
|
||||
.catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
}
|
||||
|
||||
// artists
|
||||
if ((endpoints && selectedEndpoints.includes('artists')) || !endpoints) {
|
||||
const artistsUrl = `http://ws.audioscrobbler.com/2.0/?method=user.gettopartists&user=cdme_&api_key=${MUSIC_KEY}&limit=8&format=json&period=7day`
|
||||
artistsJson = await fetch(artistsUrl)
|
||||
.then((response) => response.json())
|
||||
.catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
}
|
||||
|
||||
// albums
|
||||
if ((endpoints && selectedEndpoints.includes('albums')) || !endpoints) {
|
||||
const albumsUrl = `http://ws.audioscrobbler.com/2.0/?method=user.gettopalbums&user=cdme_&api_key=${MUSIC_KEY}&limit=8&format=json&period=7day`
|
||||
albumsJson = await fetch(albumsUrl)
|
||||
.then((response) => response.json())
|
||||
.catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
}
|
||||
|
||||
// books
|
||||
if ((endpoints && selectedEndpoints.includes('books')) || !endpoints) {
|
||||
const booksUrl = `${host}/feeds/books`
|
||||
booksJson = await extract(booksUrl).catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
}
|
||||
|
||||
// movies
|
||||
if ((endpoints && selectedEndpoints.includes('movies')) || !endpoints) {
|
||||
const moviesUrl = `${host}/feeds/movies`
|
||||
moviesJson = await extract(moviesUrl).catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
moviesJson.entries = moviesJson.entries.splice(0, 5)
|
||||
}
|
||||
|
||||
// tv
|
||||
if ((endpoints && selectedEndpoints.includes('tv')) || !endpoints) {
|
||||
const tvUrl = `${host}/feeds/tv?slurm=${TV_KEY}`
|
||||
tvJson = await extract(tvUrl).catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
tvJson.entries = tvJson.entries.splice(0, 5)
|
||||
}
|
||||
|
||||
// current track
|
||||
if ((endpoints && selectedEndpoints.includes('currentTrack')) || !endpoints) {
|
||||
const currentTrackUrl = `http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks&user=cdme_&api_key=${MUSIC_KEY}&limit=1&format=json&period=7day`
|
||||
currentTrackJson = await fetch(currentTrackUrl)
|
||||
.then((response) => response.json())
|
||||
.catch((error) => {
|
||||
console.log(error)
|
||||
return {}
|
||||
})
|
||||
}
|
||||
|
||||
const res: {
|
||||
status?: Status
|
||||
artists?: Artists
|
||||
albums?: Albums
|
||||
books?: TransformedRss
|
||||
movies?: TransformedRss
|
||||
tv?: TransformedRss
|
||||
currentTrack?: Tracks
|
||||
} = {}
|
||||
if (statusJson) res.status = statusJson.response.statuses.splice(0, 1)[0]
|
||||
if (artistsJson) res.artists = artistsJson?.topartists.artist
|
||||
if (albumsJson) res.albums = albumsJson?.topalbums.album
|
||||
if (booksJson) res.books = booksJson?.entries
|
||||
if (moviesJson) res.movies = moviesJson?.entries
|
||||
if (tvJson) res.tv = tvJson?.entries
|
||||
if (currentTrackJson) res.currentTrack = currentTrackJson?.recenttracks?.track?.[0]
|
||||
|
||||
// unified response
|
||||
return res
|
||||
}
|
||||
```
|
||||
|
||||
The individual media components of the now page are simple and presentational; `Albums.tsx`, for example:
|
||||
|
||||
```jsx
|
||||
import Cover from '@/components/media/display/Cover'
|
||||
import { Spin } from '@/components/Loading'
|
||||
import { Album } from '@/types/api'
|
||||
|
||||
const Albums = (props: { albums: Album[] }) => {
|
||||
const { albums } = props
|
||||
|
||||
if (!albums) return <Spin className="my-12 flex justify-center" />
|
||||
|
||||
return (
|
||||
<>
|
||||
<h3 className="pt-4 pb-4 text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-2xl sm:leading-10 md:text-4xl md:leading-14">
|
||||
Listening: albums
|
||||
</h3>
|
||||
<div className="grid grid-cols-2 gap-2 md:grid-cols-4">
|
||||
{albums?.map((album) => (
|
||||
<Cover key={album.mbid} media={album} type="album" />
|
||||
))}
|
||||
</div>
|
||||
</>
|
||||
)
|
||||
}
|
||||
|
||||
export default Albums
|
||||
```
|
||||
|
||||
This component and `Artists.tsx` leverage `Cover.tsx`, which renders music-related elements:
|
||||
|
||||
```tsx
|
||||
import { Media } from '@/types/api'
|
||||
import ImageWithFallback from '@/components/ImageWithFallback'
|
||||
import Link from 'next/link'
|
||||
import { ALBUM_DENYLIST } from '@/utils/constants'
|
||||
|
||||
const Cover = (props: { media: Media; type: 'artist' | 'album' }) => {
|
||||
const { media, type } = props
|
||||
const image = (media: Media) => {
|
||||
let img = ''
|
||||
if (type === 'album')
|
||||
img = !ALBUM_DENYLIST.includes(media.name.replace(/\s+/g, '-').toLowerCase())
|
||||
? media.image[media.image.length - 1]['#text']
|
||||
: `/media/artists/${media.name.replace(/\s+/g, '-').toLowerCase()}.jpg`
|
||||
if (type === 'artist')
|
||||
img = `/media/artists/${media.name.replace(/\s+/g, '-').toLowerCase()}.jpg`
|
||||
return img
|
||||
}
|
||||
|
||||
return (
|
||||
<Link
|
||||
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
|
||||
href={media.url}
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
title={media.name}
|
||||
>
|
||||
<div className="relative">
|
||||
<div className="absolute left-0 top-0 h-full w-full rounded-lg border border-primary-500 bg-cover-gradient dark:border-gray-500"></div>
|
||||
<div className="absolute left-1 bottom-2 drop-shadow-md">
|
||||
<div className="px-1 text-xs font-bold text-white">{media.name}</div>
|
||||
<div className="px-1 text-xs text-white">
|
||||
{type === 'album' ? media.artist.name : `${media.playcount} plays`}
|
||||
</div>
|
||||
</div>
|
||||
<ImageWithFallback
|
||||
src={image(media)}
|
||||
alt={media.name}
|
||||
className="rounded-lg"
|
||||
width="350"
|
||||
height="350"
|
||||
/>
|
||||
</div>
|
||||
</Link>
|
||||
)
|
||||
}
|
||||
|
||||
export default Cover
|
||||
```
|
||||
|
||||
All of the components for this page [can be viewed on GitHub](https://github.com/cdransf/coryd.dev/tree/main/components/media). Each one consumes a property from the `loadNowData` response and renders it to the page. The page is also periodically revalidated via an API route that simply calls the same method:
|
||||
|
||||
```ts
|
||||
import loadNowData from '@/lib/now'
|
||||
|
||||
export default async function handler(req, res) {
|
||||
res.setHeader('Cache-Control', 's-maxage=3600, stale-while-revalidate')
|
||||
|
||||
const endpoints = req.query.endpoints
|
||||
const response = await loadNowData(endpoints)
|
||||
res.json(response)
|
||||
}
|
||||
```
|
||||
|
||||
And, with all of that in place, we have a lightly trafficked page that updates itself (with a few exceptions) as I go about my habits of using Last.fm, Trakt, Letterboxd, Oku and so forth.
|
||||
|
||||
[^1]: I know about GraphQL, but we're just going to deal with plain old fetch calls here.
|
||||
[^2]: It's also leveraged on the index view of my site to fetch my status, currently playing track and the books I'm currently reading.
|
src/posts/2023/client-side-webmentions-in-nextjs.md
---
|
||||
title: 'Adding client side webmentions to my Next.js blog'
|
||||
date: 2023-02-18
|
||||
draft: false
|
||||
tags: ['nextjs', 'react', 'web development', 'webmentions', 'indie web']
|
||||
---
|
||||
|
||||
The latest iteration of my website is built on [Next.js](https://nextjs.org), specifically [Timothy Lin](https://github.com/timlrx)'s wonderful [Tailwind/Next.js starter blog](https://github.com/timlrx/tailwind-nextjs-starter-blog).<!-- excerpt --> I've modified it quite a bit, altering the color scheme, dropping components like analytics, comments and a few others while also building out some new pages (like my [now page](https://coryd.dev/now)). As part of this process I wanted to add support for webmentions to the template, integrating mentions from Mastodon, Medium.com and other available sources.
|
||||
|
||||
To kick this off you'll need to log in and establish an account with [webmention.io](https://webmention.io) and [Bridgy](https://brid.gy). The former provides you with a pair of meta tags that collect webmentions; the latter connects your site to social media.[^1]
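The tags from webmention.io look like the following, with `example.com` standing in for your own domain (copy the exact tags from your webmention.io dashboard):

```html
<link rel="webmention" href="https://webmention.io/example.com/webmention" />
<link rel="pingback" href="https://webmention.io/example.com/xmlrpc" />
```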
|
||||
|
||||
Once you've added the appropriate tags from webmention.io, connected your desired accounts to Bridgy and received some mentions on these sites, you should be able to access said mentions via their API. For my purposes (and yours should you choose to take the same approach), this looks like the following Next.js API route:
|
||||
|
||||
```typescript
|
||||
import loadWebmentions from '@/lib/webmentions'
|
||||
|
||||
export default async function handler(req, res) {
|
||||
const target = req.query.target
|
||||
const response = await loadWebmentions(target)
|
||||
res.json(response)
|
||||
}
|
||||
```
|
||||
|
||||
You can see my mentions at the live route [here](https://coryd.dev/api/webmentions).
|
||||
|
||||
I've elected to render mentions of my posts (boosts, in Mastodon's parlance), likes and comments. For boosts, I'm rendering the count, for likes I render the avatar and for mentions I render the comment in full. The component that handles this looks like the following:
|
||||
|
||||
```jsx
|
||||
import siteMetadata from '@/data/siteMetadata'
|
||||
import { Heart, Rocket } from '@/components/icons'
|
||||
import { Spin } from '@/components/Loading'
|
||||
import { useRouter } from 'next/router'
|
||||
import { useJson } from '@/hooks/useJson'
|
||||
import Link from 'next/link'
|
||||
import Image from 'next/image'
|
||||
import { formatDate } from '@/utils/formatters'
|
||||
|
||||
const WebmentionsCore = () => {
|
||||
const { asPath } = useRouter()
|
||||
const { response, error } = useJson(`/api/webmentions?target=${siteMetadata.siteUrl}${asPath}`)
|
||||
const webmentions = response?.children
|
||||
const hasLikes =
|
||||
webmentions?.filter((mention) => mention['wm-property'] === 'like-of').length > 0
|
||||
const hasComments =
|
||||
webmentions?.filter((mention) => mention['wm-property'] === 'in-reply-to').length > 0
|
||||
const boostsCount = webmentions?.filter(
|
||||
(mention) =>
|
||||
mention['wm-property'] === 'repost-of' || mention['wm-property'] === 'mention-of'
|
||||
).length
|
||||
const hasBoosts = boostsCount > 0
|
||||
const hasMention = hasLikes || hasComments || hasBoosts
|
||||
|
||||
if (error) return null
|
||||
if (!response) return <Spin className="my-2 flex justify-center" />
|
||||
|
||||
const Boosts = () => {
|
||||
return (
|
||||
<div className="flex flex-row items-center">
|
||||
<div className="mr-2 h-5 w-5">
|
||||
<Rocket />
|
||||
</div>
|
||||
{` `}
|
||||
<span className="text-sm">{boostsCount}</span>
|
||||
</div>
|
||||
)
|
||||
}
|
||||
|
||||
const Likes = () => (
|
||||
<>
|
||||
<div className="flex flex-row items-center">
|
||||
<div className="mr-2 h-5 w-5">
|
||||
<Heart />
|
||||
</div>
|
||||
<ul className="ml-2 flex flex-row">
|
||||
{webmentions?.map((mention) => {
|
||||
if (mention['wm-property'] === 'like-of')
|
||||
return (
|
||||
<li key={mention['wm-id']} className="-ml-2">
|
||||
<Link
|
||||
href={mention.url}
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
<Image
|
||||
className="h-10 w-10 rounded-full border border-primary-500 dark:border-gray-500"
|
||||
src={mention.author.photo}
|
||||
alt={mention.author.name}
|
||||
width="40"
|
||||
height="40"
|
||||
/>
|
||||
</Link>
|
||||
</li>
|
||||
)
|
||||
})}
|
||||
</ul>
|
||||
</div>
|
||||
</>
|
||||
)
|
||||
|
||||
const Comments = () => {
|
||||
return (
|
||||
<>
|
||||
{webmentions?.map((mention) => {
|
||||
if (mention['wm-property'] === 'in-reply-to') {
|
||||
return (
|
||||
<Link
|
||||
className="border-bottom flex flex-row items-center border-gray-100 pb-4"
|
||||
key={mention['wm-id']}
|
||||
href={mention.url}
|
||||
target="_blank"
|
||||
rel="noopener noreferrer"
|
||||
>
|
||||
<Image
|
||||
className="h-12 w-12 rounded-full border border-primary-500 dark:border-gray-500"
|
||||
src={mention.author.photo}
|
||||
alt={mention.author.name}
|
||||
width="48"
|
||||
height="48"
|
||||
/>
|
||||
<div className="ml-3">
|
||||
<p className="text-sm">{mention.content?.text}</p>
|
||||
<p className="mt-1 text-xs">{formatDate(mention.published)}</p>
|
||||
</div>
|
||||
</Link>
|
||||
)
|
||||
}
|
||||
})}
|
||||
</>
|
||||
)
|
||||
}
|
||||
|
||||
return (
|
||||
<>
|
||||
{hasMention ? (
|
||||
<div className="text-gray-500 dark:text-gray-100">
|
||||
<h4 className="pt-3 text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 md:text-2xl md:leading-10 ">
|
||||
Webmentions
|
||||
</h4>
|
||||
{hasBoosts ? (
|
||||
<div className="pt-2 pb-4">
|
||||
<Boosts />
|
||||
</div>
|
||||
) : null}
|
||||
{hasLikes ? (
|
||||
<div className="pt-2 pb-4">
|
||||
<Likes />
|
||||
</div>
|
||||
) : null}
|
||||
{hasComments ? (
|
||||
<div className="pt-2 pb-4">
|
||||
<Comments />
|
||||
</div>
|
||||
) : null}
|
||||
</div>
|
||||
) : null}
|
||||
</>
|
||||
)
|
||||
}
|
||||
|
||||
export default WebmentionsCore
|
||||
```
|
||||
|
||||
We derive the post URL from the fixed site URL in my site metadata and the path from Next.js' router, concatenating the two and passing the result as the API path to my `useJson` hook, which wraps `useSWR`[^2]:
|
||||
|
||||
```typescript
|
||||
import { useEffect, useState } from 'react'
|
||||
import useSWR from 'swr'
|
||||
|
||||
export const useJson = (url: string, props?: any) => {
|
||||
const [response, setResponse] = useState<any>({})
|
||||
|
||||
const fetcher = (url: string) =>
|
||||
fetch(url)
|
||||
.then((res) => res.json())
|
||||
.catch()
|
||||
const { data, error } = useSWR(url, fetcher, { fallbackData: props, refreshInterval: 30000 })
|
||||
|
||||
useEffect(() => {
|
||||
setResponse(data)
|
||||
}, [data, setResponse])
|
||||
|
||||
return {
|
||||
response,
|
||||
error,
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The `target` param narrows the returned mentions to those pertinent to the current post. Once we've received the appropriate response from the service, we evaluate the data to determine what types of mentions we have, construct JSX components to display them and conditionally render them based on the presence of the appropriate mention data.
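The `wm-property` checks in the component reduce to a simple count-by-type. A standalone sketch, with the mention shape trimmed down to the one field used here:

```typescript
// Counts webmentions whose 'wm-property' matches any of the given types,
// mirroring the hasLikes/hasComments/boostsCount checks in WebmentionsCore.
type Mention = { 'wm-property': string }

const countByProperty = (mentions: Mention[], ...props: string[]): number =>
  mentions.filter((mention) => props.includes(mention['wm-property'])).length

const mentions: Mention[] = [
  { 'wm-property': 'like-of' },
  { 'wm-property': 'repost-of' },
  { 'wm-property': 'in-reply-to' },
  { 'wm-property': 'mention-of' },
]

const boostsCount = countByProperty(mentions, 'repost-of', 'mention-of')
const hasLikes = countByProperty(mentions, 'like-of') > 0
```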
|
||||
|
||||
The `WebmentionsCore` component is dynamically loaded into each post using the following parent component:
|
||||
|
||||
```jsx
|
||||
import dynamic from 'next/dynamic'
|
||||
import { Spin } from '@/components/Loading'
|
||||
|
||||
const Webmentions = dynamic(() => import('@/components/webmentions/WebmentionsCore'), {
|
||||
ssr: false,
|
||||
loading: () => <Spin className="my-2 flex justify-center" />,
|
||||
})
|
||||
|
||||
export default Webmentions
|
||||
```
|
||||
|
||||
The final display looks like this:
|
||||
|
||||
<img src="https://files.coryd.dev/v/NG8lHj24OsJilx7QuxWO+" alt="Example webmentions" style="width:100%;height:auto;margin:.5em 0" />
|
||||
|
||||
[^1]: For my purposes, social media is GitHub, Mastodon and Medium. I've used the rest at various points and no longer have an interest in them for myriad reasons.
|
||||
[^2]: I've discussed this all a bit more in [this post](https://coryd.dev/blog/simple-api-fetch-hooks-with-swr).
|
|
---
|
||||
date: '2023-02-17'
|
||||
title: 'Workflows: handling inbound email on Fastmail with regular expressions (now featuring ChatGPT)'
|
||||
draft: false
|
||||
tags: ['email', 'fastmail', 'regular expressions', 'workflows', 'chatgpt']
|
||||
---
|
||||
|
||||
I've been using Fastmail for years now and have explored a number of different approaches to handling mail. I've approached it by creating rules targeting lists of top level domains, I've gone with no rules at all and a heavy-handed approach to unsubscribing from messages (operating under the idea that _everything_ warrants being seen and triaged) and I've even used HEY[^1].<!-- excerpt -->
|
||||
|
||||
For now, I've approached filtering my mail by applying regular expressions to reasonably broad categories of incoming mail[^2]. My thinking with this approach is that it will scale better over the long term, applying heuristics to common phrases and patterns in incoming mail without the need to apply rules to senders on a per-address or per-domain basis.
|
||||
|
||||
<img src="https://files.coryd.dev/j/Jd6NQcAVD3oU4gkgZMpD+" alt="A diagram of my Fastmail workflow" style="width:100%;height:auto;margin:.5em 0" />
|
||||
|
||||
## Alias-specific rules
|
||||
|
||||
I have four aliases that I regularly provide to different services. One is for newsletters and routes them to [Readwise's Reader app](https://readwise.io/read), another routes directly to my saved articles in the same app, another routes different messages to my [Things](https://culturedcode.com/things/) inbox and a final one serves as the recovery email on my grandfather's accounts (in the event anything goes awry).
|
||||
|
||||
These work by checking that the `To/CC/BCC` matches the appropriate alias before filing them off to `Archive/Newsletters`, `Archive/Saves` or `Notifications`, respectively. These folders are configured to auto-purge their contents at regular intervals as they are typically consumed in the context of the application that they're forwarded to (and are only filed into folders for reference in the event something goes wrong in said applications).
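A sketch of what one of these alias rules might look like, using the same JSON shape as the rules shown later in this post (the `lookIn` field value and the alias address here are assumptions, not my actual configuration):

```json
"conditions": [
  {
    "lookIn": "toCcBcc",
    "lookFor": "newsletters@example.com",
    "lookHow": "is"
  }
],
```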
|
||||
|
||||
## A quick failsafe
|
||||
|
||||
In the event I've failed to tune a regular expression properly, or an actual person triggers a match, I have a rule, executed after the aforementioned alias-specific rules, that stops all rule evaluations for _any_ address in my contacts.
|
||||
|
||||
**Update:** I've run every regular expression and glob pattern I apply to my messages through ChatGPT to see if it could simplify, combine and otherwise improve them (namely reducing false positives). This has worked quite well (outside of the time required to coax ChatGPT to the best possible answer). Further, my deliveries rule that forwards to Parcel now also requires both a subject and body match before forwarding.
|
||||
|
||||
[I also have a rule containing regular expressions that skips evaluations for login PIN codes, meeting/appointment reminders and common security notices](https://pastes.coryd.dev/mail-regexes-alerts/markup).
|
||||
|
||||
```json
|
||||
{
|
||||
"conditions": [
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "(?i)\\b(PIN|Verify|Verification|Confirm|One-Time|Single(-|\\s)Use)\\b.*?(passcode|number|code.*$)",
|
||||
"lookIn": "subject"
|
||||
},
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "(?i)^.*upcoming (appointment|visit).*",
|
||||
"lookIn": "subject"
|
||||
},
|
||||
{
|
||||
"lookFor": "(?i)^.*new.*(sign(in|-in|ed)|(log(in|-in|ged)))",
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookFor": "(?i)^.*(meeting|visit|appointment|event).*\\b(reminder|notification)",
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookFor": "(?i)^.*verify.*(device|email|phone)",
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "(?i)^.*Apple.*(ID was used to sign in)",
|
||||
"lookIn": "subject"
|
||||
},
|
||||
{
|
||||
"lookFor": "(?i)^.*(computer|phone|device).*(added)",
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "(?i)^2FA.*(turned on)",
|
||||
"lookIn": "subject"
|
||||
},
|
||||
{
|
||||
"lookIn": "subject",
|
||||
"lookFor": "(?i)^.*confirm.*(you)",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookFor": "(?i)^.*you.*((log|sign)\\s?-?\\s?in).*$",
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookHow": "is",
|
||||
"lookFor": "notifications@savvycal.com",
|
||||
"lookIn": "fromEmail"
|
||||
},
|
||||
{
|
||||
"lookIn": "subject",
|
||||
"lookFor": "\\b(?:RSVP|invitation|event|attend)\\b",
|
||||
"lookHow": "regexp"
|
||||
}
]
}
```
|
||||
|
||||
## Mapping categories as folders
|
||||
|
||||
I've tailored these rules to align with folders on a per-topic basis. I have a broad `Financial` folder for things like receipts, bank statements and bills. That folder contains a few granular subfolders like `Deliveries`, `Media`, `Medical`, `Promotions` and so forth. All multi-step rules are set to filter messages when `any` of the listed criteria matches.
|
||||
|
||||
The top level `Financial` rule [looks like this](https://pastes.coryd.dev/mail-regexes-financial/markup).
|
||||
|
||||
```json
|
||||
"conditions": [
|
||||
{
|
||||
"lookFor": "([Ee]quifax.*$|[Ee]xperian.*$|[Tt]ransunion.*$|[Aa]mazon[Kk]ids.*$|[Vv]isa[Pp]repaid[Pp]rocessing.*$|americanexpress.*$|paddle.*$|instacart.*$|^.*discover.*$|^.*aaa.*$)",
|
||||
"lookIn": "fromEmail",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookFor": "([Gg]andi.*$|[Hh]over.*$|[Tt]ucows.*$|[Gg]o[Dd]addy.*$|[Nn]ame[Cc]heap.*$|[Vv]enmo.*$|[Pp]ay[Pp]al.*$|[Aa][Cc][Ii]payonline.*$|[Uu]se[Ff]athom.*$)",
|
||||
"lookIn": "fromEmail",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "(?i)you(?:r)?[\\s-]*(?:pre[\\s-]?order|pre[\\s-]?order(?:ed))",
|
||||
"lookIn": "body"
|
||||
},
|
||||
{
|
||||
"lookIn": "toCcBccName",
|
||||
"lookFor": "*[Aa][Pp][Pp][Ll][Ee] [Cc][Aa][Rr][Dd]*[Ss][Uu][Pp][Pp][Oo][Rr][Tt]*",
|
||||
"lookHow": "glob"
|
||||
},
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookIn": "subject",
|
||||
"lookFor": "\\b(?i)(receipt|bill|invoice|transaction|statement|payment|order|subscription|authorized|booking|renew(al|ing)?|expir(e|ed|ing)?|deposit|withdrawal|purchased)\\b.*"
|
||||
},
|
||||
{
|
||||
"lookFor": "(?i)\\b(receipt|bill|invoice|transaction|statement|payment|order|subscription|authorized|booking|renew(al|ing)?|expir(e|ed|ing)?|deposit|withdrawal|purchased|(itunes|apple) store|credit (score|report)|manage (account|loan))\\b.*",
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp"
|
||||
},
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "(?i)\\b(gift (card|certificate)|zelle|new plan|autopay|reward certificate)\\b.*",
|
||||
"lookIn": "subject"
|
||||
}
|
||||
],
|
||||
```
|
||||
|
||||
`Deliveries` follow a similar pattern with rule sets intended to capture messages with package tracking information or other details. I kickstarted this rule by, naturally, referencing [this answer from StackOverflow](https://stackoverflow.com/a/5024011).
|
||||
|
||||
All of the regular expressions contained in this answer are matched against the `Body` of inbound messages before being forwarded to [Parcel Email](https://parcelapp.net/help/parcel-email.html)[^3]. These rules are supplemented by a few edge case rules targeted at the `Subject` field:
|
||||
|
||||
```json
|
||||
"conditions": [
|
||||
{
|
||||
"lookHow": "regexp",
|
||||
"lookIn": "body",
|
||||
"lookFor": "\\b(?:1Z[\\dA-Z]{16}|[\\d]{20}|[\\d]{22}|[\\d]{26}|[\\d]{15}|E\\D{1}[\\d]{9}|[\\d]{9}[ ]?[\\d]{4})\\b"
|
||||
},
|
||||
{
|
||||
"lookIn": "subject",
|
||||
"lookHow": "regexp",
|
||||
"lookFor": "^.*[Aa] shipment (from|to).*([Ww]as|[Hh]as|is on the way).*?$"
|
||||
}
|
||||
],
|
||||
```
|
||||
|
||||
Finally, I have a rule intended to catch anything that falls through the cracks[^4]:
|
||||
|
||||
```json
"conditions": [
  {
    "lookFor": "usps|fedex|narvar|shipment-tracking|getconvey",
    "lookHow": "regexp",
    "lookIn": "fromEmail"
  },
  {
    "lookFor": "?(ed*x delivery manager|*ed*x.com|tracking*updates*)",
    "lookHow": "glob",
    "lookIn": "fromName"
  },
  {
    "lookFor": "(?i)^.*package (has been?|was) delivered.*$",
    "lookHow": "regexp",
    "lookIn": "subject"
  }
],
```
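The delivered-notice subject condition works as-is in Python, since its `(?i)` flag sits at the start of the pattern:

```python
import re

# The delivered-notice subject condition above, verbatim.
DELIVERED_RE = re.compile(r"(?i)^.*package (has been?|was) delivered.*$")

print(bool(DELIVERED_RE.match("Your package has been delivered")))  # True
print(bool(DELIVERED_RE.match("Your package ships tomorrow")))      # False
```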
My `medical` and `media` rules follow a basic pattern that could be approximated using a per-line sender TLD match[^5]:
```json
"conditions": [
  {
    "lookFor": "^(?i:Disneyplus.*$|Netflix.*$|^.*hulu.*$|HBOmax.*$|MoviesAnywhere.*$|iTunes.*$|7digital.*$|Bandcamp.*$|Roku.*$|Plex.*$|Peacock.*$)",
    "lookHow": "regexp",
    "lookIn": "fromEmail"
  }
],
```
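The scoped `(?i:...)` group behaves the same way in Python, so this one can be checked verbatim (the sender addresses below are invented for illustration):

```python
import re

# The media sender condition above, verbatim; (?i:...) is a scoped
# case-insensitive group, which Python supports as well.
MEDIA_RE = re.compile(
    r"^(?i:Disneyplus.*$|Netflix.*$|^.*hulu.*$|HBOmax.*$|MoviesAnywhere.*$"
    r"|iTunes.*$|7digital.*$|Bandcamp.*$|Roku.*$|Plex.*$|Peacock.*$)"
)

print(bool(MEDIA_RE.match("netflix@account.netflix.com")))  # True
print(bool(MEDIA_RE.match("no-reply@example.com")))         # False
```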
I'd recommend paring this down to match whichever `media` and `medical` providers email you.

This pattern of filtering and filing continues for several additional categories.

**Financial/Tickets**
```json
"conditions": [
  {
    "lookFor": "\\b(?i)(concert|event|show|performance|ticket|admission|venue|registration)\\b",
    "lookHow": "regexp",
    "lookIn": "subject"
  }
],
```
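Same drill for the tickets pattern, again with the inline `(?i)` hoisted into `re.IGNORECASE` for Python (subjects invented):

```python
import re

# The tickets keyword condition above, adapted for Python's flag rules.
TICKET_RE = re.compile(
    r"\b(concert|event|show|performance|ticket|admission|venue|registration)\b",
    re.IGNORECASE,
)

print(bool(TICKET_RE.search("Your tickets for Saturday's concert")))  # True
print(bool(TICKET_RE.search("Weekly team sync")))                     # False
```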
**Travel (non-forwarding)**
```json
"conditions": [
  {
    "lookHow": "regexp",
    "lookFor": "\\b(?i)(hotel|reservation|booking|dining|restaurant|travel)(s)?( |-)?(confirmation|reservations?|bookings?|details)\\b",
    "lookIn": "subject"
  },
  {
    "lookFor": "\\b(?i)(uber|lyft|rideshare)(s)?( |-)?(receipt|confirmation|ride summary|your ride with)\\b",
    "lookHow": "regexp",
    "lookIn": "subject"
  }
],
```
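The first travel condition can be exercised the same way (inline `(?i)` moved into `re.IGNORECASE`; subjects invented):

```python
import re

# The hotel/booking subject condition above, adapted for Python's flag rules.
TRAVEL_RE = re.compile(
    r"\b(hotel|reservation|booking|dining|restaurant|travel)(s)?( |-)?"
    r"(confirmation|reservations?|bookings?|details)\b",
    re.IGNORECASE,
)

print(bool(TRAVEL_RE.search("Your hotel reservation details")))  # True
print(bool(TRAVEL_RE.search("Team offsite agenda")))             # False
```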
**Travel (forwarding)**

These rules are designed to capture confirmations sent by Southwest; matching messages are sent off to [Flighty](https://www.flightyapp.com) before being sorted.
```json
"conditions": [
  {
    "lookIn": "subject",
    "lookHow": "regexp",
    "lookFor": "\\b(?i)(flight|confirmation|you're going to).*\\b(reservation|on)\\b"
  }
],
```
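And the Southwest-style condition, with the same `(?i)` adjustment (the sample subjects are made up):

```python
import re

# The flight confirmation subject condition above, adapted for Python.
FLIGHT_RE = re.compile(
    r"\b(flight|confirmation|you're going to).*\b(reservation|on)\b",
    re.IGNORECASE,
)

print(bool(FLIGHT_RE.search("Flight reservation confirmed")))  # True
print(bool(FLIGHT_RE.search("Your weekly digest")))            # False
```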
**Annoying customer support follow-ups**
```json
"conditions": [
  {
    "lookHow": "glob",
    "lookFor": "*customer*?(are|uccess|upport)",
    "lookIn": "fromName"
  }
],
```
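Fastmail's glob syntax here uses `?(a|b)`, an optional alternation similar to shell extglob, which Python's `fnmatch` doesn't support. A rough regex translation of my own (note that because the suffix is optional and preceded by `*`, this is effectively a case-insensitive "customer" substring check):

```python
import re

# Hand-translated regex equivalent of the glob *customer*?(are|uccess|upport);
# this is my approximation, not Fastmail's own translation.
SUPPORT_RE = re.compile(r".*customer.*(?:are|uccess|upport)?\Z", re.IGNORECASE)

print(bool(SUPPORT_RE.match("Acme Customer Care")))  # True
print(bool(SUPPORT_RE.match("Jane Doe")))            # False
```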
**[Promotional messages (that you haven't unsubscribed from)](https://pastes.coryd.dev/mail-regexes-promotions/markup)**
```json
"conditions": [
  {
    "lookHow": "regexp",
    "lookIn": "fromEmail",
    "lookFor": "(^.*store-news.*$|^.*axxess.*$)(\\b.*?|$)"
  },
  {
    "lookFor": "^(?=.*\\b(?i)(final offer|limited time|last chance|black friday|cyber monday|holiday|christmas|free shipping|send (gift|present))\\b).*\\b(?i)(discount|save|\\d+% off|free)\\b",
    "lookIn": "subject",
    "lookHow": "regexp"
  },
  {
    "lookIn": "body",
    "lookFor": "\\b\\d{1,2}(?:\\.\\d+)?% off\\b",
    "lookHow": "regexp"
  },
  {
    "lookIn": "subject",
    "lookFor": "\\b(?:new|updated|special|limited-time)\\s+(?:offers|deals|discounts|promotions|sales)\\b",
    "lookHow": "regexp"
  }
],
```
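The lookahead-based subject condition translates to Python once both inline `(?i)` flags are hoisted into `re.IGNORECASE`; it requires an urgency/seasonal phrase *and* a discount term (sample subjects invented):

```python
import re

# The promo subject condition above: a lookahead for an urgency phrase,
# followed by a match on a discount term, adapted for Python's flag rules.
PROMO_RE = re.compile(
    r"^(?=.*\b(final offer|limited time|last chance|black friday|cyber monday"
    r"|holiday|christmas|free shipping|send (gift|present))\b)"
    r".*\b(discount|save|\d+% off|free)\b",
    re.IGNORECASE,
)

print(bool(PROMO_RE.match("Last chance to save 20% sitewide")))  # True
print(bool(PROMO_RE.match("Save 20% sitewide")))                 # False
```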
**Social networking messages**

These I've left as a simple list: mail from `any` of the included top-level domains is filed away. I don't belong to many social networks, and the list changes fairly infrequently.

**DMARC notifications (depending on how you have your policy record configured)**
```json
"conditions": [
  {
    "lookIn": "subject",
    "lookHow": "regexp",
    "lookFor": "((^.*dmarc.*$)(\\b.*?|$))"
  },
  {
    "lookIn": "fromEmail",
    "lookHow": "regexp",
    "lookFor": "((^.*dmarc.*$)(\\b.*?|$))"
  }
],
```
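Worth noting: the `(\b.*?|$)` tail can always match empty, so this pattern reduces to a case-sensitive `dmarc` substring check; a subject containing only uppercase `DMARC` won't match unless you add `(?i)`. Verifiable in Python:

```python
import re

# The DMARC condition above, verbatim. The trailing group is vacuous, so this
# is equivalent to checking for the lowercase substring "dmarc".
DMARC_RE = re.compile(r"((^.*dmarc.*$)(\b.*?|$))")

print(bool(DMARC_RE.search("dmarcreport@microsoft.com")))  # True
print(bool(DMARC_RE.search("Weekly newsletter")))          # False
```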
That covers _most_ of what I use to manage my mail (outside of anything particularly personal). I fully expect that the regular expressions I'm using could stand to be refined, and I plan on continuing to do just that. That said, things have worked better than I expected so far, and false positives/miscategorizations have been infrequent.

If you have any questions or suggestions, I'm all ears. Feel free to [email me](mailto:hi@coryd.dev) or ping me on [Mastodon]().
[^1]: Before, well, _all that_.

[^2]: Fastmail has some helpful tips on regular expression rules [here](https://www.fastmail.help/hc/en-us/articles/360060591193-Rules-using-regular-expressions).

[^3]: Fun fact: this is, apparently, no longer being actively developed — presumably because email, as we all know, is an absolute pleasure to parse and deal with.

[^4]: This rule doesn't forward over to Parcel, as it typically captures secondary notices that either don't contain or duplicate the original tracking info.

[^5]: I know, I called this inefficient earlier.