import posts, fix styles

This commit is contained in:
Cory Dransfeldt 2023-03-11 20:20:57 -08:00
parent 12584cf706
commit 9e00d9c09e
32 changed files with 2665 additions and 293 deletions

View file

@@ -1,7 +1,7 @@
<!doctype html>
<html lang="en">
<head>
<title>{{ title }}</title>
<title>{{ title }} • {{site.title}}</title>
<meta charset="utf-8">
<meta name='viewport' content='width=device-width'>
<meta property="og:title" content="{{ title }}" />
@@ -29,7 +29,7 @@
}
</script>
</head>
<body class="dark:text-white dark:bg-gray-900 font-sans text-gray-800 dark:text-gray-50">
<body class="dark:text-white dark:bg-gray-900 font-sans text-gray-800">
{{ content }}
<script>
document.getElementById("toggleDarkMode").addEventListener("click", function() {

View file

@@ -7,21 +7,23 @@
<button class="py-2 pr-4 cursor-not-allowed disabled:opacity-50" disabled>Previous</button>
{% endif %}
{% for pageEntry in pagination.pages %}
{% if page.url == pagination.hrefs[forloop.index0] %}
<a href="{{ pagination.hrefs[forloop.index0] }}" aria-current="page">
<button class="w-8 h-8 rounded-full text-white dark:text-gray-900 bg-primary-400 hover:bg-primary-500 dark:hover:bg-primary-300">
{{ forloop.index }}
</button>
</a>
{% else %}
<a href="{{ pagination.hrefs[forloop.index0] }}">
<button class="py-2 px-4">
{{ forloop.index }}
</button>
</a>
{% endif %}
{% endfor %}
<div class="flex flex-row items-center">
{% for pageEntry in pagination.pages %}
{% if page.url == pagination.hrefs[forloop.index0] %}
<a href="{{ pagination.hrefs[forloop.index0] }}" aria-current="page">
<button class="w-8 h-8 rounded-full text-white dark:text-gray-900 bg-primary-400 hover:bg-primary-500 dark:hover:bg-primary-300">
{{ forloop.index }}
</button>
</a>
{% else %}
<a href="{{ pagination.hrefs[forloop.index0] }}">
<button class="py-2 px-4">
{{ forloop.index }}
</button>
</a>
{% endif %}
{% endfor %}
</div>
{% if pagination.href.next %}

View file

@@ -5,17 +5,19 @@ layout: main
{% include "header.liquid" %}
<h2 class="text-xl md:text-2xl font-black leading-tight dark:text-gray-200 pt-12">{{title}}</h2>
<div class="mt-2 text-sm mb-4">
<em>{{ date | date: "%m.%d.%Y" }}</em> • {% for tag in tags %}
{% if tag != "posts" %}
<a href="/tags/{{ tag }}" class="no-underline">
<span class="post-tag">{{ tag }}</span>
</a>
{% endif %}
{% endfor %}
<div class="h-14 flex items-center text-sm">
<span>{{ date | date: "%m.%d.%Y" }}</span>
<span class="mx-1">•</span>
<span class="inline-flex flex-row">
{% for tag in tags %} {% if tag != "posts" %}
<a href="/tags/{{ tag }}" class="font-normal no-underline">
<span class="post-tag">{{ tag }}</span>
</a>
{% endif %} {% endfor %}
</span>
</div>
<div class="prose dark:prose-invert hover:prose-a:text-blue-500 max-w-full">
<div class="prose dark:prose-invert hover:prose-a:text-blue-500 max-w-full text-gray-800 dark:text-white">
{{ content }}
</div>

View file

@@ -1,6 +1,7 @@
---
layout: default
title: Blog
templateEngineOverride: liquid,md
pagination:
data: collections.posts
size: 10
@@ -9,7 +10,7 @@ pagination:
---
{% for post in pagination.items %} {% if post.data.published %}
<div class="mb-8 border-b border-gray-200 pb-8 dark:border-gray-700">
<div class="mb-8 border-b border-gray-200 pb-8 text-gray-800 dark:border-gray-700 dark:text-white">
<a class="no-underline" href="{{ post.url }}"
><h2
class="m-0 text-xl font-black leading-tight tracking-normal dark:text-gray-200 md:text-2xl"
@@ -17,15 +18,18 @@ pagination:
{{ post.data.title }}
</h2>
</a>
<div class="mt-2 text-sm">
<em>{{ post.date | date: "%m.%d.%Y" }}</em> • {% for tag in post.data.tags %} {% if tag !=
"posts" %}
<a href="/tags/{{ tag }}" class="font-normal no-underline">
<div class="post-tag">{{ tag }}</div>
</a>
{% endif %} {% endfor %}
<div class="flex h-14 items-center text-sm">
<span>{{ post.date | date: "%m.%d.%Y" }}</span>
<span class="mx-1">•</span>
<span class="inline-flex flex-row">
{% for tag in post.data.tags %} {% if tag != "posts" %}
<a href="/tags/{{ tag }}" class="font-normal no-underline">
<span class="post-tag">{{ tag }}</span>
</a>
{% endif %} {% endfor %}
</span>
</div>
<p class="mt-4">{{ post.data.post_excerpt }}...</p>
<p class="mt-0">{{ post.data.post_excerpt }}</p>
<div class="mt-4 flex items-center justify-between">
<a class="flex-none font-normal no-underline" href="{{ post.url }}">Read more &rarr;</a>
</div>

View file

@@ -1,124 +0,0 @@
---
title: Flutter
date: 2020-07-01
tags:
- android
- flutter
---
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
<!-- excerpt -->
# h1 Heading
## h2 Heading
### h3 Heading
#### h4 Heading
##### h5 Heading
###### h6 Heading
## Horizontal Rules
---
## Emphasis
**This is bold text**
**This is bold text**
_This is italic text_
_This is italic text_
~~Strikethrough~~
## Blockquotes
> Blockquotes can also be nested...
>
> > ...by using additional greater-than signs right next to each other...
> >
> > > ...or with spaces between arrows.
## Lists
Unordered
- Create a list by starting a line with `+`, `-`, or `*`
- Sub-lists are made by indenting 2 spaces:
- Marker character change forces new list start:
- Ac tristique libero volutpat at
* Facilisis in pretium nisl aliquet
- Nulla volutpat aliquam velit
- Very easy!
Ordered
1. Lorem ipsum dolor sit amet
2. Consectetur adipiscing elit
3. Integer molestie lorem at massa
4. You can use sequential numbers...
5. ...or keep all the numbers as `1.`
Start numbering with offset:
57. foo
1. bar
## Code
Inline `code`
Indented code
// Some comments
line 1 of code
line 2 of code
line 3 of code
Block code "fences"
```
Sample text here...
```
Syntax highlighting
```js
var foo = function (bar) {
return bar++
}
console.log(foo(5))
```
## Tables
| Option | Description |
| ------ | ------------------------------------------------------------------------- |
| data | path to data files to supply the data that will be passed into templates. |
| engine | engine to be used for processing templates. Handlebars is the default. |
| ext | extension to be used for dest files. |
Right aligned columns
| Option | Description |
| -----: | ------------------------------------------------------------------------: |
| data | path to data files to supply the data that will be passed into templates. |
| engine | engine to be used for processing templates. Handlebars is the default. |
| ext | extension to be used for dest files. |
## Links
[link text](http://dev.nodeca.com)
[link with title](http://nodeca.github.io/pica/demo/ 'title text!')
Autoconverted link https://github.com/nodeca/pica (enable linkify to see)

View file

@@ -1,125 +0,0 @@
---
title: Kotlin
date: 2020-07-01
# published: false
tags:
- kotlin
- android
---
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum
<!-- excerpt -->
{% asset_img 'mailbox.jpg' 'mailbox' %}
# h1 Heading
## h2 Heading
### h3 Heading
#### h4 Heading
##### h5 Heading
###### h6 Heading
## Horizontal Rules
___
## Emphasis
**This is bold text**
__This is bold text__
*This is italic text*
_This is italic text_
~~Strikethrough~~
## Blockquotes
> Blockquotes can also be nested...
>> ...by using additional greater-than signs right next to each other...
> > > ...or with spaces between arrows.
## Lists
Unordered
+ Create a list by starting a line with `+`, `-`, or `*`
+ Sub-lists are made by indenting 2 spaces:
- Marker character change forces new list start:
* Ac tristique libero volutpat at
+ Facilisis in pretium nisl aliquet
- Nulla volutpat aliquam velit
+ Very easy!
Ordered
1. Lorem ipsum dolor sit amet
2. Consectetur adipiscing elit
3. Integer molestie lorem at massa
1. You can use sequential numbers...
1. ...or keep all the numbers as `1.`
Start numbering with offset:
57. foo
1. bar
## Code
Inline `code`
Indented code
// Some comments
line 1 of code
line 2 of code
line 3 of code
Block code "fences"
```
Sample text here...
```
Syntax highlighting
``` js
var foo = function (bar) {
return bar++;
};
console.log(foo(5));
```
## Tables
| Option | Description |
| ------ | ----------- |
| data | path to data files to supply the data that will be passed into templates. |
| engine | engine to be used for processing templates. Handlebars is the default. |
| ext | extension to be used for dest files. |
Right aligned columns
| Option | Description |
| ------:| -----------:|
| data | path to data files to supply the data that will be passed into templates. |
| engine | engine to be used for processing templates. Handlebars is the default. |
| ext | extension to be used for dest files. |
## Links
[link text](http://dev.nodeca.com)
[link with title](http://nodeca.github.io/pica/demo/ "title text!")
Autoconverted link https://github.com/nodeca/pica (enable linkify to see)

View file

@@ -0,0 +1,33 @@
---
title: 2021 reading list
date: '2021-03-21'
draft: false
tags: ['reading']
summary: I've been working on making reading a habit again for the past few years (my streak in books is currently 383 days).
---
I've been working on making reading a habit again for the past few years (my streak in books is currently 383 days).<!-- excerpt --> Here's where I'm at for 2021 so far:
**Finished**
- [Kill Switch: The Rise of the Modern Senate and the Crippling of American Democracy](https://www.harvard.com/book/kill_switch_the_rise_of_the_modern_senate_and_the_crippling_of_american_dem/), by Adam Jentleson
- [Working in Public: The Making and Maintenance of Open Source Software](https://blas.com/working-in-public/), by Nadia Eghbal
- [Let My People Go Surfing](https://www.patagonia.com/product/let-my-people-go-surfing-revised-paperback-book/BK067.html), by Yvon Chouinard
- [The Responsible Company](https://www.patagonia.com/product/the-responsible-company-what-weve-learned-from-patagonias-first-forty-years-paperback-book/BK233.html), by Yvon Chouinard & Vincent Stanley
- [Dark Mirror](https://www.penguinrandomhouse.com/books/316047/dark-mirror-by-barton-gellman/), by Barton Gellman
- [Get Together](https://gettogether.world/), by Bailey Richardson, Kevin Huynh & Kai Elmer Sotto
- [Zucked](https://www.penguinrandomhouse.com/books/598206/zucked-by-roger-mcnamee/), by Roger McNamee
- [Fentanyl, Inc.](https://groveatlantic.com/book/fentanyl-inc/), by Ben Westhoff
- [A Promised Land](https://obamabook.com/), by Barack Obama
**In progress**
- [This Is How They Tell Me the World Ends](https://www.bloomsbury.com/us/this-is-how-they-tell-me-the-world-ends-9781635576061/), by Nicole Perlroth
- [Revelation Space](http://www.alastairreynolds.com/release/revelation-space/), by Alastair Reynolds
**Next up**
- [JavaScript for Impatient Programmers](https://exploringjs.com/impatient-js/), by Dr. Axel Rauschmayer
- [Deep JavaScript: Theory and Techniques](https://exploringjs.com/deep-js/), by Dr. Axel Rauschmayer
- [Don't Think of an Elephant!](https://georgelakoff.com/books/dont_think_of_an_elephant_know_your_values_and_frame_the_debatethe_essential_guide_for_progressives-119190455949080/), by George Lakoff
- [The Assassination of Fred Hampton](https://www.amazon.com/Assassination-Fred-Hampton-Chicago-Murdered/dp/1569767092), by Jeffrey Haas

View file

@@ -0,0 +1,50 @@
---
title: '2022 reading list'
date: '2022-04-03'
draft: false
tags: ['reading']
summary: "I'm still plugging away with my reading habit and my streak is now at 772 days."
---
I'm still plugging away with my reading habit and my streak is now at 772 days.<!-- excerpt --> Here's where I'm at for 2022 so far:
**Finished**
- [The Extended Mind by Annie Murphy Paul](https://oku.club/book/the-extended-mind-by-annie-murphy-paul-Mzlrf)
- [Drive by James S. A. Corey](https://oku.club/book/drive-by-james-s-a-corey-DXapB)
- [MBS by Ben Hubbard](https://oku.club/book/mbs-by-ben-hubbard-HTrlr)
- [Putin's People by Catherine Belton](https://oku.club/book/putins-people-by-catherine-belton-cHBSw)
- [The Sins of Our Fathers by James S. A. Corey](https://oku.club/book/the-sins-of-our-fathers-by-james-s-a-corey-HKXjt)
- [The Complete Redux Book by Ilya Gelman and Boris Dinkevich](https://leanpub.com/redux-book)
- [Off the Edge by Kelly Weill](https://oku.club/book/off-the-edge-by-kelly-weill-SKujn)
- [The Cryptopians by Laura Shin](https://oku.club/book/the-cryptopians-by-laura-shin-S43ey)
- [The Intersectional Environmentalist by Leah Thomas](https://oku.club/book/the-intersectional-environmentalist-by-leah-thomas-3o8nH)
- [The Compatriots by Andrei Soldatov](https://oku.club/book/the-compatriots-by-andrei-soldatov-UMhCz)
- [The Wretched of the Earth by Frantz Fanon](https://oku.club/book/the-wretched-of-the-earth-by-frantz-fanon-8On3n)
- [Lords of Chaos by Michael Moynihan](https://oku.club/book/lords-of-chaos-by-michael-moynihan-TQeVA)
- [Going Clear by Lawrence Wright](https://oku.club/book/going-clear-by-lawrence-wright-ChtJe)
- [Blitzed by Norman Ohler](https://oku.club/book/blitzed-by-norman-ohler-CZnyf)
- [Paradise by Lizzie Johnson](https://oku.club/book/paradise-by-lizzie-johnson-BHfRA)
- [Pedagogy of the Oppressed by Paulo Freire](https://oku.club/book/pedagogy-of-the-oppressed-by-paulo-freire-nGgoW)
- [Missoula by Jon Krakauer](https://oku.club/book/missoula-by-jon-krakauer-ggUIz)
- [Free by Lea Ypi](https://oku.club/book/free-by-lea-ypi-k3V1u)
- [Reign of Terror by Spencer Ackerman](https://oku.club/book/reign-of-terror-by-spencer-ackerman-vNJMb)
- [Narconomics by Tom Wainwright](https://oku.club/book/narconomics-by-tom-wainwright-qRrxi)
- [Capitalist Realism by Mark Fisher](https://oku.club/book/capitalist-realism-by-mark-fisher-Lq4Gm)
- [An Ugly Truth by Sheera Frenkel](https://oku.club/book/an-ugly-truth-by-sheera-frenkel-RxLoN)
- [Sellout by Dan Ozzi](https://oku.club/book/sellout-by-dan-ozzi-wXvCV)
- [Will by Will Smith and Mark Manson](https://oku.club/book/will-by-will-manson-smith-mark-YfBE1)
**In progress**
- [Rotting Ways to Misery by Markus Makkonen](https://oku.club/book/rotting-ways-to-misery-by-markus-makkonen-MPt17)
- [Absolution Gap by Alastair Reynolds](https://oku.club/book/absolution-gap-by-alastair-reynolds-RHAFH)
- [Moneyland by Oliver Bullough, Marianne Palm](https://oku.club/book/moneyland-by-oliver-bullough-s9wvO)
**Next up**
- [Miles by Miles Davis](https://oku.club/book/miles-by-miles-davis-UG9m7)
- [The Nineties by Chuck Klosterman](https://oku.club/book/the-nineties-by-chuck-klosterman-QNgHC)
- [Old Man's War by John Scalzi](https://oku.club/book/old-mans-war-by-john-scalzi-H7UHv)
I've been listening to podcasts again as well, so I'll have to see how that impacts my pacing and reading.

View file

@@ -0,0 +1,106 @@
---
title: A brief intro to git
date: '2021-06-07'
draft: false
tags: ['git', 'development']
summary: As a developer, a version control system is a critical part of your toolkit, no matter the size of the project or team you may find yourself working on.
---
As a developer, a version control system is a critical part of your toolkit, no matter the size of the project or team you may find yourself working on.<!-- excerpt -->
I first started learning to use git by applying it to my own projects and maintaining local repositories to track those projects. From there I moved on to hosting and storing my git repositories at [Bitbucket](https://bitbucket.org) while still working independently. My first experience with working alongside other developers in git came at my first full-time development job on a small team (think _really_ small — two developers, myself included). I picked up the basics of branching, handling merges, developing different features in parallel and, ultimately, dealing with QA and production deployments that were sourced from various branches in our project repository.
I've expanded on my knowledge of git in the jobs I've held since that first position and have used [svn](https://subversion.apache.org) pretty heavily as well (I don't mind it, but I don't love it — I'd argue git is the better choice for a number of reasons, its decentralized nature and flexibility being chief among them).
One of the many appeals of git is its flexibility, and a wide range of commands come with it. To get started, I'd suggest digging in with the following:
```bash
# initialize git
git init
# clone repo
git clone <repo url>
# view the state of the repo
git status
# this will stage all of your modified and/or untracked files for commit
git add -A
# this will stage only the files that you pass into the command as an argument, delimited by a space
git add <path to file(s)>
# this will commit all modified files and apply the commit message that follows it
git commit -am "<commit message>"
# this will commit only the files that you've staged and apply the message that follows it
git commit -m "<commit message>"
# amend the last commit message
git commit --amend
# this will fetch changes from the remote branch that you're currently on; this will require a merge if your local copy of the branch has diverged from the remote
git pull
# you can also specify arguments and branches with git pull, for example
git pull origin master
# this will checkout a different branch from the branch you're currently on
git checkout <branch name>
# alternatively you can revert the state of your current branch to match the head of that branch, or that of an individual file
git checkout .
git checkout <path to file>
# check out a new branch, diverging from the current branch
git checkout -b <branch name>
# see available branches
git branch
# delete a branch locally
git branch -d <branch name>
# delete a branch remotely
git push origin --delete <branch name>
# merge a branch into the branch you're currently on
git merge <branch name>
# stash your current changes and return to the head of the branch you're on
git stash
# reapply your stashed changes
git stash apply
# reapply your topmost stashed changes and discard the change set
git stash pop
# show commit logs
git log
# show the reference log of all recent actions
git reflog
# fetch remote branches
git fetch
# throw away uncommitted changes and revert to the head of the branch (destructive command)
git reset --hard HEAD
# check out a previous commit (this detaches HEAD)
git checkout <commit hash value>
# revert a previous commit
git revert <git commit hash value>
```
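Put together, the commands above compose into a simple feature-branch workflow. Here's a sketch (the branch name and commit message are made up, and it assumes your default branch is `main`):

```bash
# create and switch to a new branch off the current one
git checkout -b feature/navbar

# ...edit some files...

# stage everything and commit it
git add -A
git commit -m "add the new navbar"

# switch back to main and merge the feature in
git checkout main
git merge feature/navbar

# delete the merged branch locally now that it's merged
git branch -d feature/navbar
```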
Each of these commands has numerous options associated with it and allows for broad control over the flow and history of your project. There are a number of other options I'd suggest for learning more about git:
- [Github's git tutorial](https://try.github.io)
- [Pro Git book](https://git-scm.com/book)
- [Oh shit, git!](http://ohshitgit.com/)
- [Github guides](https://guides.github.com)
- [Git Real](https://courses.codeschool.com/courses/git-real)
- [Git documentation](https://git-scm.com/documentation)

View file

@@ -0,0 +1,147 @@
---
title: 'Adding client-side rendered webmentions to my blog'
date: '2023-02-09'
draft: false
tags: ['webmentions', 'development', 'javascript']
summary: 'My blog is currently hosted on weblog.lol which allows for a simple and configurable weblog managed in git with posts formatted in markdown.'
---
My blog is currently hosted on weblog.lol which allows for a simple and configurable weblog managed in git with posts formatted in markdown. I wanted to add webmentions to my blog which, as of now, doesn't include a build step. To accomplish this, I've added an intermediary api endpoint to the same next.js app that powers my [/now](https://coryd.dev/now) page.<!-- excerpt -->
Robb has [a handy write-up on adding webmentions to your website](https://rknight.me/adding-webmentions-to-your-site/), which I followed — first adding the appropriate Mastodon link to my blog template, registering for webmention.io and Bridgy, then adding the appropriate tags to my template document's `<head>` to record mentions.
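For reference, those `<head>` tags follow the standard webmention discovery pattern. A sketch with placeholder values (the endpoint paths use webmention.io's format; the domain and Mastodon handle are illustrative, not mine):

```html
<!-- illustrative: replace example.com with your own registered domain -->
<link rel="webmention" href="https://webmention.io/example.com/webmention" />
<link rel="pingback" href="https://webmention.io/example.com/xmlrpc" />
<!-- rel="me" link back to Mastodon so Bridgy can verify your profile -->
<a rel="me" href="https://mastodon.social/@example">Mastodon</a>
```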
Next it was simply a question of rendering the output from the webmentions endpoint.
My next.js api looks like this:
```typescript
export default async function handler(req: any, res: any) {
const KEY_CORYD = process.env.API_KEY_WEBMENTIONS_CORYD_DEV
const KEY_BLOG = process.env.API_KEY_WEBMENTIONS_BLOG_CORYD_DEV
const DOMAIN = req.query.domain
const TARGET = req.query.target
const data = await fetch(
`https://webmention.io/api/mentions.jf2?token=${
DOMAIN === 'coryd.dev' ? KEY_CORYD : KEY_BLOG
}${TARGET ? `&target=${TARGET}` : ''}&per-page=1000`
).then((response) => response.json())
res.json(data)
}
```
I have a pair of keys, selected based on the domain that was mentioned, though this is only used on my blog at present. I also support passing through the `target` parameter but don't leverage it at the moment.
This is called on the client side as follows:
```javascript
document.addEventListener('DOMContentLoaded', (event) => {
;(function () {
const formatDate = (date) => {
var d = new Date(date),
month = '' + (d.getMonth() + 1),
day = '' + d.getDate(),
year = d.getFullYear()
if (month.length < 2) month = '0' + month
if (day.length < 2) day = '0' + day
return [month, day, year].join('-')
}
const webmentionsWrapper = document.getElementById('webmentions')
const webmentionsLikesWrapper = document.getElementById('webmentions-likes-wrapper')
const webmentionsBoostsWrapper = document.getElementById('webmentions-boosts-wrapper')
const webmentionsCommentsWrapper = document.getElementById('webmentions-comments-wrapper')
if (webmentionsWrapper && window) {
try {
fetch('https://utils.coryd.dev/api/webmentions?domain=blog.coryd.dev')
.then((response) => response.json())
.then((data) => {
const mentions = data.children
if (mentions.length === 0 || window.location.pathname === '/') {
webmentionsWrapper.remove()
return
}
let likes = ''
let boosts = ''
let comments = ''
mentions.map((mention) => {
if (
mention['wm-property'] === 'like-of' &&
mention['wm-target'].includes(window.location.href)
) {
likes += `<a href="${mention.url}" rel="noopener noreferrer"><img class="avatar" src="${mention.author.photo}" alt="${mention.author.name}" /></a>`
}
if (
mention['wm-property'] === 'repost-of' &&
mention['wm-target'].includes(window.location.href)
) {
boosts += `<a href="${mention.url}" rel="noopener noreferrer"><img class="avatar" src="${mention.author.photo}" alt="${mention.author.name}" /></a>`
}
if (
mention['wm-property'] === 'in-reply-to' &&
mention['wm-target'].includes(window.location.href)
) {
comments += `<div class="webmention-comment"><a href="${
mention.url
}" rel="noopener noreferrer"><div class="webmention-comment-top"><img class="avatar" src="${
mention.author.photo
}" alt="${mention.author.name}" /><div class="time">${formatDate(
mention.published
)}</div></div><div class="comment-body">${
mention.content.text
}</div></a></div>`
}
})
webmentionsLikesWrapper.innerHTML = ''
webmentionsLikesWrapper.insertAdjacentHTML('beforeEnd', likes)
webmentionsBoostsWrapper.innerHTML = ''
webmentionsBoostsWrapper.insertAdjacentHTML('beforeEnd', boosts)
webmentionsCommentsWrapper.innerHTML = ''
webmentionsCommentsWrapper.insertAdjacentHTML('beforeEnd', comments)
webmentionsWrapper.style.opacity = 1
if (likes === '')
document.getElementById('webmentions-likes').innerHTML = ''
if (boosts === '')
document.getElementById('webmentions-boosts').innerHTML = ''
if (comments === '')
document.getElementById('webmentions-comments').innerHTML = ''
if (likes === '' && boosts === '' && comments === '')
webmentionsWrapper.remove()
})
} catch (e) {
webmentionsWrapper.remove()
}
}
})()
})
```
This JavaScript is all quite imperative — it verifies the existence of the appropriate DOM nodes, concatenates templated HTML strings and then injects them into the targeted DOM elements. If there aren't mentions of a supported type, the container node is removed. If there are no mentions, the whole node is removed.
The webmentions HTML shell is as follows:
```html
<div id="webmentions" class="background-purple container">
<div id="webmentions-likes">
<h2><i class="fa-solid fa-fw fa-star"></i> Likes</h2>
<div id="webmentions-likes-wrapper"></div>
</div>
<div id="webmentions-boosts">
<h2><i class="fa-solid fa-fw fa-rocket"></i> Boosts</h2>
<div id="webmentions-boosts-wrapper"></div>
</div>
<div id="webmentions-comments">
<h2><i class="fa-solid fa-fw fa-comment"></i> Comments</h2>
<div id="webmentions-comments-wrapper"></div>
</div>
</div>
```
And there you have it — webmentions loaded client side and updated as they occur. There's an example visible on my post [Automating (and probably overengineering) my /now page](https://blog.coryd.dev/2023/02/automatingandprobablyoverengineeringmy-nowpage#webmentions).

View file

@@ -0,0 +1,66 @@
---
title: 'Apple-centric digital privacy tools'
date: '2022-05-31'
draft: false
tags: ['apple', 'privacy', 'ios', 'macos', 'tech']
images: ['/static/images/blog/privacy.jpg']
summary: "A rundown of privacy tools that work well with Apple's technology ecosystem."
---
A rundown of privacy tools that work well with Apple's technology ecosystem.<!-- excerpt -->[^1]
## Overview
<TOCInline toc={props.toc} exclude="Overview" toHeading={2} />
## Email providers
Ubiquitous free email providers profit by mining user data (whether humans are involved or not). Your inbox acts as a key to your digital life and you should avoid using any provider that monetizes its contents.
- [Fastmail](https://ref.fm/u28939392)[^2]: based in Melbourne, Australia, Fastmail offers a range of affordably priced plans with a focus on support for open standards (including active development support for [JMAP](https://jmap.io) and the [Cyrus IMAP email server](https://fastmail.blog/open-technologies/why-we-contribute/)). They also [articulate a clear commitment to protecting and respecting your privacy](https://www.fastmail.com/values/) and offer an extensive [rundown of the privacy and security measures they employ on their site](https://www.fastmail.com/privacy-and-security/).
- I would also recommend exploring their [masked email implementation](https://www.fastmail.help/hc/en-us/articles/4406536368911-Masked-Email), which integrates seamlessly with [1Password](https://1password.com) (though using 1Password isn't required).
- [mailbox.org](https://mailbox.org): based in Germany, [mailbox.org](http://mailbox.org) also has [a long history](https://mailbox.org/en/company#our-history) and [commitment to privacy](https://mailbox.org/en/company#our-mission). Their service is reliable, straightforward and fully featured (it's based on a customized implementation of [Open-Xchange](https://www.open-xchange.com)) and supports features like incoming address blocking, PGP and so forth.
- [Proton Mail](http://protonmail.com): Proton offers a host of encrypted tools, ranging from mail to drive, calendaring and VPN services. They're also the only option in this list that includes end-to-end encryption. The service is extremely polished and reliable but, it's worth noting, doesn't support access to your email via open standards like IMAP/SMTP without the use of a cumbersome, desktop-only bridge application.
- [iCloud+](https://support.apple.com/guide/icloud/icloud-overview-mmfc854d9604/icloud): if you're paying for an Apple iCloud subscription you'll get access to the option to add a custom email domain to your account to use with Apple's iCloud Mail service. This is private inasmuch as the data isn't mined for monetization against personalized ads, but is also bare-bones in terms of functionality. It supports IMAP and push notifications on Apple's devices but features like rules, aliases and so forth are extremely limited compared to the previously mentioned providers. This is better than most free providers, but hardly the best option.
- iCloud+ _does_ also offer a [Hide My Email](https://support.apple.com/guide/icloud/what-you-can-do-with-icloud-and-hide-my-email-mme38e1602db/1.0/icloud/1.0) feature to conceal your true email address, much like Fastmail.
## Email apps
- [Apple Mail](https://support.apple.com/mail): Apple's Mail app is simple but also fully featured and reliable to the point of being a bit boring. It also has enhanced privacy features as of iOS 15 and macOS 12 in the form of [Mail Privacy Protection](https://support.apple.com/guide/iphone/use-mail-privacy-protection-iphf084865c7/ios).
- [Canary Mail](https://canarymail.io/): a third-party email app with a reasonable price tag and a heavy focus on privacy and security, Canary offers a number of enhancements like read receipts, templates, snoozing, PGP support and calendar/contact integration. The design hews tightly to iOS and macOS platform norms but, naturally, is not quite as tightly integrated as Apple's first-party mail app.
- [Mailmate](https://freron.com/): a long running, highly configurable mail app with a strict focus on IMAP support, Mailmate is an excellent option on macOS and also offers strong support for authoring messages in markdown.
## Safari extensions
- [1Blocker](https://1blocker.com): a highly configurable ad and tracker blocker. Independently maintained and actively developed, it also offers a device-level firewall to block trackers embedded in other apps on your device.
- [Super Agent](https://www.super-agent.com): this extension simplifies the process of dealing with the modern web's post-GDPR flood of cookie consent banners by storing your preferences and uniformly applying them to sites that you visit. This allows you to avoid the banners altogether while limiting what's allowed to something as restrictive as, say, functional cookies only.
- [Hush](https://oblador.github.io/hush/): another option to deal with cookie banners by simply blocking the banners outright.
## DNS providers
- [nextDNS](https://nextdns.io/?from=m56mt3z6): I use nextDNS on my home network for basic security and have a more restrictive configuration that heavily filters ads at the DNS level on specific devices. This allows me to block ads, trackers and other annoyances at the DNS level, which covers anything embedded in apps or other services running on my device.
- [Cloudflare 1.1.1.1](https://www.cloudflare.com/learning/dns/what-is-1.1.1.1): Cloudflare's 1.1.1.1 service doesn't offer the same features as nextDNS, but is still preferable to Google's offering or your ISP's default.
- [iCloud Private Relay](https://support.apple.com/en-us/HT212614): Another iCloud+ offering, iCloud Private Relay offers _some_ protection by relaying your traffic in Safari (and Safari only) through a pair of relays to obfuscate your actual IP address and location.
## Password managers
- [1Password](https://1password.com): I've used 1Password for over 11 years and have yet to have any significant issues with the service. It integrates smoothly with Fastmail to generate masked email addresses, has added support for storing and generating SSH keys and application secrets, supports vault and password sharing and works across platforms. Highly recommended.[^3]
- [Bitwarden](https://bitwarden.com): I haven't made use of Bitwarden, but have heard plenty of positive feedback over the years.
## VPN providers
- [IVPN](https://www.ivpn.net/): my current choice for a VPN provider; its apps are modern and reliable and offer support for per-network default behavior, WireGuard, multihop connections and numerous endpoints around the globe.
- [Mullvad](https://mullvad.net/en/): an open source, commercial VPN based in Sweden, Mullvad offers both WireGuard and OpenVPN support.
- [Mozilla](https://www.mozilla.org/en-US/products/vpn/): offered by the non-profit Mozilla Foundation, this is another compelling offering from an organization with a track record of fighting for the open web and preserving user privacy.
For now I've scoped this post to platforms and tools that are central to maintaining your online privacy. That said, each app you use should be examined to determine if and how it fits with your approach towards privacy.
Everything you use is going to glean data from your interactions with it and it's worth considering that tool's stance on privacy, tracking and monetization before investing your time and data into using it.
**Other resources**
My friend Nathaniel Daught has an excellent post with similar resources on his blog [that you should take a look at as well](https://daught.me/blog/privacy-security-tools-2022).
[^1]: This post expands on a [previous post](https://coryd.dev/blog/digital-privacy-tools) with a quick rundown preceded by a link to the New York Times on the same subject.
[^2]: This is my referral link; you can skip it and go straight to [fastmail.com](https://fastmail.com).
[^3]: I also generate and store answers to security questions here, rather than providing answers that may be publicly known or derived.
---
title: 'Apple Messages: a tale of woe OR how to fix sync, a crash loop and accept data loss'
date: '2022-04-06'
draft: false
tags: ['apple', 'services']
summary: "Messages.app on macOS began crashing in a loop and here's how I fixed it (and lost data I wasn't attached to)."
---
Apple's Messages app recently started crashing in a loop on my Mac Mini — every time the app was opened it would crash after a 5-10 second delay. Deleting conversations from other devices and letting that change sync over didn't appear to help.<!-- excerpt -->
If you're attached to your message history and have a device where Messages.app isn't crashing, I'd suggest backing up your messages before you try fixing this. Done? Here we go:
Navigate to `~/Library` and delete:
```
Messages
Caches/com.apple.Messages
Caches/com.apple.imfoundation.IMRemoteURLConnectionAgent
Caches/com.apple.MobileSMS
Containers/com.apple.iChat
Containers/com.apple.soagent
```
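If you'd rather script the deletion than click through Finder, the same steps can be sketched as a small Node helper. This is a hypothetical sketch, not part of the original fix — the function name and the idea of passing the Library root in (handy for testing against a scratch directory) are mine; back up first and run it against `~/Library` at your own risk:

```typescript
import * as fs from 'node:fs'
import * as path from 'node:path'

// Paths relative to ~/Library that the steps above delete by hand.
const TARGETS = [
  'Messages',
  'Caches/com.apple.Messages',
  'Caches/com.apple.imfoundation.IMRemoteURLConnectionAgent',
  'Caches/com.apple.MobileSMS',
  'Containers/com.apple.iChat',
  'Containers/com.apple.soagent',
]

// Removes each target under the given Library root; returns what was
// actually deleted so you can eyeball the result before rebooting.
export function removeMessagesCaches(libraryRoot: string): string[] {
  const removed: string[] = []
  for (const target of TARGETS) {
    const full = path.join(libraryRoot, target)
    if (fs.existsSync(full)) {
      fs.rmSync(full, { recursive: true, force: true })
      removed.push(full)
    }
  }
  return removed
}
```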
Log out of/deactivate iMessage on all of your devices. Reboot them. Log back in and hope for the best[^1].
[^1]: Your devices should start syncing again. It may take a while and the conversations downloaded from iCloud may be a bit disjointed, but the app should stop crashing and work going forward.
---
title: 'Apple Music: a tale of woe'
date: '2022-02-15'
draft: false
tags: ['music', 'apple', 'services']
summary: Last week my Apple Music collection, as far as I can tell, became corrupted or otherwise unmanageable. This isn't the first issue I've had with the service nor is it the most severe.
---
Last week my Apple Music collection, as far as I can tell, became corrupted or otherwise unmanageable. This isn't the first issue I've had with the service nor is it the most severe — I gave Apple Music a try right after it launched, remnants of Beats Music and all.<!-- excerpt --> Adding an album to your library was unreliable and tracks would get duplicated if you tried a second time. It ended up overheating my phone battery to the point it could no longer hold a charge. Back to Spotify I went.
I'm the kind of music nerd that likes to meticulously manage genre tags, trim extraneous strings out of track and album names and update album artwork[^1]. Apple Music is the only streaming service that supports importing your own music to supplement their catalog while also editing their metadata to match. I've been doing this for a few years now and all was well and good as my music collection grew.
A few weeks ago I read through a [Brooklyn Vegan](https://brooklynvegan.com) post on the best hardcore releases of 2021, added a few to my collection, tagged them and queued them. No problem. I don't end up liking all of them[^2]. I go back and notice the tags are all back to Apple's defaults (no big deal, this happens occasionally) and proceed to delete the albums I don't like. Fast forward to the next day — I sit down, scroll through Recently Added to queue up something new and everything is right back to where it was. I try deleting the same albums from the iOS app and it works briefly before they reappear. Great.
My next steps were pretty standard, escalating troubleshooting:
- [x] Log out of Apple Music on all devices
- [x] Reboot
- [x] Log in
Welcome back _Glow On_!
- [x] Reset my Apple Music library[^3]
- [x] Reconstruct my collection[^4]
- [x] Notice that I _still_ can't update metadata and Apple fingerprints your tracks, tries to overwrite the metadata and creates duplicate tracks if there's the _slightest_ mismatch. Notice that these duplicates can't be deleted.
So, here I am: I swapped a phone after the service launched when it cooked the battery. I gave it a second try, it worked for a while exactly how I'd liked — as a cloud locker with a supplemental catalog of music I was less invested in — and then it hit a wall.
I had a pretty large library; I tweaked the data and imported external music. I imagine that's tough to sync and I imagine matching imported music helps with deduplication and performance. I would venture to guess that my usage lives in what would be considered an outlier or edge case; I get that. It's still disappointing to see the service fall on its face so spectacularly.
My music collection, for all intents and purposes, was broken in Apple Music. I took a brief look around, knowing that I already owned the vast majority of music I was _actually_ invested in and found [Doppler](http://brushedtype.co/doppler/). I downloaded the trial, imported my music, let Dropbox back up `~/Music`, signed out of Apple Music and deleted the app. I can update metadata and there are no streaming hiccups when Apple Music mysteriously pulls a track off my phone. I no longer have to maintain a smart playlist to track what falls out of Apple's catalog either.
I likely should have been listening to and managing music this way all along and there's a refreshing clarity to knowing exactly what's in your finite collection and what you actually _want_ to be in that collection. I know what I enjoy, it's on my phone and there's no more cycling through endless playlists and recommendations. Apple Music is convenient, but it's inconsistent and unreliable. I don't think I'll be back.[^5]
[^1]: I'm looking at you Audiotree Live.
[^2]: I've seen folks raving about the new Turnstile record and that's rad, but I don't get it. I'm so sorry.
[^3]: There's a button to do this in the Mac App Store app and it doesn't work. It throws a generic exception telling you to try later — use the one in the Music app.
[^4]: Cool — an opportunity to get introspective and pare back what I actually care to listen to.
[^5]: This prompted me to move the last of my important data, my photos, off of Apple's services — my music library is one thing, having the same happen to my photos would be devastating. They're now sitting in Google Photos, getting mirrored to Dropbox and perhaps off to Backblaze. Is this an overreaction? Maybe — but I've also had a tab Safari claims is open on my Mac Mini for 3-4 months now. Syncing is hard and the evidence leads me to believe the service implementation may not be that reliable.
---
title: Automatic Feedbin subscription backups
date: '2014-02-27'
tags: ['automation']
draft: false
summary: A few weeks ago I switched from Fever to Feedbin. I had been using Fever on a shared hosting account and, over the long term, it was proving to be slower than I had expected.
---
A few weeks ago I switched from [Fever](http://feedafever.com/ 'Fever° Red hot. Well read.') to [Feedbin](https://feedbin.me/ 'Feedbin'). I had been using Fever on a shared hosting account and, over the long term, it was proving to be slower than I had expected.<!-- excerpt --> So far Feedbin has proven to be considerably faster than my old Fever install and appears to be more actively developed (I've also been able to use Jared Sinclair's [Unread](http://jaredsinclair.com/unread/ 'Unread — An RSS Reader') — it's fantastic).
I plan on sticking with Feedbin as my RSS service, but also wanted to make sure I kept a backup of all the feeds I subscribe to just in case anything happens to change. Rather than manually exporting a JSON backup of my feeds on a regular basis, I threw together the following shell script to download the JSON file via Feedbin's API and save it to Dropbox:
```bash
curl -u 'example@example.com:password' https://api.feedbin.me/v2/subscriptions.json -o ~/Dropbox/Backups/Feedbin/feedbin-subscriptions.json
```
I have the above script saved and used [Lingon](http://www.peterborgapps.com/lingon/ 'Lingon - Peter Borg Apps') to schedule it to run automatically once a week, alleviating the need for me to take the time to back up my RSS subscriptions by hand. To use the script, simply drop in your Feedbin credentials, save it wherever you'd like and then add and schedule it via Lingon.
---
title: 'Automating (and probably overengineering) my /now page'
date: '2023-02-06'
draft: false
tags: ['automation', 'development', 'nextjs', 'javascript']
summary: 'omg.lol (where I point my domain and host most of my site content) recently launched support for /now pages.'
---
[omg.lol](https://home.omg.lol) (where I point my domain and host most of my site content) [recently launched support for /now pages](https://omglol.news/2023/01/16/now-pages-are-here).<!-- excerpt -->
**[nownownow.com](https://nownownow.com)**
> ...a link that says “**now**” goes to a page that tells you **what this person is focused on at this point in their life.** For short, we call it a “now page”.
This page can be updated manually but, as with just about everything offered by omg.lol, there's an API to submit updates to the page. I already blog infrequently and knew I would fail to manually update the page frequently, which presented an opportunity to automate updates to the page. My page is available at [coryd.dev/now](https://coryd.dev/now).
Borrowing from [Robb Knight](https://rknight.me) I started by creating a paste containing `yaml` with static text to fill out the top of my now page with brief details about family, work and hobbies (or lack thereof).
From there, I turned to the myriad content-based services I use to track what I'm listening to, what TV and movies I'm watching and what books I'm reading to source updates from.
I'm already exposing my most recently listened tracks and actively read books on my omg.lol home page/profile. This data is fetched from a [next.js](https://nextjs.org) application hosted over at [Vercel](https://vercel.com) that exposes a number of endpoints. For my music listening data, I'm using a route at `/api/music` that looks like this:
```typescript
export default async function handler(req: any, res: any) {
const KEY = process.env.API_KEY_LASTFM
const METHODS: { [key: string]: string } = {
default: 'user.getrecenttracks',
albums: 'user.gettopalbums',
artists: 'user.gettopartists',
}
const METHOD = METHODS[req.query.type] || METHODS['default']
const data = await fetch(
`http://ws.audioscrobbler.com/2.0/?method=${METHOD}&user=cdme_&api_key=${KEY}&limit=${
req.query.limit || 20
}&format=${req.query.format || 'json'}&period=${req.query.period || 'overall'}`
).then((response) => response.json())
res.json(data)
}
```
This API takes a type parameter and passes through several of Last.fm's stock parameters to allow it to be reused for my now listening display and the `/now` page.
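The parameter pass-through can be sketched as a small pure helper. This is a hypothetical refactor for illustration — the actual handler builds the URL inline as shown above, and `buildLastFmUrl` is my own name; the defaults (recent tracks, limit 20, JSON, overall period) mirror the handler:

```typescript
const METHODS: Record<string, string> = {
  default: 'user.getrecenttracks',
  albums: 'user.gettopalbums',
  artists: 'user.gettopartists',
}

// Unknown or missing types fall back to recent tracks; limit, format and
// period fall back to the handler's defaults.
export function buildLastFmUrl(
  query: { type?: string; limit?: number; format?: string; period?: string },
  user = 'cdme_',
  key = 'API_KEY'
): string {
  const method = METHODS[query.type ?? 'default'] ?? METHODS['default']
  return (
    `http://ws.audioscrobbler.com/2.0/?method=${method}&user=${user}&api_key=${key}` +
    `&limit=${query.limit ?? 20}&format=${query.format ?? 'json'}&period=${query.period ?? 'overall'}`
  )
}
```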
Last.fm's API returns album images, but no longer returns artist images. To solve this, I've created an `/api/media` endpoint that checks for an available, static artist image and returns a placeholder if that check yields a 404. If a 404 is returned, I'm logging the missing artist name to a paste at omg.lol's paste.lol service:
```typescript
import siteMetadata from '@/data/siteMetadata'
export default async function handler(req: any, res: any) {
const env = process.env.NODE_ENV
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
const ARTIST = req.query.artist
const ALBUM = req.query.album
const MEDIA = ARTIST ? 'artists' : 'albums'
const MEDIA_VAL = ARTIST ? ARTIST : ALBUM
const data = await fetch(`${host}/media/${MEDIA}/${MEDIA_VAL}.jpg`)
.then((response) => {
if (response.status === 200) return `${host}/media/${MEDIA}/${MEDIA_VAL}.jpg`
fetch(
`${host}/api/omg/paste-edit?paste=404-images&editType=append&content=${MEDIA_VAL}`
).then((response) => response.json())
return `${host}/media/404.jpg`
})
.then((image) => image)
res.redirect(data)
}
```
For my reading data, Oku.club exposes an [RSS feed](https://en.wikipedia.org/wiki/RSS) for all collection views. I'm using [@extractus/feed-extractor](https://www.npmjs.com/package/@extractus/feed-extractor) to transform that RSS feed to JSON and expose it as follows:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'
export default async function handler(req: any, res: any) {
const env = process.env.NODE_ENV
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
const url = `${host}/feeds/books`
const result = await extract(url)
res.json(result)
}
```
For television watched data, Trakt offers an RSS feed of my watched history, which is served as an endpoint as follows:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'
export default async function handler(req: any, res: any) {
const KEY = process.env.API_KEY_TRAKT
const env = process.env.NODE_ENV
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
const url = `${host}/feeds/tv?slurm=${KEY}`
const result = await extract(url, {
getExtraEntryFields: (feedEntry) => {
return {
image: feedEntry['media:content']['@_url'],
thumbnail: feedEntry['media:thumbnail']['@_url'],
}
},
})
res.json(result)
}
```
For movie data from Letterboxd we are, again, looking at transforming my profile RSS feed:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'
export default async function handler(req: any, res: any) {
const env = process.env.NODE_ENV
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
const url = `${host}/feeds/movies`
const result = await extract(url)
res.json(result)
}
```
This all comes together in yet another, perhaps overwrought, endpoint at `/api/now`. Calls to this endpoint are authenticated with a bearer token and each endpoint response is configured to return JSON, Markdown and, in the case of sections with more complex layouts (music artists and albums), HTML. The contents of that endpoint are as follows:
```typescript
import jsYaml from 'js-yaml'
import siteMetadata from '@/data/siteMetadata'
import { listsToMarkdown } from '@/utils/transforms'
import { getRandomIcon } from '@/utils/icons'
import { nowResponseToMarkdown } from '@/utils/transforms'
import { ALBUM_DENYLIST } from '@/utils/constants'
export default async function handler(req: any, res: any) {
const env = process.env.NODE_ENV
const { APP_KEY_OMG, API_KEY_OMG } = process.env
const ACTION_KEY = req.headers.authorization?.split(' ')[1]
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
try {
if (ACTION_KEY === APP_KEY_OMG) {
const now = await fetch('https://api.omg.lol/address/cory/pastebin/now.yaml')
.then((res) => res.json())
.then((json) => {
const now = jsYaml.load(json.response.paste.content)
Object.keys(jsYaml.load(json.response.paste.content)).forEach((key) => {
now[key] = listsToMarkdown(now[key])
})
return { now }
})
const books = await fetch(`${host}/api/books`)
.then((res) => res.json())
.then((json) => {
const data = json.entries
.slice(0, 5)
.map((book: { title: string; link: string }) => {
return {
title: book.title,
link: book.link,
}
})
return {
json: data,
md: data
.map((d: any) => {
return `- [${d.title}](${d.link}) {${getRandomIcon('books')}}`
})
.join('\n'),
}
})
const movies = await fetch(`${host}/api/movies`)
.then((res) => res.json())
.then((json) => {
const data = json.entries
.slice(0, 5)
.map((movie: { title: string; link: string; description: string }) => {
return {
title: movie.title,
link: movie.link,
desc: movie.description,
}
})
return {
json: data,
md: data
.map((d: any) => {
return `- [${d.title}](${d.link}): ${d.desc} {${getRandomIcon(
'movies'
)}}`
})
.join('\n'),
}
})
const tv = await fetch(`${host}/api/tv`)
.then((res) => res.json())
.then((json) => {
const data = json.entries
.splice(0, 5)
.map(
(episode: {
title: string
link: string
image: string
thumbnail: string
}) => {
return {
title: episode.title,
link: episode.link,
image: episode.image,
thumbnail: episode.thumbnail,
}
}
)
return {
json: data,
html: data
.map((d: any) => {
return `<div class="container"><a href='${d.link}' title='${d.title}'><div class='cover'></div><div class='details'><div class='text-main'>${d.title}</div></div><img src='${d.thumbnail}' alt='${d.title}' /></a></div>`
})
.join('\n'),
md: data
.map((d: any) => {
return `- [${d.title}](${d.link}) {${getRandomIcon('tv')}}`
})
.join('\n'),
}
})
const musicArtists = await fetch(
`https://utils.coryd.dev/api/music?type=artists&period=7day&limit=8`
)
.then((res) => res.json())
.then((json) => {
const data = json.topartists.artist.map((a: any) => {
return {
artist: a.name,
link: `https://rateyourmusic.com/search?searchterm=${encodeURIComponent(
a.name
)}`,
image: `${host}/api/media?artist=${a.name
.replace(/\s+/g, '-')
.toLowerCase()}`,
}
})
return {
json: data,
html: data
.map((d: any) => {
return `<div class="container"><a href='${d.link}' title='${d.artist}'><div class='cover'></div><div class='details'><div class='text-main'>${d.artist}</div></div><img src='${d.image}' alt='${d.artist}' /></a></div>`
})
.join('\n'),
md: data
.map((d: any) => {
return `- [${d.artist}](${d.link}) {${getRandomIcon('music')}}`
})
.join('\n'),
}
})
const musicAlbums = await fetch(
`https://utils.coryd.dev/api/music?type=albums&period=7day&limit=8`
)
.then((res) => res.json())
.then((json) => {
const data = json.topalbums.album.map((a: any) => ({
title: a.name,
artist: a.artist.name,
link: `https://rateyourmusic.com/search?searchterm=${encodeURIComponent(
a.name
)}`,
image: !ALBUM_DENYLIST.includes(a.name.replace(/\s+/g, '-').toLowerCase())
? a.image[a.image.length - 1]['#text']
: `${host}/api/media?album=${a.name
.replace(/\s+/g, '-')
.toLowerCase()}`,
}))
return {
json: data,
html: data
.map((d: any) => {
return `<div class="container"><a href='${d.link}' title='${d.title} by ${d.artist}'><div class='cover'></div><div class='details'><div class='text-main'>${d.title}</div><div class='text-secondary'>${d.artist}</div></div><img src='${d.image}' alt='${d.title} by ${d.artist}' /></a></div>`
})
.join('\n'),
md: data
.map((d: any) => {
return `- [${d.title}](${d.link}) by ${d.artist} {${getRandomIcon(
'music'
)}}`
})
.join('\n'),
}
})
fetch('https://api.omg.lol/address/cory/now', {
method: 'post',
headers: {
Authorization: `Bearer ${API_KEY_OMG}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({
content: nowResponseToMarkdown({
now,
books,
movies,
tv,
music: {
artists: musicArtists,
albums: musicAlbums,
},
}),
listed: 1,
}),
})
res.status(200).json({ success: true })
} else {
res.status(401).json({ success: false })
}
} catch (err) {
res.status(500).json({ success: false })
}
}
```
This endpoint also supports a denylist for albums returned from last.fm that might not be appropriate to display in polite company — if an album is in the denylist we look for an alternate, statically hosted cover or serve our 404 placeholder if one isn't readily available.
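That denylist check can be isolated as a pure helper. This is a hypothetical sketch — `resolveAlbumImage` is my own name, extracted from the inline logic in the `musicAlbums` fetch above, where `lastFmImage` stands in for the largest image Last.fm returns:

```typescript
// Decides whether to use Last.fm's album art or route through /api/media,
// which serves a static alternate cover or the 404 placeholder.
export function resolveAlbumImage(
  albumName: string,
  lastFmImage: string,
  denylist: string[],
  host: string
): string {
  // Slugify the same way the endpoint does: whitespace to hyphens, lowercased.
  const slug = albumName.replace(/\s+/g, '-').toLowerCase()
  return denylist.includes(slug) ? `${host}/api/media?album=${slug}` : lastFmImage
}
```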
For items displayed from Markdown I'm attaching a random FontAwesome icon (e.g. `getRandomIcon('music')`):
```typescript
export const getRandomIcon = (type: string) => {
const icons = {
books: ['book', 'book-bookmark', 'book-open', 'book-open-reader', 'bookmark'],
music: ['music', 'headphones', 'record-vinyl', 'radio', 'guitar', 'compact-disc'],
movies: ['film', 'display', 'video', 'ticket'],
tv: ['tv', 'display', 'video'],
}
  return icons[type][Math.floor(Math.random() * icons[type].length)]
}
```
As the final step to wrap this up, calls to `/api/now` are made every 8 hours using a GitHub action:
```yaml
name: scheduled-cron-job
on:
schedule:
- cron: '0 */8 * * *'
jobs:
cron:
runs-on: ubuntu-latest
steps:
- name: scheduled-cron-job
run: |
curl -X POST 'https://utils.coryd.dev/api/now' \
-H 'Authorization: Bearer ${{ secrets.ACTION_KEY }}'
```
This endpoint can also be manually called using another workflow:
```yaml
name: manual-job
on: [workflow_dispatch]
jobs:
cron:
runs-on: ubuntu-latest
steps:
- name: manual-job
run: |
curl -X POST 'https://utils.coryd.dev/api/now' \
-H 'Authorization: Bearer ${{ secrets.ACTION_KEY }}'
```
So far this works seamlessly — if I want to update or add static content I can do so via my yaml paste at paste.lol and the change will roll out in due time.
Questions? Comments? Feel free to get in touch:
- [Email](mailto:hi@coryd.dev)
- [Mastodon](https://social.lol/@cory)
---
Robb Knight has a [great post](https://rknight.me/automating-my-now-page/) on his process for automating his `/now` page using [Eleventy](https://www.11ty.dev) and mirroring it to omg.lol.
---
title: 'Automating email cleanup in Gmail'
date: '2022-03-28'
draft: false
tags: ['gmail', 'automation']
summary: "Lately I've been leaning into automating the cleanup of email I receive in Gmail using a combination of Inbox-era categories that the application still exposes via search and Google Apps Script."
---
Lately I've been leaning into automating the cleanup of email I receive in Gmail using a combination of Inbox-era categories that the application still exposes via search and [Google Apps Script](https://www.google.com/script/start/).<!-- excerpt -->
I wasn't using Gmail when Inbox was available (I'm sure I missed out) and know not all of the most beloved features have been migrated over to Gmail proper. That said, there _are_ some handy filters that didn't ascend to Gmail's tabbed inbox interface but are still available to create rules against[^1].
I've created filter rules leveraging all of these legacy filters to automatically categorize messages the same way the current tabs do. These rules look like the following:
```
# emails gmail categorizes as travel related
Matches: category:travel
Do this: Apply label "Traveling"
# emails gmail categorizes as receipts
Matches: category:purchases
Do this: Apply label "Receipts"
# emails gmail categorizes as finance related
Matches: category:finance
Do this: Apply label "Financial"
# emails gmail categorizes as reservations
Matches: category:reservations
Do this: Apply label "Reservations"
```
Expanding on this, I also have a few forwarding addresses in place to conditionally handle other types of messages. First up, I use some compiled search terms to redirect emails indicating something I've ordered has shipped off to [Deliveries.app](https://junecloud.com). That rule looks like this:
```
Matches: subject:({"has shipped" "was shipped" "on its way" "tracking number" "shipment from order" "order shipped confirmation" "Shipped:"})
Do this: Skip Inbox, Mark as read, Apply label "Deliveries", Forward to <UNIQUE-ID>@junecloud.com
```
For newsletters, I sign up using Gmail's plus addressing scheme to automatically label them as `newsletters`[^2]:
```
Matches: to:(cory.dransfeldt+newsletters@gmail.com)
Do this: Skip Inbox, Mark as read, Apply label "Newsletters", Forward to <UNIQUE-ID>@newsletters.feedbin.com
```
For both newsletters and deliveries this leaves me with a fair amount of archived mail that arguably decreases in or loses all value over time[^3].
I take a similar approach to actionable/alert-style messages:
```
Matches: <SUPER IMPORTANT CONDITION HERE>
Do this: Apply label "Alerts", Forward to <UNIQUE-ID>@todoist.net, Mark it as important, Categorize as Primary
```
This rule leaves alerts prominently in my inbox and as an actionable task in [Todoist](https://todoist.com). Keeping the email in focus and in my Todoist inbox is, arguably, redundant but helps keep the issue front and center until it's resolved.
### On to Google Apps Script
To clean up these various transactional messages I use several different Google Apps Script scripts. Each runs twice a month on the 1st and 15th and is targeted at cleaning up a category of messages. These runs are scheduled using the `Time-driven` event source and the `Month timer` time based trigger.
For example, to clear old newsletters, I use the following:
```javascript
function batchDeleteEmail() {
var SEARCH_QUERY = 'label:newsletters -label:inbox'
var batchSize = 100
var searchSize = 400
var threads = GmailApp.search(SEARCH_QUERY, 0, searchSize)
  for (var j = 0; j < threads.length; j += batchSize) {
GmailApp.moveThreadsToTrash(threads.slice(j, j + batchSize))
}
}
```
This rule iteratively deletes all messages with the label `newsletters`, omitting messages that, for whatever reason, might have landed in my inbox.
The rules for deliveries and alerts operate in very much the same way, but with a different query for each:
**Deliveries (omitting Gmail-identified receipts and the inbox)**
```
'label:deliveries -label:inbox -label:receipts'
```
**Alerts (omitting the inbox)**
```
'label:alerts -label:inbox'
```
Unrelated to cleanup, I also mark any unread emails in my archive as read. This script uses the `Time-driven` event source with the `Minute timer` trigger and runs every minute (heavy-handed perhaps, but the error rate for this has only been 0.02%):
```javascript
function markArchivedAsRead() {
var SEARCH_QUERY = 'label:unread -label:inbox'
var batchSize = 100
var searchSize = 400
var threads = GmailApp.search(SEARCH_QUERY, 0, searchSize)
  for (var j = 0; j < threads.length; j += batchSize) {
GmailApp.markThreadsRead(threads.slice(j, j + batchSize))
}
}
```
I have given some thought to refactoring my cleanup scripts so that the batch delete consumes an array of the individual search queries, iterating over them much like it does the threads it operates on. But at that point I'd have a loop over the queries with a nested loop over the threads, when separate script functions run without that concern.
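For what it's worth, that refactor might look like the following sketch. It's hypothetical and written against a minimal stand-in for the pieces of the Apps Script `GmailApp` API used here, so it can be reasoned about (and tested) outside Google's runtime:

```typescript
// Minimal stand-in for the GmailApp methods the cleanup scripts use.
interface MailApp {
  search(query: string, start: number, max: number): string[]
  moveThreadsToTrash(threads: string[]): void
}

// One function, many queries: trashes matches for each query in batches of
// 100, mirroring the per-label scripts above.
export function batchDeleteByQueries(app: MailApp, queries: string[]): number {
  const batchSize = 100
  const searchSize = 400
  let trashed = 0
  for (const query of queries) {
    const threads = app.search(query, 0, searchSize)
    for (let j = 0; j < threads.length; j += batchSize) {
      const batch = threads.slice(j, j + batchSize)
      app.moveThreadsToTrash(batch)
      trashed += batch.length
    }
  }
  return trashed
}
```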
[^1]: I am puzzled that Forums made the cut as a featured option.
[^2]: Don't email me via Feedbin. I'll miss it or it'll just be annoying.
[^3]: I care when something ships, I don't care to reference the tracking info months later.
---
title: 'Automating RSS syndication and sharing with Next.js and GitHub'
date: 2023-02-23
draft: false
tags: ['nextjs', 'rss', 'automation', 'github']
summary: 'I wrote a basic syndication tool in Next.js to automate sharing items from configured RSS feeds to Mastodon.'
---
I wrote a basic syndication tool in Next.js to automate sharing items from configured RSS feeds to Mastodon. This tool works by leveraging a few basic configurations, the Mastodon API and a (reasonably) lightweight script that creates a JSON cache when initialized and posts new items on an hourly basis.<!-- excerpt -->
The script that handles this functionality lives at `lib/syndicate/index.ts`:
```typescript
import { toPascalCase } from '@/utils/formatters'
import { extract, FeedEntry } from '@extractus/feed-extractor'
import { SERVICES, TAGS } from './config'
import createMastoPost from './createMastoPost'
export default async function syndicate(init?: string) {
const TOKEN_CORYDDEV_GISTS = process.env.TOKEN_CORYDDEV_GISTS
const GIST_ID_SYNDICATION_CACHE = '406166f337b9ed2d494951757a70b9d1'
const GIST_NAME_SYNDICATION_CACHE = 'syndication-cache.json'
const CLEAN_OBJECT = () => {
const INIT_OBJECT = {}
Object.keys(SERVICES).map((service) => (INIT_OBJECT[service] = []))
return INIT_OBJECT
}
async function hydrateCache() {
const CACHE_DATA = CLEAN_OBJECT()
for (const service in SERVICES) {
const data = await extract(SERVICES[service])
const entries = data?.entries
entries.map((entry: FeedEntry) => CACHE_DATA[service].push(entry.id))
}
await fetch(`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`, {
method: 'PATCH',
headers: {
Authorization: `Bearer ${TOKEN_CORYDDEV_GISTS}`,
'Content-Type': 'application/vnd.github+json',
},
body: JSON.stringify({
gist_id: GIST_ID_SYNDICATION_CACHE,
files: {
'syndication-cache.json': {
content: JSON.stringify(CACHE_DATA),
},
},
}),
})
.then((response) => response.json())
.catch((err) => console.log(err))
}
const DATA = await fetch(`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`).then(
(response) => response.json()
)
const CONTENT = DATA?.files[GIST_NAME_SYNDICATION_CACHE].content
// rewrite the sync data if init is reset
if (CONTENT === '' || init === 'true') hydrateCache()
if (CONTENT && CONTENT !== '' && !init) {
const existingData = await fetch(
`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`
).then((response) => response.json())
const existingContent = JSON.parse(existingData?.files[GIST_NAME_SYNDICATION_CACHE].content)
for (const service in SERVICES) {
const data = await extract(SERVICES[service], {
getExtraEntryFields: (feedEntry) => {
return {
tags: feedEntry['cd:tags'],
}
},
})
const entries: (FeedEntry & { tags?: string })[] = data?.entries
if (!existingContent[service].includes(entries[0].id)) {
let tags = ''
if (entries[0].tags) {
entries[0].tags
.split(',')
.forEach((a, index) =>
index === 0
? (tags += `#${toPascalCase(a)}`)
: (tags += ` #${toPascalCase(a)}`)
)
tags += ` ${TAGS[service]}`
} else {
tags = TAGS[service]
}
existingContent[service].push(entries[0].id)
createMastoPost(`${entries[0].title} ${entries[0].link} ${tags}`)
await fetch(`https://api.github.com/gists/${GIST_ID_SYNDICATION_CACHE}`, {
method: 'PATCH',
headers: {
Authorization: `Bearer ${TOKEN_CORYDDEV_GISTS}`,
'Content-Type': 'application/vnd.github+json',
},
body: JSON.stringify({
gist_id: GIST_ID_SYNDICATION_CACHE,
files: {
'syndication-cache.json': {
content: JSON.stringify(existingContent),
},
},
}),
})
.then((response) => response.json())
.catch((err) => console.log(err))
}
}
}
}
```
We start off with an optional `init` parameter that can be passed into our `syndicate` function to hydrate our syndication cache — the structure of this cache is essentially `SERVICE_KEY: string[]`, where `string[]` contains RSS post IDs. Now, given that Vercel is intended as front end hosting, I needed a reasonably simple and reliable way to host a small JSON object. I didn't want to involve a full-fledged database or storage solution and wasn't terribly interested in dealing with S3 or B2 for this purpose, so I went with a "secret" GitHub gist[^1] and leveraged the GitHub API for storage. At each step of the [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) process in this script we make a call to the GitHub API using a token for authentication, deal with the returned JSON and go on our merry way.
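Concretely, the cache shape and the `CLEAN_OBJECT` initialization can be sketched like this (a hypothetical, typed restatement of the inline logic above — the type alias and function name are mine):

```typescript
// Service key mapped to the RSS entry IDs already posted.
type SyndicationCache = Record<string, string[]>

// Builds an empty cache from the configured services, as CLEAN_OBJECT does.
export function cleanCache(services: Record<string, string>): SyndicationCache {
  const cache: SyndicationCache = {}
  for (const service of Object.keys(services)) cache[service] = []
  return cache
}
```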
Once the cache is hydrated the script will check the feeds available in `lib/syndicate/config.ts` and post the most recent item if it does not exist in the cache and then add it to said cache. The configured services are simply:
```typescript
export const SERVICES = {
'coryd.dev': 'https://coryd.dev/feed.xml',
glass: 'https://glass.photo/coryd/rss',
letterboxd: 'https://letterboxd.com/cdme/rss/',
}
```
As we iterate through this object we also attach tags specific to each service using an object shaped exactly like `SERVICES` in `config.ts`:
```typescript
export const TAGS = {
'coryd.dev': '#Blog',
glass: '#Photo #Glass',
letterboxd: '#Movie #Letterboxd',
}
```
This is partly for discovery and partly a consistent way for folks to filter my automated nonsense should they so choose. The formats of Glass and Letterboxd posts are consistent, and so are their tags. For posts from my site (like this one 👋🏻) I start with `#Blog` and have also modified the structure of my RSS feed to expose the tags I add to each post. The feed is generated by a script that runs at build time called `generate-rss.ts`, which looks like:
```typescript
import { escape } from '@/lib/utils/htmlEscaper'
import siteMetadata from '@/data/siteMetadata'
import { PostFrontMatter } from 'types/PostFrontMatter'
const generateRssItem = (post: PostFrontMatter) => `
<item>
<guid>${siteMetadata.siteUrl}/blog/${post.slug}</guid>
<title>${escape(post.title)}</title>
<link>${siteMetadata.siteUrl}/blog/${post.slug}</link>
${post.summary ? `<description>${escape(post.summary)}</description>` : ''}
<pubDate>${new Date(post.date).toUTCString()}</pubDate>
<author>${siteMetadata.email} (${siteMetadata.author})</author>
${post.tags ? post.tags.map((t) => `<category>${t}</category>`).join('') : ''}
${post.tags ? `<cd:tags>${post.tags}</cd:tags>` : ''}
</item>
`
const generateRss = (posts: PostFrontMatter[], page = 'feed.xml') => `
<rss version="2.0"
xmlns:cd="https://coryd.dev/rss"
xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>${escape(siteMetadata.title)}</title>
<link>${siteMetadata.siteUrl}/blog</link>
<description>${escape(siteMetadata.description.default)}</description>
<language>${siteMetadata.language}</language>
<managingEditor>${siteMetadata.email} (${siteMetadata.author})</managingEditor>
<webMaster>${siteMetadata.email} (${siteMetadata.author})</webMaster>
<lastBuildDate>${new Date(posts[0].date).toUTCString()}</lastBuildDate>
<atom:link href="${
siteMetadata.siteUrl
}/${page}" rel="self" type="application/rss+xml"/>
${posts.map(generateRssItem).join('')}
</channel>
</rss>
`
export default generateRss
```
I've added a new namespace to the parent `<rss...>` tag called `cd`[^2]. The declaration points to a page at this site that (very) briefly explains its purpose. I then created a `<cd:tags>` field that exposes a comma-delimited list of post tags.
Back in `syndicate/index.ts`, this field is accessed when the RSS feed is parsed:
```typescript
const data = await extract(SERVICES[service], {
getExtraEntryFields: (feedEntry) => {
return {
tags: feedEntry['cd:tags'],
}
},
})
...
let tags = ''
if (entries[0].tags) {
entries[0].tags
.split(',')
.forEach((a, index) =>
index === 0
? (tags += `#${toPascalCase(a)}`)
: (tags += ` #${toPascalCase(a)}`)
)
tags += ` ${TAGS[service]}`
} else {
tags = TAGS[service]
}
```
Tags get transformed to Pascal case, prepended with `#` and sent off to be posted to Mastodon along with the static service-specific tags.
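The Pascal-case transform can be sketched as follows (an assumed implementation; the real `toPascalCase` helper lives elsewhere in the repo):

```typescript
// Assumed implementation: split on whitespace, hyphens and underscores,
// capitalize each word and join without separators.
const toPascalCase = (input: string): string =>
  input
    .trim()
    .split(/[\s_-]+/)
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
    .join('')
```

So a feed tag like `web development` becomes `#WebDevelopment` once prefixed.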
The function that posts content to Mastodon is as simple as the following:
```typescript
import { MASTODON_INSTANCE } from './config'
const KEY = process.env.API_KEY_MASTODON
const createMastoPost = async (content: string) => {
const formData = new FormData()
formData.append('status', content)
const res = await fetch(`${MASTODON_INSTANCE}/api/v1/statuses`, {
method: 'POST',
headers: {
Accept: 'application/json',
Authorization: `Bearer ${KEY}`,
},
body: formData,
})
return res.json()
}
export default createMastoPost
```
Back at GitHub, this is all kicked off every hour on the hour using the following workflow:
```yaml
name: scheduled-cron-job
on:
schedule:
- cron: '0 * * * *'
jobs:
cron:
runs-on: ubuntu-latest
steps:
- name: scheduled-cron-job
run: |
curl -X POST 'https://coryd.dev/api/syndicate' \
-H 'Authorization: Bearer ${{ secrets.VERCEL_SYNDICATE_KEY }}'
```
Now, as I post things elsewhere, they'll make their way back to Mastodon with a simple title, link and tag set. Read them if you'd like, or filter them out altogether.
[^1]: It's secret inasmuch as it's obscured and, hence, not secured (which is also why `syndicate.ts` includes the gist ID directly) — it's all public post IDs, so peruse as one sees fit.
[^2]: Not very creative, I know.


@ -0,0 +1,363 @@
---
title: 'Building a now page using Next.js and social APIs'
date: 2023-02-20
draft: false
tags: ['nextjs', 'web development', 'react', 'api']
summary: 'A rundown of how I developed my now page using Next.js and a variety of social APIs.'
---
With my personal site now sitting at Vercel and written in Next.js, I decided to rework my [now](https://coryd.dev/now) page by leveraging a variety of social APIs. I kicked things off by looking through the platforms I use regularly and tracking down those that provide either API access or RSS feeds. For those with APIs, I wrote code to access my data via said APIs; for those with feeds only, I've leveraged [@extractus/feed-extractor](https://www.npmjs.com/package/@extractus/feed-extractor) to transform them into JSON responses.<!-- excerpt -->
The `/now` template in my `pages` directory looks like the following:
```jsx
import siteMetadata from '@/data/siteMetadata'
import loadNowData from '@/lib/now'
import { useJson } from '@/hooks/useJson'
import Link from 'next/link'
import { PageSEO } from '@/components/SEO'
import { Spin } from '@/components/Loading'
import {
MapPinIcon,
CodeBracketIcon,
MegaphoneIcon,
CommandLineIcon,
} from '@heroicons/react/24/solid'
import Status from '@/components/Status'
import Albums from '@/components/media/Albums'
import Artists from '@/components/media/Artists'
import Reading from '@/components/media/Reading'
import Movies from '@/components/media/Movies'
import TV from '@/components/media/TV'
const env = process.env.NODE_ENV
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
export async function getStaticProps() {
return {
props: await loadNowData('status,artists,albums,books,movies,tv'),
revalidate: 3600,
}
}
export default function Now(props) {
const { response, error } = useJson(`${host}/api/now`, props)
const { status, artists, albums, books, movies, tv } = response
if (error) return null
if (!response) return <Spin className="my-2 flex justify-center" />
return (
<>
<PageSEO
title={`Now - ${siteMetadata.author}`}
description={siteMetadata.description.now}
/>
<div className="divide-y divide-gray-200 dark:divide-gray-700">
<div className="space-y-2 pt-6 pb-8 md:space-y-5">
<h1 className="text-3xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-4xl sm:leading-10 md:text-6xl md:leading-14">
Now
</h1>
</div>
<div className="pt-12">
<h3 className="text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-2xl sm:leading-10 md:text-4xl md:leading-14">
Currently
</h3>
<div className="pl-5 md:pl-10">
<Status status={status} />
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
<MapPinIcon className="mr-1 inline h-6 w-6" />
Living in Camarillo, California with my beautiful family, 4 rescue dogs and
a guinea pig.
</p>
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
<CodeBracketIcon className="mr-1 inline h-6 w-6" />
Working at <Link
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
href="https://hashicorp.com"
target="_blank"
rel="noopener noreferrer"
>
HashiCorp
</Link>
</p>
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
<MegaphoneIcon className="mr-1 inline h-6 w-6" />
Rooting for the{` `}
<Link
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
href="https://lakers.com"
target="_blank"
rel="noopener noreferrer"
>
Lakers
</Link>
, for better or worse.
</p>
</div>
<h3 className="pt-6 text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-2xl sm:leading-10 md:text-4xl md:leading-14">
Making
</h3>
<div className="pl-5 md:pl-10">
<p className="mt-2 text-lg leading-7 text-gray-500 dark:text-gray-100">
<CommandLineIcon className="mr-1 inline h-6 w-6" />
Hacking away on random projects like this page, my <Link
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
href="/blog"
passHref
>
blog
</Link> and whatever else I can find time for.
</p>
</div>
<Artists artists={artists} />
<Albums albums={albums} />
<Reading books={books} />
<Movies movies={movies} />
<TV tv={tv} />
<p className="pt-8 text-center text-xs text-gray-900 dark:text-gray-100">
(This is a{' '}
<Link
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
href="https://nownownow.com/about"
target="_blank"
rel="noopener noreferrer"
>
now page
</Link>
, and if you have your own site, <Link
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
href="https://nownownow.com/about"
target="_blank"
rel="noopener noreferrer"
>
you should make one, too
</Link>
.)
</p>
</div>
</div>
</>
)
}
```
You'll see that the top section is largely static, with text styled using Tailwind and associated icons from the [Hero Icons](https://heroicons.com) package. We're also exporting `getStaticProps`, revalidated every hour, which calls a method exposed from my `lib` directory called `loadNowData`. `loadNowData` takes a comma-delimited string as an argument indicating which properties I want returned in the composed object[^1]. The method looks like this[^2]:
```typescript
import { extract } from '@extractus/feed-extractor'
import siteMetadata from '@/data/siteMetadata'
import { Albums, Artists, Status, TransformedRss } from '@/types/api'
import { Tracks } from '@/types/api/tracks'
export default async function loadNowData(endpoints?: string) {
const selectedEndpoints = endpoints?.split(',') || null
const TV_KEY = process.env.API_KEY_TRAKT
const MUSIC_KEY = process.env.API_KEY_LASTFM
const env = process.env.NODE_ENV
let host = siteMetadata.siteUrl
if (env === 'development') host = 'http://localhost:3000'
let statusJson = null
let artistsJson = null
let albumsJson = null
let booksJson = null
let moviesJson = null
let tvJson = null
let currentTrackJson = null
// status
if ((endpoints && selectedEndpoints.includes('status')) || !endpoints) {
const statusUrl = 'https://api.omg.lol/address/cory/statuses/'
statusJson = await fetch(statusUrl)
.then((response) => response.json())
.catch((error) => {
console.log(error)
return {}
})
}
// artists
if ((endpoints && selectedEndpoints.includes('artists')) || !endpoints) {
const artistsUrl = `http://ws.audioscrobbler.com/2.0/?method=user.gettopartists&user=cdme_&api_key=${MUSIC_KEY}&limit=8&format=json&period=7day`
artistsJson = await fetch(artistsUrl)
.then((response) => response.json())
.catch((error) => {
console.log(error)
return {}
})
}
// albums
if ((endpoints && selectedEndpoints.includes('albums')) || !endpoints) {
const albumsUrl = `http://ws.audioscrobbler.com/2.0/?method=user.gettopalbums&user=cdme_&api_key=${MUSIC_KEY}&limit=8&format=json&period=7day`
albumsJson = await fetch(albumsUrl)
.then((response) => response.json())
.catch((error) => {
console.log(error)
return {}
})
}
// books
if ((endpoints && selectedEndpoints.includes('books')) || !endpoints) {
const booksUrl = `${host}/feeds/books`
booksJson = await extract(booksUrl).catch((error) => {
console.log(error)
return {}
})
}
// movies
if ((endpoints && selectedEndpoints.includes('movies')) || !endpoints) {
const moviesUrl = `${host}/feeds/movies`
moviesJson = await extract(moviesUrl).catch((error) => {
console.log(error)
return {}
})
moviesJson.entries = moviesJson.entries?.slice(0, 5) ?? []
}
// tv
if ((endpoints && selectedEndpoints.includes('tv')) || !endpoints) {
const tvUrl = `${host}/feeds/tv?slurm=${TV_KEY}`
tvJson = await extract(tvUrl).catch((error) => {
console.log(error)
return {}
})
tvJson.entries = tvJson.entries?.slice(0, 5) ?? []
}
// current track
if ((endpoints && selectedEndpoints.includes('currentTrack')) || !endpoints) {
const currentTrackUrl = `http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks&user=cdme_&api_key=${MUSIC_KEY}&limit=1&format=json&period=7day`
currentTrackJson = await fetch(currentTrackUrl)
.then((response) => response.json())
.catch((error) => {
console.log(error)
return {}
})
}
const res: {
status?: Status
artists?: Artists
albums?: Albums
books?: TransformedRss
movies?: TransformedRss
tv?: TransformedRss
currentTrack?: Tracks
} = {}
if (statusJson) res.status = statusJson.response?.statuses?.[0]
if (artistsJson) res.artists = artistsJson?.topartists.artist
if (albumsJson) res.albums = albumsJson?.topalbums.album
if (booksJson) res.books = booksJson?.entries
if (moviesJson) res.movies = moviesJson?.entries
if (tvJson) res.tv = tvJson?.entries
if (currentTrackJson) res.currentTrack = currentTrackJson?.recenttracks?.track?.[0]
// unified response
return res
}
```
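The guard repeated before each fetch above reduces to a small predicate (a hypothetical refactor, not code from the repo): load a section when it's requested explicitly, or when no filter was passed at all.

```typescript
// Hypothetical refactor of loadNowData's repeated condition:
// `(endpoints && selectedEndpoints.includes(name)) || !endpoints`
const shouldLoad = (endpoints: string | undefined, name: string): boolean =>
  !endpoints || endpoints.split(',').includes(name)
```

Each `if` block in `loadNowData` could then read `if (shouldLoad(endpoints, 'books')) { ... }`.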
The individual media components of the now page are simple and presentational; `Albums.tsx`, for example:
```jsx
import Cover from '@/components/media/display/Cover'
import { Spin } from '@/components/Loading'
import { Album } from '@/types/api'
const Albums = (props: { albums: Album[] }) => {
const { albums } = props
if (!albums) return <Spin className="my-12 flex justify-center" />
return (
<>
<h3 className="pt-4 pb-4 text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 sm:text-2xl sm:leading-10 md:text-4xl md:leading-14">
Listening: albums
</h3>
<div className="grid grid-cols-2 gap-2 md:grid-cols-4">
{albums?.map((album) => (
<Cover key={album.mbid} media={album} type="album" />
))}
</div>
</>
)
}
export default Albums
```
This component and `Artists.tsx` leverage `Cover.tsx`, which renders music-related elements:
```tsx
import { Media } from '@/types/api'
import ImageWithFallback from '@/components/ImageWithFallback'
import Link from 'next/link'
import { ALBUM_DENYLIST } from '@/utils/constants'
const Cover = (props: { media: Media; type: 'artist' | 'album' }) => {
const { media, type } = props
const image = (media: Media) => {
let img = ''
if (type === 'album')
img = !ALBUM_DENYLIST.includes(media.name.replace(/\s+/g, '-').toLowerCase())
? media.image[media.image.length - 1]['#text']
: `/media/artists/${media.name.replace(/\s+/g, '-').toLowerCase()}.jpg`
if (type === 'artist')
img = `/media/artists/${media.name.replace(/\s+/g, '-').toLowerCase()}.jpg`
return img
}
return (
<Link
className="text-primary-500 hover:text-primary-600 dark:hover:text-primary-400"
href={media.url}
target="_blank"
rel="noopener noreferrer"
title={media.name}
>
<div className="relative">
<div className="absolute left-0 top-0 h-full w-full rounded-lg border border-primary-500 bg-cover-gradient dark:border-gray-500"></div>
<div className="absolute left-1 bottom-2 drop-shadow-md">
<div className="px-1 text-xs font-bold text-white">{media.name}</div>
<div className="px-1 text-xs text-white">
{type === 'album' ? media.artist.name : `${media.playcount} plays`}
</div>
</div>
<ImageWithFallback
src={image(media)}
alt={media.name}
className="rounded-lg"
width="350"
height="350"
/>
</div>
</Link>
)
}
export default Cover
```
All of the components for this page [can be viewed on GitHub](https://github.com/cdransf/coryd.dev/tree/main/components/media). Each one consumes a property from the `loadNowData` response and renders it to the page. The page is also periodically revalidated via an API route that simply calls this same method:
```ts
import loadNowData from '@/lib/now'
export default async function handler(req, res) {
res.setHeader('Cache-Control', 's-maxage=3600, stale-while-revalidate')
const endpoints = req.query.endpoints
const response = await loadNowData(endpoints)
res.json(response)
}
```
And, with all of that in place, we have a lightly trafficked page that updates itself (with a few exceptions) as I go about my habits of using Last.fm, Trakt, Letterboxd, Oku and so forth.
[^1]: I know about GraphQL, but we're just going to deal with plain old fetch calls here.
[^2]: It's also leveraged on the index view of my site to fetch my status, currently playing track and the books I'm currently reading.


@ -0,0 +1,13 @@
---
title: Clearing mod_pagespeed cache
date: '2017-02-20'
draft: false
tags: ['apache', 'development']
summary: I use mod_pagespeed on this server to help speed up asset delivery and force optimization best practices across all of the sites I host.
---
I use [mod_pagespeed](https://github.com/pagespeed/mod_pagespeed) on this server to help speed up asset delivery and force optimization best practices across all of the sites I host.<!-- excerpt --> Occasionally, during deployments, it's helpful to clear the module cache. Doing so is as simple as the following:
```bash
touch /var/cache/mod_pagespeed/cache.flush
```


@ -0,0 +1,211 @@
---
title: 'Adding client side webmentions to my Next.js blog'
date: 2023-02-18
draft: false
tags: ['nextjs', 'react', 'web development', 'webmentions', 'indie web']
summary: 'A quick rundown of the steps I took to add webmentions to my Next.js blog.'
---
The latest iteration of my website is built on [Next.js](https://nextjs.org), specifically [Timothy Lin](https://github.com/timlrx)'s wonderful [Tailwind/Next.js starter blog](https://github.com/timlrx/tailwind-nextjs-starter-blog).<!-- excerpt --> I've modified it quite a bit, altering the color scheme, dropping components like analytics, comments and a few others while also building out some new pages (like my [now page](https://coryd.dev/now)). As part of this process I wanted to add support for webmentions to the template, integrating mentions from Mastodon, Medium.com and other available sources.
To kick this off you'll need to log in and establish an account with [webmention.io](https://webmention.io) and [Bridgy](https://brid.gy). The former provides you with a pair of meta tags that collect webmentions; the latter connects your site to social media.[^1]
Once you've added the appropriate tags from webmention.io, connected your desired accounts to Bridgy and received some mentions on these sites, you should be able to access said mentions via their API. For my purposes (and yours should you choose to take the same approach), this looks like the following Next.js API route:
```typescript
import loadWebmentions from '@/lib/webmentions'
export default async function handler(req, res) {
const target = req.query.target
const response = await loadWebmentions(target)
res.json(response)
}
```
You can see my mentions at the live route [here](https://coryd.dev/api/webmentions).
I've elected to render mentions of my posts (boosts, in Mastodon's parlance), likes and comments. For boosts I render the count, for likes I render the author's avatar and for comments I render the comment in full. The component that handles this looks like the following:
```jsx
import siteMetadata from '@/data/siteMetadata'
import { Heart, Rocket } from '@/components/icons'
import { Spin } from '@/components/Loading'
import { useRouter } from 'next/router'
import { useJson } from '@/hooks/useJson'
import Link from 'next/link'
import Image from 'next/image'
import { formatDate } from '@/utils/formatters'
const WebmentionsCore = () => {
const { asPath } = useRouter()
const { response, error } = useJson(`/api/webmentions?target=${siteMetadata.siteUrl}${asPath}`)
const webmentions = response?.children
const hasLikes =
webmentions?.filter((mention) => mention['wm-property'] === 'like-of').length > 0
const hasComments =
webmentions?.filter((mention) => mention['wm-property'] === 'in-reply-to').length > 0
const boostsCount = webmentions?.filter(
(mention) =>
mention['wm-property'] === 'repost-of' || mention['wm-property'] === 'mention-of'
).length
const hasBoosts = boostsCount > 0
const hasMention = hasLikes || hasComments || hasBoosts
if (error) return null
if (!response) return <Spin className="my-2 flex justify-center" />
const Boosts = () => {
return (
<div className="flex flex-row items-center">
<div className="mr-2 h-5 w-5">
<Rocket />
</div>
{` `}
<span className="text-sm">{boostsCount}</span>
</div>
)
}
const Likes = () => (
<>
<div className="flex flex-row items-center">
<div className="mr-2 h-5 w-5">
<Heart />
</div>
<ul className="ml-2 flex flex-row">
{webmentions?.map((mention) => {
if (mention['wm-property'] === 'like-of')
return (
<li key={mention['wm-id']} className="-ml-2">
<Link
href={mention.url}
target="_blank"
rel="noopener noreferrer"
>
<Image
className="h-10 w-10 rounded-full border border-primary-500 dark:border-gray-500"
src={mention.author.photo}
alt={mention.author.name}
width="40"
height="40"
/>
</Link>
</li>
)
})}
</ul>
</div>
</>
)
const Comments = () => {
return (
<>
{webmentions?.map((mention) => {
if (mention['wm-property'] === 'in-reply-to') {
return (
<Link
className="border-bottom flex flex-row items-center border-gray-100 pb-4"
key={mention['wm-id']}
href={mention.url}
target="_blank"
rel="noopener noreferrer"
>
<Image
className="h-12 w-12 rounded-full border border-primary-500 dark:border-gray-500"
src={mention.author.photo}
alt={mention.author.name}
width="48"
height="48"
/>
<div className="ml-3">
<p className="text-sm">{mention.content?.text}</p>
<p className="mt-1 text-xs">{formatDate(mention.published)}</p>
</div>
</Link>
)
}
})}
</>
)
}
return (
<>
{hasMention ? (
<div className="text-gray-500 dark:text-gray-100">
<h4 className="pt-3 text-xl font-extrabold leading-9 tracking-tight text-gray-900 dark:text-gray-100 md:text-2xl md:leading-10 ">
Webmentions
</h4>
{hasBoosts ? (
<div className="pt-2 pb-4">
<Boosts />
</div>
) : null}
{hasLikes ? (
<div className="pt-2 pb-4">
<Likes />
</div>
) : null}
{hasComments ? (
<div className="pt-2 pb-4">
<Comments />
</div>
) : null}
</div>
) : null}
</>
)
}
export default WebmentionsCore
```
We derive the post URL by concatenating the fixed site URL from my site metadata with the URI from Next.js' router, then pass the result as the API path to my `useJson` hook, which wraps `useSWR`[^2]:
```typescript
import { useEffect, useState } from 'react'
import useSWR from 'swr'
export const useJson = (url: string, props?: any) => {
const [response, setResponse] = useState<any>({})
const fetcher = (url: string) => fetch(url).then((res) => res.json())
const { data, error } = useSWR(url, fetcher, { fallbackData: props, refreshInterval: 30000 })
useEffect(() => {
setResponse(data)
}, [data, setResponse])
return {
response,
error,
}
}
```
The `target` param narrows the returned mentions to those pertinent to the current post. Once we've received the appropriate response from the service, we evaluate the data to determine what types of mentions we have, construct JSX components to display them and conditionally render them based on the presence of the appropriate mention data.
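The type checks the component performs inline can be expressed as a small helper (hypothetical; the component above filters the array directly):

```typescript
// Hypothetical helper mirroring the component's inline filters.
// 'wm-property' values come from webmention.io's response format.
type Mention = { 'wm-property': string }

const countByProperty = (mentions: Mention[], properties: string[]): number =>
  mentions.filter((m) => properties.includes(m['wm-property'])).length
```

`hasBoosts`, for instance, is then just `countByProperty(webmentions, ['repost-of', 'mention-of']) > 0`.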
The `WebmentionsCore` component is dynamically loaded into each post using the following parent component:
```jsx
import dynamic from 'next/dynamic'
import { Spin } from '@/components/Loading'
const Webmentions = dynamic(() => import('@/components/webmentions/WebmentionsCore'), {
ssr: false,
loading: () => <Spin className="my-2 flex justify-center" />,
})
export default Webmentions
```
The final display looks like this:
<img src="https://files.coryd.dev/v/NG8lHj24OsJilx7QuxWO+" alt="Example webmentions" style="width:100%;height:auto;margin:.5em 0" />
[^1]: For my purposes, social media is GitHub, Mastodon and Medium. I've used the rest at various points and no longer have an interest in them for myriad reasons.
[^2]: I've discussed this all a bit more in [this post](https://coryd.dev/blog/simple-api-fetch-hooks-with-swr).


@ -0,0 +1,35 @@
---
date: '2021-04-01'
title: 'Digital privacy tools'
draft: false
tags: ['tech', 'privacy']
summary: 'This is a helpful, albeit basic, guide to online privacy tools. In addition to the tools cited, I would recommend the following.'
---
**[The New York Times:](https://www.nytimes.com/2021/03/28/style/tools-protect-your-digital-privacy.html)**
> Everything you do online — from browsing to shopping to using social networks — is tracked, typically as behavioral or advertising data. But browser extensions are simple, generally free add-ons that you can use to slow down or break this type of data collection, without completely ruining your experience of using the internet.
This is a helpful, albeit basic, guide to online privacy tools.<!-- excerpt --> In addition to the tools cited, I would recommend the following:
**Private email providers**
- [Fastmail](https://fastmail.com)
- [mailbox.org](https://mailbox.org)
- [Proton Mail](https://protonmail.com)
Ubiquitous free email providers profit by mining user data (whether humans are involved or not). Your inbox acts as a key to your digital life and you should avoid using any provider that monetizes its contents.
**Adblockers**
- [1Blocker](https://1blocker.com)
- [Better](https://better.fyi)
These are both lightweight, independently developed ad and tracker blockers. 1Blocker is considerably more configurable, but could be daunting to new users (the defaults offer a nice balance, though).
**DNS providers**
- [NextDNS](https://nextdns.io)
- [Cloudflare 1.1.1.1](https://www.cloudflare.com/learning/dns/what-is-1.1.1.1)
I use NextDNS on my home network for basic security and have a more restrictive configuration that heavily filters ads at the DNS level on specific devices. Cloudflare's 1.1.1.1 service doesn't offer the same features, but is still preferable to Google's offering or your ISP's default.


@ -0,0 +1,308 @@
---
date: '2023-02-17'
title: 'Workflows: handling inbound email on Fastmail with regular expressions (now featuring ChatGPT)'
draft: false
tags: ['email', 'fastmail', 'regular expressions', 'workflows', 'chatgpt']
summary: "I've been using Fastmail for years now and have explored a number of different approaches to handling mail."
---
I've been using Fastmail for years now and have explored a number of different approaches to handling mail. I've approached it by creating rules targeting lists of top-level domains, I've gone with no rules at all and a heavy-handed approach to unsubscribing from messages (operating under the idea that _everything_ warrants being seen and triaged) and I've even used HEY[^1].<!-- excerpt -->
For now, I've approached filtering my mail by applying regular expressions to reasonably broad categories of incoming mail[^2]. My thinking is that this approach will scale better over the long term, applying heuristics to common phrases and patterns in incoming mail without the need to target senders on a per-address or per-domain basis.
<img src="https://files.coryd.dev/j/Jd6NQcAVD3oU4gkgZMpD+" alt="A diagram of my Fastmail workflow" style="width:100%;height:auto;margin:.5em 0" />
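The idea can be illustrated in TypeScript (illustrative only; Fastmail evaluates these rules server-side, and the category names and patterns here are simplified stand-ins for the real rules below):

```typescript
// Illustrative only: subject-line classification with the same flavor of
// patterns as my Fastmail rules. Category names match my folders.
const RULES: Record<string, RegExp> = {
  Financial: /\b(receipt|invoice|statement|payment|subscription)\b/i,
  Deliveries: /\b(shipment|package|tracking|delivered)\b/i,
}

// Returns the first matching category, or null when no rule matches
// (at which point the message would simply land in the inbox).
const categorize = (subject: string): string | null =>
  Object.entries(RULES).find(([, pattern]) => pattern.test(subject))?.[0] ?? null
```

The real rules are far broader, but each one boils down to this: a regular expression over a field, and a folder to file the match into.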
## Alias-specific rules
I have four aliases that I regularly provide to different services. One is for newsletters and routes them to [Readwise's Reader app](https://readwise.io/read), another routes directly to my saved articles in the same app, another routes different messages to my [Things](https://culturedcode.com/things/) inbox and a final one serves as the recovery email on my grandfather's accounts (in the event anything goes awry).
These work by checking that the `To/CC/BCC` matches the appropriate alias before filing them off to `Archive/Newsletters`, `Archive/Saves` or `Notifications`, respectively. These folders are configured to auto-purge their contents at regular intervals as they are typically consumed in the context of the application that they're forwarded to (and are only filed into folders for reference in the event something goes wrong in said applications).
## A quick failsafe
In the event I've failed to tune a regular expression properly or an actual person triggers a match I have a rule that is executed after the aforementioned alias-specific rules that stops all rule evaluations for _any_ address in my contacts.
**Update:** I've run every regular expression and glob pattern I apply to my messages through ChatGPT to see if it could simplify, combine and otherwise improve them (namely by reducing false positives). This has worked quite well, outside of the time required to coax ChatGPT toward the best possible answer. Further, my deliveries rule that forwards to Parcel now requires both a subject and a body match before forwarding.
[I also have a rule containing regular expressions that skips evaluations for login PIN codes, meeting/appointment reminders and common security notices](https://pastes.coryd.dev/mail-regexes-alerts/markup):
```json
{
"conditions": [
{
"lookHow": "regexp",
"lookFor": "(?i)\\b(PIN|Verify|Verification|Confirm|One-Time|Single(-|\\s)Use)\\b.*?(passcode|number|code.*$)",
"lookIn": "subject"
},
{
"lookHow": "regexp",
"lookFor": "(?i)^.*upcoming (appointment|visit).*",
"lookIn": "subject"
},
{
"lookFor": "(?i)^.*new.*(sign(in|-in|ed)|(log(in|-in|ged)))",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookFor": "(?i)^.*(meeting|visit|appointment|event).*\\b(reminder|notification)",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookFor": "(?i)^.*verify.*(device|email|phone)",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookHow": "regexp",
"lookFor": "(?i)^.*Apple.*(ID was used to sign in)",
"lookIn": "subject"
},
{
"lookFor": "(?i)^.*(computer|phone|device).*(added)",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookHow": "regexp",
"lookFor": "(?i)^2FA.*(turned on)",
"lookIn": "subject"
},
{
"lookIn": "subject",
"lookFor": "(?i)^.*confirm.*(you)",
"lookHow": "regexp"
},
{
"lookFor": "(?i)^.*you.*((log|sign)\\s?-?\\s?in).*$",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookHow": "is",
"lookFor": "notifications@savvycal.com",
"lookIn": "fromEmail"
},
{
"lookIn": "subject",
"lookFor": "\\b(?:RSVP|invitation|event|attend)\\b",
"lookHow": "regexp"
}
  ]
}
```
## Mapping categories as folders
I've tailored these rules to align with folders on a per topic basis. I have a broad `Financial` folder for things like receipts, bank statements and bills. That folder contains a few granular subfolders like `Deliveries`, `Media`, `Medical`, `Promotions` and so forth. All multi-step rules are set to filter messages when `any` of the tabled criteria matches.
The top level `Financial` rule [looks like this](https://pastes.coryd.dev/mail-regexes-financial/markup).
```json
"conditions": [
{
"lookFor": "([Ee]quifax.*$|[Ee]xperian.*$|[Tt]ransunion.*$|[Aa]mazon[Kk]ids.*$|[Vv]isa[Pp]repaid[Pp]rocessing.*$|americanexpress.*$|paddle.*$|instacart.*$|^.*discover.*$|^.*aaa.*$)",
"lookIn": "fromEmail",
"lookHow": "regexp"
},
{
"lookFor": "([Gg]andi.*$|[Hh]over.*$|[Tt]ucows.*$|[Gg]o[Dd]addy.*$|[Nn]ame[Cc]heap.*$|[Vv]enmo.*$|[Pp]ay[Pp]al.*$|[Aa][Cc][Ii]payonline.*$|[Uu]se[Ff]athom.*$)",
"lookIn": "fromEmail",
"lookHow": "regexp"
},
{
"lookHow": "regexp",
"lookFor": "(?i)you(?:r)?[\\s-]*(?:pre[\\s-]?order|pre[\\s-]?order(?:ed))",
"lookIn": "body"
},
{
"lookIn": "toCcBccName",
"lookFor": "*[Aa][Pp][Pp][Ll][Ee] [Cc][Aa][Rr][Dd]*[Ss][Uu][Pp][Pp][Oo][Rr][Tt]*",
"lookHow": "glob"
},
{
"lookHow": "regexp",
"lookIn": "subject",
"lookFor": "\\b(?i)(receipt|bill|invoice|transaction|statement|payment|order|subscription|authorized|booking|renew(al|ing)?|expir(e|ed|ing)?|deposit|withdrawl|purchased)\\b.*"
},
{
"lookFor": "(?i)\\b(receipt|bill|invoice|transaction|statement|payment|order|subscription|authorized|booking|renew(al|ing)?|expir(e|ed|ing)?|deposit|withdrawl|purchased|(itunes|apple) store|credit (score|report)|manage (account|loan))\\b.*",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookHow": "regexp",
"lookFor": "(?i)\\b(gift (card|certificate)|zelle|new plan|autopay|reward certificate)\\b.*",
"lookIn": "subject"
}
],
```
`Deliveries` follow a similar pattern with rule sets intended to capture messages with package tracking information or other details. I kickstarted this rule by, naturally, referencing [this answer from StackOverflow](https://stackoverflow.com/a/5024011).
All of the regular expressions contained in this answer are matched against the `Body` of inbound messages before being forwarded to [Parcel Email](https://parcelapp.net/help/parcel-email.html)[^3]. These rules are supplemented by a few edge case rules targeted at the `Subject` field:
```json
"conditions": [
{
"lookHow": "regexp",
"lookIn": "body",
"lookFor": "\\b(?:1Z[\\dA-Z]{16}|[\\d]{20}|[\\d]{22}|[\\d]{26}|[\\d]{15}|E\\D{1}[\\d]{9}|[\\d]{9}[ ]?[\\d]{4})\\b"
},
{
"lookIn": "subject",
"lookHow": "regexp",
"lookFor": "^.*[Aa] shipment (from|to).*([Ww]as|[Hh]as|is on the way).*?$"
}
],
```
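These two patterns can be exercised the same way; a sketch with invented sample messages (Python's `re` standing in for Fastmail's engine):

```python
import re

# Body pattern for tracking numbers (UPS 1Z numbers, plain digit runs, etc.)
tracking = re.compile(r"\b(?:1Z[\dA-Z]{16}|[\d]{20}|[\d]{22}|[\d]{26}|[\d]{15}|E\D{1}[\d]{9}|[\d]{9}[ ]?[\d]{4})\b")

# Subject pattern for shipment notices
shipment = re.compile(r"^.*[Aa] shipment (from|to).*([Ww]as|[Hh]as|is on the way).*?$")

assert tracking.search("Your package 1Z999AA10123456784 has shipped")
assert shipment.search("A shipment from Example Store is on the way")
assert not tracking.search("Order #12345 confirmed")
```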
Finally, I have a rule intended to catch anything that falls through the cracks[^4]:
```json
"conditions": [
{
"lookFor": "usps|fedex|narvar|shipment-tracking|getconvey",
"lookHow": "regexp",
"lookIn": "fromEmail"
},
{
"lookFor": "?(ed*x delivery manager|*ed*x.com|tracking*updates*)",
"lookHow": "glob",
"lookIn": "fromName"
},
{
"lookFor": "(?i)^.*package (has been?|was) delivered.*$",
"lookHow": "regexp",
"lookIn": "subject"
}
],
```
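A quick check of that catch-all subject pattern (sample subjects invented):

```python
import re

# Subject pattern from the catch-all rule above
delivered = re.compile(r"(?i)^.*package (has been?|was) delivered.*$")

assert delivered.search("Your package has been delivered!")
assert delivered.search("A package was delivered today")
assert not delivered.search("Your package is out for delivery")
```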
My `medical` and `media` rules follow a basic pattern that could be approximated using a per-line sender TLD match[^5]:
```json
"conditions": [
{
"lookFor": "^(?i:Disneyplus.*$|Netflix.*$|^.*hulu.*$|HBOmax.*$|MoviesAnywhere.*$|iTunes.*$|7digital.*$|Bandcamp.*$|Roku.*$|Plex.*$|Peacock.*$)",
"lookHow": "regexp",
"lookIn": "fromEmail"
}
],
```
I'd recommend paring this down to match whichever `media` and `medical` providers email you.
This pattern of filtering and filing continues for several additional categories.
**Financial/Tickets**
```json
"conditions": [
{
"lookFor": "\\b(?i)(concert|event|show|performance|ticket|admission|venue|registration)\\b",
"lookHow": "regexp",
"lookIn": "subject"
}
],
```
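One wrinkle when testing these locally: this pattern places `(?i)` mid-expression, which Fastmail evidently tolerates but which Python 3.11+ rejects outright. A sketch with the flag hoisted to `re.IGNORECASE` (sample subjects invented):

```python
import re

# Same word list as the rule above, with the inline (?i) hoisted to a compile flag
pattern = re.compile(
    r"\b(concert|event|show|performance|ticket|admission|venue|registration)\b",
    re.IGNORECASE,
)

assert pattern.search("Your Concert Is Coming Up")
assert not pattern.search("Your statement is ready")
```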
**Travel (non-forwarding)**
```json
"conditions": [
{
"lookHow": "regexp",
"lookFor": "\\b(?i)(hotel|reservation|booking|dining|restaurant|travel)(s)?( |-)?(confirmation|reservations?|bookings?|details)\\b",
"lookIn": "subject"
},
{
"lookFor": "\\b(?i)(uber|lyft|rideshare)(s)?( |-)?(receipt|confirmation|ride summary|your ride with)\\b",
"lookHow": "regexp",
"lookIn": "subject"
}
],
```
**Travel (forwarding)**
These are designed to capture confirmations sent by Southwest and are sent off to [Flighty](https://www.flightyapp.com) before being sorted.
```json
"conditions": [
{
"lookIn": "subject",
"lookHow": "regexp",
"lookFor": "\\b(?i)(flight|confirmation|you're going to).*\\b(reservation|on)\\b"
}
],
```
**Annoying customer support follow-ups**
```json
"conditions": [
{
"lookHow": "glob",
"lookFor": "*customer*?(are|uccess|upport)",
"lookIn": "fromName"
}
],
```
**[Promotional messages (that you haven't unsubscribed from)](https://pastes.coryd.dev/mail-regexes-promotions/markup)**
```json
"conditions": [
{
"lookHow": "regexp",
"lookIn": "fromEmail",
"lookFor": "(^.*store-news.*$|^.*axxess.*$)(\\b.*?|$)"
},
{
"lookFor": "^(?=.*\\b(?i)(final offer|limited time|last chance|black friday|cyber monday|holiday|christmas|free shipping|send (gift|present))\\b).*\\b(?i)(discount|save|\\d+% off|free)\\b",
"lookIn": "subject",
"lookHow": "regexp"
},
{
"lookIn": "body",
"lookFor": "\\b\\d{1,2}(?:\\.\\d+)?% off\\b",
"lookHow": "regexp"
},
{
"lookIn": "subject",
"lookFor": "\\b(?:new|updated|special|limited-time)\\s+(?:offers|deals|discounts|promotions|sales)\\b",
"lookHow": "regexp"
}
],
```
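The percent-off body pattern is easy to verify in isolation (sample copy invented):

```python
import re

# Body pattern from the third condition above
discount = re.compile(r"\b\d{1,2}(?:\.\d+)?% off\b")

assert discount.search("Take 20% off sitewide this weekend")
assert discount.search("Save 7.5% off your first order")
assert not discount.search("100% satisfaction guaranteed")
```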
**Social networking messages**
These I've left as a simple list wherein any message from an included top-level domain is filed away, as I don't belong to many social networks and they change fairly infrequently.
**DMARC notifications (depending on how you have your policy record configured)**
```json
"conditions": [
{
"lookIn": "subject",
"lookHow": "regexp",
"lookFor": "((^.*dmarc.*$)(\\b.*?|$))"
},
{
"lookIn": "fromEmail",
"lookHow": "regexp",
"lookFor": "((^.*dmarc.*$)(\\b.*?|$))"
}
],
```
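As written, this pattern simply does a case-sensitive match on anything containing `dmarc`; a quick sketch (sample addresses invented):

```python
import re

# Pattern applied to both the subject and fromEmail conditions above
dmarc = re.compile(r"((^.*dmarc.*$)(\b.*?|$))")

assert dmarc.search("noreply-dmarc-support@google.com")
assert not dmarc.search("billing@example.com")
```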
That covers _most_ of what I use to manage my mail (outside of anything particularly personal). I fully expect the regular expressions I'm using could stand to be refined and I plan on continuing to do just that. But, with that said, things have worked better than I expected so far and false positives/miscategorizations have been infrequent.
If you have any questions or suggestions I'm all ears. Feel free to [email me](mailto:hi@coryd.dev) or ping me on [Mastodon]().
[^1]: Before, well, _all that_.
[^2]: Fastmail has some helpful tips on regular expression rules [here](https://www.fastmail.help/hc/en-us/articles/360060591193-Rules-using-regular-expressions)
[^3]: Fun fact, this is, apparently, no longer being actively developed — presumably because email, as we all know, is an absolute pleasure to parse and deal with.
[^4]: This rule doesn't forward over to Parcel as it typically captures secondary notices that either don't contain or duplicate the original tracking info.
[^5]: I know, I called this inefficient earlier.
@ -0,0 +1,23 @@
---
title: 'Fixing Safari iCloud syncing'
date: '2022-05-28'
draft: false
tags: [apple, ios, macos]
summary: "Safari not syncing history, tabs or its landing page? Here's a fix."
---
I've been having an intermittent issue with Safari failing to sync any data via iCloud that you would normally expect — history, tabs, bookmarks and the landing page were all behaving independently despite iCloud syncing being enabled.<!-- excerpt -->
These steps fixed the issue, finally, on my devices:
1. Open a terminal and run `defaults write com.apple.Safari IncludeInternalDebugMenu 1`
2. Quit Safari
3. Open Safari, navigate to the new `Debug` menu and select `Sync iCloud History`
4. Run `defaults write com.apple.Safari IncludeInternalDebugMenu 0` to disable the `Debug` menu[^1]
5. Disable Safari in the iCloud settings of each of your devices
6. Reboot each of your devices
7. Enable Safari in the iCloud settings of each of your devices
Cross your fingers and hope for the best, but sync should settle down and start working again. I'd contend that none of these steps _should_ be necessary, but here we are.
[^1]: Unless you want to keep it.
@ -0,0 +1,93 @@
---
title: Generating a responsive CSS grid using Neat
date: '2016-07-24'
draft: false
tags: ['development', 'css', 'sass']
summary: I use a responsive grid system for this site (and a number of other projects) that's generated by pulling in Thoughtbot's Neat framework.
---
I use a responsive grid system for this site (and a number of other projects) that's generated by pulling in Thoughtbot's [Neat](http://neat.bourbon.io/) framework.<!-- excerpt --> To generate the framework for this grid, I've put together a simple SASS/SCSS mixin that looks like the following:
```scss
.grid {
&-main-container {
@include outer-container;
}
&-row {
@include row;
@include pad(0 10%);
@media only screen and (max-width: 640px) {
@include pad(0 10%);
}
&.collapse {
@media only screen and (max-width: 640px) {
@include pad(0);
}
}
.grid-row {
// collapse nested grid rows
@include pad(0);
}
}
$grid-columns: 12;
@for $i from 0 through $grid-columns {
&-columns-#{$i} {
@include span-columns($i);
}
&-columns-small-#{$i} {
@include span-columns($i);
@media only screen and (max-width: 640px) {
@include span-columns(12);
}
}
}
@for $i from 0 through $grid-columns {
&-shift-left-#{$i} {
@include shift(-$i);
}
&-shift-right-#{$i} {
@include shift($i);
}
@media only screen and (max-width: 640px) {
&-shift-left-#{$i},
&-shift-right-#{$i} {
@include shift(0);
}
}
}
}
```
To use the grid, simply drop it in as an import after including Neat. Once your SASS/SCSS files have been parsed, you'll end up with completed grid classes that will allow you to generate responsive markup for a page. For example:
```html
<div class="grid-main-container">
<div class="grid-row">
<div class="grid-columns-9">
<!-- Content -->
</div>
<div class="grid-columns-3">
<!-- Content -->
</div>
</div>
<!-- Columns in this row will collapse to the full screen width on small screens -->
<div class="grid-row">
<div class="grid-columns-small-9">
<!-- Content -->
</div>
<div class="grid-columns-small-3">
<!-- Content -->
</div>
</div>
</div>
```
@ -0,0 +1,76 @@
---
title: Leaving Google Apps for Fastmail
date: '2014-01-18'
draft: false
tags: ['email', 'fastmail', 'google']
summary: I recently began a process of re-evaluating the web services I use, the companies that provide them and where I store important data.
---
I recently began a process of re-evaluating the web services I use, the companies that provide them and where I store important data. I had used Google services extensively, with Gmail handling my email, my contacts synced through Google Contacts, calendars in Google Calendar and documents in Google Drive (I had used Google Reader extensively but switched to a [Fever](http://feedafever.com/ 'Fever Red hot. Well read.') installation following Reader's demise).<!-- excerpt --> While Google's services are world class, it became increasingly clear to me that it was not in my interest to store significant amounts of personal data with a company that has a financial interest in profiting from that information.
I wanted to replace the free services I was using with comparable services from companies whose interests were aligned with their users' (whose users were their customers, not advertisers) and who had a clear business model (they provide a service their users pay for).[^1]
**Enter Fastmail**
I explored several options for email hosting, with [Rackspace Email](http://www.rackspace.com/email-hosting/webmail/ 'Rackspace Email - Affordable Hosted Email Solution for Small Business'), [Hushmail](https://www.hushmail.com/ 'Hushmail - Free Email with Privacy') and [Hover - email](https://www.hover.com/email 'Hover - domain name and email management made simple') among the services that caught my attention. Ultimately, I landed on [FastMail](https://www.fastmail.com/?STKI=11917049 'FastMail: Fast, reliable email'). Fastmail is a reliable, IMAP email provider with extensive support for custom domains. Fastmail also has strong spam prevention and [flexible server side filtering](https://www.fastmail.com/help/managing_email_advanced_rules.html 'Email Filter Rules - Advanced Rules - Help with sieve').
I began the transition to Fastmail by using their [IMAP migration tool](https://www.fastmail.com/help/business_migrate.html 'Migrate existing accounts - Migrate existing accounts'). The migration process itself was relatively quick (given the volume of email in my account)[^2].
While your email is being migrated you should take the time to [set up the aliases associated with your Fastmail account](https://www.fastmail.com/help/quick_tours_setting_up_domain.html 'Quick Tours - How to Use Your Own Domain'). Rather than being tied to a single email address like Google Apps, Fastmail supports virtual aliases, letting you use multiple email addresses (and even multiple domains) with the same Fastmail account.
During my switch to Fastmail I also took the time to flatten my email folder structure and associated server-side rules. I used to use umbrella folders/labels with individual subfolders/labels for senders within each category. While migrating to Fastmail I elected to keep only the umbrella categories which has allowed me to filter through broadly related emails that have been grouped together rather than tabbing through endless folders. This means I have less fine-grained control over where individual emails go but the time saved in not having to sort through endless subfolders and associated rules has been worth it.
My next step was updating my DNS records at my domain's registrar and waiting for propagation. Fastmail has [extensive documentation](https://www.fastmail.com/help/domain_management_custom_dns.html 'Own Domains - Custom DNS') on its required settings for custom DNS but, in most cases, you can simply set your MX records to point to Fastmail's servers:
```dns-zone
example.com. IN MX 10 in1-smtp.messagingengine.com
example.com. IN MX 20 in2-smtp.messagingengine.com
```
You can also point your name servers to Fastmail as follows:
```dns-zone
example.com. NS ns1.messagingengine.com
example.com. NS ns2.messagingengine.com
```
Additionally, you will need to add an SPF record to your domain's DNS records as follows:
```dns-zone
@ TXT "v=spf1 include:spf.messagingengine.com -all"
```
Finally, you will also need to set up DKIM signing for your outgoing email. Fastmail has instructions on the DKIM setup process [on their site](http://blog.fastmail.com/2011/10/12/dkim-signing-outgoing-email-with-from-address-domain/). The general steps they provide are as follows:
1. Login to your FastMail account and go to Options > Virtual Domains (or Manage > Domains for a family/business account).
2. Scroll to the bottom, and you'll see a new "DKIM signing keys" section. For each domain you have, you'll see a DKIM public key.
3. Login to your DNS provider, and create a new TXT record for each domain listed and use the value in the "Public Key" column as the TXT record data to publish.
**Contacts and calendars**
While Fastmail provides an outstanding email experience, they do not currently support CardDav syncing for contacts ([CalDav support is currently in beta](https://www.fastmail.com/help/quick_tours_setting_up_domain.html 'Quick Tours - How to Use Your Own Domain')). It is worth noting that Fastmail has an [LDAP](https://www.fastmail.com/help/address_book_ldap_access.html 'Address Book - LDAP Access') server that allows you to store contacts associated with your mail account (with an option to add people you correspond with automatically), but the server is read-only.
For now I'm using iCloud to sync my calendars and contacts and will weigh Fastmail's options for each when full support arrives. I'm currently leaning towards sticking with iCloud rather than adopting Fastmail's solutions.[^3] I didn't, admittedly, explore a host of options for calendar and contact syncing outside of iCloud. I use iCloud for a handful of other things and adopting sync services from yet another party seemed clunky.
**Chat**
Leaving Google Apps also meant leaving Google Hangouts (which I used semi-regularly to communicate with friends and family). Fastmail does offer [XMPP support](https://www.fastmail.com/help/features_chat.html 'Features - Chat Service') for certain accounts which I have used in place of Google Hangouts. How long Google continues to support XMPP and interoperability with Google Hangouts [remains to be seen](http://www.zdnet.com/google-moves-away-from-the-xmpp-open-messaging-standard-7000015918/ 'Google moves away from the XMPP open-messaging standard').
**Fastmail so far**
I've been using Fastmail since the end of November and couldn't be happier with it. The service has been extremely reliable (I haven't noticed a single instance of downtime). It's also been nice to use a traditional IMAP implementation after having used Google's quirky implementation for so long. Fastmail doesn't have the host of services Google provides, but it is a bulletproof email provider that I feel I can trust with my data, which was exactly what I was looking for in switching.[^4]
**Notes**
I did quite a bit of research before switching to Fastmail and the following posts helped push me to make the move:
- [Switching from Gmail to FastMail / Max Masnick](http://www.maxmasnick.com/2013/07/19/fastmail/ 'Switching from Gmail to FastMail / Max Masnick')
- [From Gmail to FastMail: Moving Away from Google ReadWrite](http://readwrite.com/2012/03/19/from-gmail-to-fastmail-moving#awesm=~othfJ88hm9Tp8X 'From Gmail to FastMail: Moving Away from Google ReadWrite')
- [FastMail is My Favourite Email Provider](http://web.appstorm.net/reviews/email-apps/fastmail-is-my-favourite-email-provider/ 'FastMail is My Favourite Email Provider')
Have you moved to Fastmail? Are you thinking of doing so? [Let me know your thoughts](mailto:coryd@hey.com) on it or the move to it. You can sign up for Fastmail [here](https://www.fastmail.com).
[^1]: My interest in this idea was sparked specifically by this blog post by Marco Arment: [Let us pay for this service so it won't go down](http://www.marco.org/2011/04/05/let-us-pay-for-this-service-so-it-wont-go-down "Let us pay for this service so it won't go down Marco.org")
[^2]: I had previously consolidated all of my old email accounts into my Google Apps account via forwarding and by checking them via IMAP through Gmail.
[^3]: I currently use the first-party mail clients on both iOS and OSX so not having contacts and calendars synced with Fastmail is really only an issue when I use the Fastmail web interface (which isn't all that frequently). For now I've been manually uploading vCard files to Fastmail which is clunky, but not all that annoying. I _do_ miss being able to create events by clicking on parsed text (which Google Apps supported), but not all that much.
[^4]: If you do get tripped up switching from another provider, Fastmail does have extensive documentation. [You can also feel free to get in touch](mailto:hi@coryd.dev).
@ -0,0 +1,93 @@
---
title: 'Migrating to Fastmail'
date: '2022-04-13'
draft: false
tags: ['email', 'fastmail', 'gmail']
summary: "So you want to migrate over to Fastmail for your email — here's how you can go about doing so as seamlessly as possible."
---
So you want to migrate over to Fastmail for your email — here's how you can go about doing so as seamlessly as possible.<!-- excerpt -->
I've used (and/or tried) nearly every email service I've heard of and have stuck with Fastmail the longest[^1]. They make onboarding and migrating easy, offer a fast and robust web application, support modern standards and nicely integrate contacts and calendar applications that also support [CardDav](https://en.wikipedia.org/wiki/CardDAV) and [CalDav](https://en.wikipedia.org/wiki/CalDAV) access[^2].
### Kicking things off
Register for an account at [fastmail.com](https://ref.fm/u28939392)[^3] — you'll be run through their lightweight onboarding process which allows you to select an address at a domain they own or use your own. If you use your own, they'll guide you through configuring the DNS records for it, often with registrar specific instructions.
They also offer [extensive documentation](https://www.fastmail.com/help/domain_management_custom_dns.html) on this process and offer a UI that validates that the records you have set are correct. For example, your finalized records would look like the following:
**MX:**
```
example.com. IN MX 10 in1-smtp.messagingengine.com
example.com. IN MX 20 in2-smtp.messagingengine.com
```
**SPF:**
```
@ TXT "v=spf1 include:spf.messagingengine.com -all"
```
**DKIM:**
These will be specific to your domain and can be found and set as follows:
1. Login to your FastMail account and go to Options > Virtual Domains (or Manage > Domains for a family/business account).
2. Scroll to the bottom, and you'll see a new "DKIM signing keys" section. For each domain you have, you'll see a DKIM public key.
3. Login to your DNS provider, and create a new TXT record for each domain listed and use the value in the "Public Key" column as the TXT record data to publish.
**Bonus points**
- Configure DMARC — Simon Andrews has [an excellent writeup](https://simonandrews.ca/articles/how-to-set-up-spf-dkim-dmarc#dmarc) on how to do this.
- Configure MTA-STS — there's a writeup on that [over at dmarcian](https://dmarcian.com/mta-sts/). It'll entail configuring three additional DNS records and exposing an MTA-STS policy file[^6].
### Importing your email
Fastmail makes importing your email from your current provider painless via their [import tool](https://www.fastmail.com/go/settings/setup), with [detailed documentation](https://www.fastmail.help/hc/en-us/articles/360058753594-Import-your-mail) available in their [help center](https://www.fastmail.help/hc). If you're coming from Gmail or Google Workspace[^4], Fastmail will authenticate via OAuth (with the caveat that you'll need [IMAP enabled](https://support.google.com/mail/answer/7126229?hl=en)) and quickly pull over your email, contacts and calendars. Once the import is done, you can also use the [purge folders tool](http://fastmail.com/go/cleanfolders) to tidy up duplicate messages.
If you still need access to calendars from your own provider, Fastmail can [sync them and manage them](https://www.fastmail.help/hc/en-us/articles/360058752754-How-to-synchronize-a-calendar) from their web interface and then pass them down to your device alongside your dedicated Fastmail calendars.
### Syncing with your devices
First, set up [two-step verification](https://www.fastmail.help/hc/en-us/articles/360058752374-Using-two-step-verification-2FA-) — this should be done with an authenticator app[^5]. Next, [create a password](https://www.fastmail.help/hc/en-us/articles/360058752854-App-passwords) for each app you'll access the service with — you can provision the permissions for the password to be fairly broad but, in the interest of security, I'd suggest scoping them to each app and the service they need to access (e.g. IMAP/SMTP for your Mail app on each device, CalDAV only for your calendar app on each device, etc.). Fastmail has the server names and ports required to access each service [outlined here](https://www.fastmail.help/hc/en-us/articles/1500000278342-Server-names-and-ports).
### Next steps
At this point you should have your data migrated, your domain configured and be able to access your account from all of your different apps and devices.
**Spam training**
Hop on over to Fastmail's [folder documentation](https://www.fastmail.help/hc/en-us/articles/1500000280301-Setting-up-and-using-folders) and scroll down to Advanced Options — I've configured custom folders I filter mail I _do_ want to receive into as well as the Archive and Sent system folders to learn messages as **not spam**. You can also flag inbound spam messages that slip through to help train the spam filter applied to your account.
**Filtering**
I would highly recommend creating rulesets to help filter messages that aren't critical out of your inbox. Fastmail's documentation on their mail rules and filters [can be found here](https://www.fastmail.help/hc/en-us/articles/1500000278122-Organizing-your-inbox#rules). I filter messages out of my inbox based on a few broad categories, namely:
- **Updates:** anything sent programmatically and pertinent but not critical (e.g. service or utility notifications and so forth).
- **Financial:** anything from financial institutions. I do this based on the TLD, e.g. `examplebank.com`.
- **Social:** anything from social networks or services. I do this based on the TLD, e.g. `linkedin.com`.
- **Promotions:** anything from a merchant or similar mailing list. I subscribe to a handful but don't want them in my inbox. I use [Fastmail's advanced folder options](https://www.fastmail.help/hc/en-us/articles/1500000280301-Setting-up-and-using-folders) to auto-purge this folder every 60 days.
I also use a few aliases to route mail elsewhere:
- **Deliveries:** anything referencing tracking numbers or shipment status gets sent off to [Parcel](https://parcelapp.net).
- **Alerts:** uptime alerts and a few other notifications get sent off to [Things](https://culturedcode.com/things/) to be slotted as actionable tasks to be addressed.
- **Newsletters:** mailing lists get routed off to [Feedbin](https://feedbin.com) to be read (or not).
- **Reports:** I route DMARC/email reports to this folder in the event I need to review them (which is rarely if ever).
All of these particular folders live as children of my Archive folder and are auto-purged at different intervals. They're messages that are useful in the near term but whose utility falls off pretty quickly over time.
**Masked email**
If you're a [1Password](https://1password.com) user you can link your accounts and generate per-service, [masked emails for improved security](https://www.fastmail.help/hc/en-us/articles/4406536368911-Masked-Email). The idea here being that if your primary email is known, it can be used to trigger password resets at different services or leveraged in brute-force attacks, but this is mitigated by using a masked/pseudo-random address for each service.
Did I miss anything? [Email me](mailto:hi@coryd.dev)[^7].
[^1]: As an aside, [mailbox.org](https://mailbox.org) is also quite nice and offers some nice privacy features but isn't _quite_ as polished as Fastmail.
[^2]: Which amounts to seamless syncing with iOS at the system level or via an app like [DAVx](https://play.google.com/store/apps/details?id=at.bitfire.davdroid&hl=en) on Android.
[^3]: This is my referral link; you can skip that and go straight to [fastmail.com](https://fastmail.com).
[^4]: This one's aimed at you, free Google Workspace [forced migration](https://www.theverge.com/2022/1/19/22891509/g-suite-legacy-free-google-apps-workspace-upgrade).
[^5]: Think [1Password](https://1password.com) or [Authy](https://authy.com)
[^6]: This site is hosted at Vercel, but I have that policy file in a [categorically uninteresting GitHub repository](https://github.com/cdransf/mta-sts) configured using GitHub pages.
[^7]: At Fastmail, naturally.
@ -0,0 +1,88 @@
---
title: 'Simple data fetching with custom React hooks and SWR'
date: '2022-05-23'
draft: false
tags: [swr, api, fetch, react, nextjs]
summary: "I've implemented a few simple custom hooks for data that wrap SWR to efficiently retrieve and display what I'm currently reading and listening to."
---
My site was scaffolded out using [Timothy Lin](https://github.com/timlrx)'s [tailwind-nextjs-starter-blog](https://github.com/timlrx/tailwind-nextjs-starter-blog) project (which I highly recommend checking out). As part of the build out I wanted to display the books I'm currently reading and the song I most recently listened to, data available from [oku](https://oku.club) and [Last.fm](https://last.fm), respectively[^1]. I've added the display for this data to the top of the home page using a pair of light-weight React components.<!-- excerpt -->
To fetch the data for these components I elected to leverage [vercel/swr](https://github.com/vercel/swr), described as:
> SWR is a React Hooks library for data fetching.
>
> The name "**SWR**" is derived from `stale-while-revalidate`, a cache invalidation strategy popularized by [HTTP RFC 5861](https://tools.ietf.org/html/rfc5861). **SWR** first returns the data from cache (stale), then sends the request (revalidate), and finally comes with the up-to-date data again.
On oku, each collection page contains an RSS feed exposing the data for that page. To retrieve and parse the data for my [reading](https://oku.club/user/cory/collection/reading) collection, I'm leveraging [feed-reader](https://www.npmjs.com/package/feed-reader) and passing it to the `useSWR` hook exposed by `swr`. This looks like the following:
```typescript
import { read } from 'feed-reader'
import { useEffect, useState } from 'react'
import useSWR from 'swr'

export const useRss = (url: string) => {
  const [response, setResponse] = useState([])
  const fetcher = (url: string) =>
    read(url)
      .then((res) => res.entries)
      .catch()
  const { data, error } = useSWR(url, fetcher)

  useEffect(() => {
    // only update state once SWR has data; avoids clobbering the default with undefined
    if (data) setResponse(data)
  }, [data, setResponse])

  return {
    response,
    error,
  }
}
```
This is then implemented in a `reading.tsx` component as follows[^2]:
```typescript
const { response, error } = useRss('/books')
```
Similarly, I've implemented a hook to fetch json using, well, `fetch` and that looks like the following:
```typescript
import { useEffect, useState } from 'react'
import useSWR from 'swr'

export const useJson = (url: string) => {
  const [response, setResponse] = useState<any>({})
  const fetcher = (url: string) =>
    fetch(url)
      .then((res) => res.json())
      .catch()
  const { data, error } = useSWR(url, fetcher)

  useEffect(() => {
    // only update state once SWR has data; avoids clobbering the default with undefined
    if (data) setResponse(data)
  }, [data, setResponse])

  return {
    response,
    error,
  }
}
```
This is then implemented in a `music.tsx` component as follows[^3]:
```typescript
const { response, error } = useJson('/api/music')
```
The `useJson` hook only supports `GET` requests at this point but, could, with a little effort, be refactored to support parameters passed through to the enclosed `fetch` call. This could be done by updating the interface to accept a `parameters` object that includes the url to be called or by adding a separate, optional `parameters` object. I would lean towards the latter approach as the usage would only become as complex as a specific implementation requires.
Both of these components are visible at [coryd.dev](https://coryd.dev). The loading state is displayed until `response` is valid and `null` is returned in the event an `error` occurs as returned by the hook.
[^1]: For the request to oku, I've configured a rewrite in `next.config.js`; for last.fm I've added a simple `api/music.ts` route that interpolates my private API key stored in my Vercel environment variables.
[^2]: The full `reading.tsx` implementation can be [viewed here](https://github.com/cdransf/coryd.dev/blob/1b33bfdc88bbef27e5916971e5db15aa600299d7/components/media/reading.tsx).
[^3]: The full `music.tsx` implementation can be [viewed here](https://github.com/cdransf/coryd.dev/blob/c2577e08e659ce739ab360f25cf5424c6e3ed922/components/media/music.tsx).
@ -0,0 +1,22 @@
---
title: .ssh directory permissions
date: '2020-11-09'
draft: false
tags: ['ssh', 'development']
summary: I was recently setting up a new, always-on machine that I use for occasional dev work. This dev work typically consists of routine maintenance, a requirement of which is SSHing into and running software updates on manually managed servers (yes, manually managed).
---
I was recently setting up a new, always-on machine that I use for occasional dev work. This dev work typically consists of routine maintenance, a requirement of which is SSHing into and running software updates on manually managed servers (yes, manually managed[^1]).<!-- excerpt -->
I sync my `.ssh` configuration using [mackup](https://github.com/lra/mackup). However, while setting up and then using a key I received a warning that my configured `.ssh` directory permissions were too open. If you ever run into this, the solution is fairly simple[^2]:
```bash
chmod 700 ~/.ssh
chmod 644 ~/.ssh/id_rsa.pub
chmod 600 ~/.ssh/id_rsa
```
Try reconnecting using the key in question and the warning should be resolved.
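If you'd rather audit than fix blindly, the expected modes can be sketched into a short script (the `EXPECTED` map mirrors the `chmod` commands above; the key filenames assume the default `id_rsa` pair):

```python
import os
import stat

# Expected permission bits, mirroring the chmod commands above
EXPECTED = {
    "~/.ssh": 0o700,
    "~/.ssh/id_rsa": 0o600,
    "~/.ssh/id_rsa.pub": 0o644,
}

def audit(expected=EXPECTED):
    """Return (path, actual, wanted) tuples for anything with the wrong mode."""
    problems = []
    for path, wanted in expected.items():
        full = os.path.expanduser(path)
        if not os.path.exists(full):
            continue  # skip files that don't exist on this machine
        actual = stat.S_IMODE(os.stat(full).st_mode)
        if actual != wanted:
            problems.append((path, oct(actual), oct(wanted)))
    return problems

for path, actual, wanted in audit():
    print(f"{path}: {actual} (expected {wanted})")
```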
[^1]: Think small-scale WordPress or one-off projects.
[^2]: Where `id_rsa` is your key name.
@ -0,0 +1,26 @@
---
title: Syncing OSX app preferences and dot files
date: '2015-03-15'
draft: false
tags: ['development', 'macOS']
summary: I've started using a command line tool called mackup to back up and sync many of my dot files and application settings on OS X.
---
I've started using a command line tool called [mackup](https://github.com/lra/mackup) to back up and sync many of my dot files and application settings on OS X.<!-- excerpt -->
You can install the tool via [pip](https://pypi.python.org/pypi/pip) or [homebrew](http://brew.sh). I installed it via homebrew and set it up as follows:
```bash
brew install mackup
mackup backup
```
By default mackup will back up your files to a directory named `Mackup` in the root of your Dropbox folder. You can also choose to back your files up to Google Drive or anywhere else on your local drive by creating `.mackup.cfg` in your user root and setting the [options the tool provides](https://github.com/lra/mackup/tree/master/doc).
Now, when you move to a new machine, you simply install the tool and run:
```bash
mackup restore
```
Your settings will be added to the new machine and kept in sync via the storage you chose when setting up mackup.
@ -0,0 +1,14 @@
---
title: Updating to the latest version of git on Ubuntu
date: '2017-08-13'
draft: false
tags: ['development', 'git', 'linux', 'ubuntu']
summary: If you're using git on Ubuntu, the version distributed via apt may not be the newest version of git (I use git to deploy changes on all of the sites I manage).
---
If you're using git on Ubuntu, the version distributed via apt may not be the newest version of git (I use git to deploy changes on all of the sites I manage).<!-- excerpt --> You can install the latest stable version of git provided by the maintainers as follows:
```bash
sudo add-apt-repository ppa:git-core/ppa
sudo apt-get update
sudo apt-get install git
```
@ -11,13 +11,21 @@ eleventyComputed:
{% for post in collections[tag] %}
<div class="py-4 sm:py-10">
<p>
<span class="text-2xl sm:text-4xl font-bold hover:underline"><a href="{{ post.url }}">{{ post.data.title }}</a></span>
</p>
<em>{{ post.date | date: "%m.%d.%Y" }}</em>
<p class="mt-4">{{ post.data.post_excerpt }}...
<span class="hover:underline text-indigo-500"><a href="{{ post.url }}">Read More</a></span>
</p>
<div class="mb-8 border-b border-gray-200 pb-8 dark:border-gray-700">
<a class="no-underline" href="{{ post.url }}"
><h2
class="m-0 text-xl font-black leading-tight tracking-normal dark:text-gray-200 md:text-2xl"
>
{{ post.data.title }}
</h2>
</a>
<div class="mt-2 text-sm">
<em>{{ post.date | date: "%m.%d.%Y" }}</em>
</div>
<p class="mt-4">{{ post.data.post_excerpt }}
</p>
<div class="mt-4 flex items-center justify-between">
<a class="flex-none font-normal no-underline" href="{{ post.url }}">Read more &rarr;</a>
</div>
</div>
{% endfor %}
@ -7,7 +7,7 @@
}
.post-tag {
@apply mt-1 mr-1 inline-block text-sm text-primary-400 hover:text-primary-500 dark:hover:text-primary-300;
@apply mr-1 inline-block text-sm text-primary-400 hover:text-primary-500 dark:hover:text-primary-300;
}
.toggle-light {