
GPG agent forwarding in a devcontainer

January 28, 2026

I store my GPG key on a Yubikey and use it to sign all my git commits. I also recently started using devcontainers for my projects. The problem: the GPG agent runs on my Mac, and the container has absolutely no idea it exists. My commits were coming out unsigned like some kind of animal.

VS Code actually has built-in GPG forwarding for devcontainers. I tried it. It didn't work. I tried it again. Still nothing. I read the docs three times. I gave up on it and decided to do it myself.

The idea I landed on is almost disappointingly simple: use socat to expose the host's GPG agent Unix socket as a TCP port, and connect to that port from inside the container. Two socat processes having a nice chat over host.docker.internal.
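Stripped of all the lifecycle plumbing, the whole trick fits in two commands, one per side (same ports and paths as the rest of this post; a sketch of the idea, not the final scripts):

```shell
# Host (macOS): expose the agent's Unix socket on a TCP port
socat TCP-LISTEN:4444,reuseaddr,fork UNIX-CONNECT:$HOME/.gnupg/S.gpg-agent

# Container: recreate the Unix socket, backed by that TCP port
socat UNIX-LISTEN:$HOME/.gnupg/S.gpg-agent,fork TCP:host.docker.internal:4444
```

The fork option makes each socat handle any number of connections instead of exiting after the first one.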

The big picture

┌──────────────────────┐         ┌──────────────────────────┐
│        Host          │         │        Container         │
│                      │         │                          │
│  GPG Agent           │         │  S.gpg-agent (socket)    │
│    ↓                 │         │    ↑                     │
│  S.gpg-agent (socket)│         │  socat                   │
│    ↓                 │         │    ↑                     │
│  socat → TCP :4444 ──┼────────>│  TCP host.docker.internal│
│                      │         │                          │
│  S.gpg-agent.ssh     │         │  S.gpg-agent.ssh (socket)│
│    ↓                 │         │    ↑                     │
│  socat → TCP :4445 ──┼────────>│  socat                   │
└──────────────────────┘         └──────────────────────────┘

Port 4444 for GPG, port 4445 for SSH (because gpg-agent can also serve SSH keys, which is a nice bonus we'll get for free).

There are three pieces to set up: the host-side proxy, the container-side connection, and the devcontainer lifecycle hooks that orchestrate everything.


Host side: a launchd service

On the Mac, I need socat to permanently listen on those TCP ports and forward connections to the GPG agent sockets. I could run a script manually every time, but I would definitely forget, so instead I wrote a launchd plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.devcontainer.gpg-proxy</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>-c</string>
    <string>/opt/homebrew/bin/socat TCP-LISTEN:4444,reuseaddr,fork UNIX-CONNECT:$HOME/.gnupg/S.gpg-agent &amp; /opt/homebrew/bin/socat TCP-LISTEN:4445,reuseaddr,fork UNIX-CONNECT:$HOME/.gnupg/S.gpg-agent.ssh &amp; wait</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>/tmp/gpg-proxy.log</string>
  <key>StandardErrorPath</key>
  <string>/tmp/gpg-proxy.err</string>
</dict>
</plist>

RunAtLoad and KeepAlive mean I never have to think about it again. It starts on login and restarts if it dies. One less thing to forget.
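When it does misbehave anyway, the usual launchctl spelunking applies (these commands assume the label and log paths from the plist above):

```shell
# Is the service loaded, and what state is it in?
launchctl print "gui/$(id -u)/com.devcontainer.gpg-proxy" | grep -E 'state|pid'

# Anything interesting in the logs?
tail -n 20 /tmp/gpg-proxy.log /tmp/gpg-proxy.err

# Are the ports actually open?
nc -z localhost 4444 && echo "GPG proxy up"
nc -z localhost 4445 && echo "SSH proxy up"
```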

Making sure the proxy is running

The devcontainer spec has an initializeCommand that runs on the host before the container starts. Perfect place to ensure our launchd service is installed and alive:

#!/bin/bash
set -e
 
gpgconf --launch gpg-agent
 
PLIST_NAME="com.devcontainer.gpg-proxy"
PLIST_SRC="$(cd "$(dirname "$0")" && pwd)/com.devcontainer.gpg-proxy.plist"
PLIST_DST="$HOME/Library/LaunchAgents/$PLIST_NAME.plist"
 
# Install or update the plist if it changed
if ! diff -q "$PLIST_SRC" "$PLIST_DST" > /dev/null 2>&1; then
  launchctl bootout "gui/$(id -u)/$PLIST_NAME" 2>/dev/null || true
  cp "$PLIST_SRC" "$PLIST_DST"
fi
 
if ! launchctl print "gui/$(id -u)/$PLIST_NAME" > /dev/null 2>&1; then
  launchctl bootstrap "gui/$(id -u)" "$PLIST_DST"
fi
 
launchctl kickstart -k "gui/$(id -u)/$PLIST_NAME"

This script is idempotent — it only reinstalls the plist if it changed, and it always makes sure the service is kicked alive. The launchctl API is a joy to work with, as always.


Container side: connecting back to the host

Now for the other end. Inside the container, we need socat doing the reverse: listening on Unix sockets and forwarding to the host's TCP ports.

#!/bin/bash
set -e
 
HOST="host.docker.internal"
GPG_PORT=4444
SSH_PORT=4445
GPG_SOCK="$HOME/.gnupg/S.gpg-agent"
SSH_SOCK="$HOME/.gnupg/S.gpg-agent.ssh"
 
# Wait for the host proxy to be reachable
connected=false
for i in $(seq 1 15); do
  if socat -T1 TCP:"$HOST:$GPG_PORT" /dev/null 2>/dev/null; then
    connected=true
    break
  fi
  sleep 1
done
$connected || echo "warning: host GPG proxy not reachable at $HOST:$GPG_PORT" >&2
 
mkdir -p ~/.gnupg && chmod 700 ~/.gnupg
 
# Clean up any stale sockets
pkill -f "UNIX-LISTEN:$GPG_SOCK" 2>/dev/null || true
pkill -f "UNIX-LISTEN:$SSH_SOCK" 2>/dev/null || true
rm -f "$GPG_SOCK" "$SSH_SOCK"
 
# Bridge Unix sockets to the host's TCP proxy
socat UNIX-LISTEN:"$GPG_SOCK",fork TCP:"$HOST:$GPG_PORT" &
socat UNIX-LISTEN:"$SSH_SOCK",fork TCP:"$HOST:$SSH_PORT" &
 
# Quick sanity check
gpg --card-status > /dev/null 2>&1 && echo "GPG agent connected" || echo "GPG agent not reachable" >&2
SSH_AUTH_SOCK="$SSH_SOCK" ssh-add -L > /dev/null 2>&1 && echo "SSH agent connected" || echo "SSH agent not reachable" >&2

The retry loop at the top is important — the container can start before the host proxy is fully up, so we give it up to 15 seconds. This script gets copied into the image at /home/dev/.local/bin/gpg-agent-connect and runs as the postStartCommand.
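The shape of that loop is a generic "retry a probe until it succeeds" helper. Here it is as a standalone sketch; wait_for and probe are names I made up for illustration, not part of the actual script:

```shell
#!/bin/bash
# Retry a probe command up to $1 times, one second apart.
wait_for() {
  local tries=$1; shift
  for i in $(seq 1 "$tries"); do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Stand-in for the socat reachability check: a probe that only
# succeeds on its third invocation, tracked via a counter file.
counter=/tmp/wait_for_demo_counter
rm -f "$counter"
probe() {
  local n=0
  [ -f "$counter" ] && n=$(cat "$counter")
  n=$((n + 1))
  echo "$n" > "$counter"
  [ "$n" -ge 3 ]
}

wait_for 5 probe && echo "host proxy reachable"   # → host proxy reachable
```

In the real script the probe is the socat connection attempt, and exhausting the retries just means the sanity checks at the end of the script tell you something is wrong.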

Preparing the Dockerfile

The container needs gnupg and socat, obviously, plus curl for the key import below. It also needs the public key imported so GPG knows what key to expect from the agent. I grab mine from GitHub:

RUN apt-get update && apt-get install -y --no-install-recommends gnupg socat curl \
    && rm -rf /var/lib/apt/lists/*
 
COPY --chown=$USERNAME:$USERNAME gpg-agent-connect.sh /home/$USERNAME/.local/bin/gpg-agent-connect
RUN chmod +x ~/.local/bin/gpg-agent-connect \
    && mkdir -p ~/.gnupg && chmod 700 ~/.gnupg \
    && curl -fsSL https://github.com/barodeur.gpg | gpg --import \
    && FINGERPRINT=$(gpg --list-keys --with-colons | grep '^fpr' | head -1 | cut -d: -f10) \
    && echo "${FINGERPRINT}:6:" | gpg --import-ownertrust \
    && echo "use-agent" >> ~/.gnupg/gpg.conf \
    && echo "no-autostart" >> ~/.gnupg/gpg.conf

The no-autostart line in gpg.conf is the kind of thing that takes you an hour to figure out. Without it, gpg helpfully starts its own agent inside the container, which shadows the forwarded one, and nothing works. You stare at logs, question your life choices, and eventually discover this one flag.
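One more note on that RUN step: the FINGERPRINT extraction is plain colon-format parsing, because field 10 of an fpr record holds the full fingerprint. A standalone sketch, using a canned sample record (fabricated, but in the documented --with-colons shape):

```shell
# `gpg --list-keys --with-colons` emits one record per line;
# the fingerprint is field 10 of the `fpr` line.
sample='pub:u:255:22:ABE00992A263A790:::::::cESC:
fpr:::::::::63BF3DA020BD80EA6C1CFAEBABE00992A263A790:'

FINGERPRINT=$(printf '%s\n' "$sample" | grep '^fpr' | head -1 | cut -d: -f10)
echo "$FINGERPRINT"   # → 63BF3DA020BD80EA6C1CFAEBABE00992A263A790
```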

Git config

With the agent forwarded, signing is just config:

[user]
  name = Paul Chobert
  email = paul@chobert.fr
  signingkey = 63BF3DA020BD80EA6C1CFAEBABE00992A263A790
 
[commit]
  gpgsign = true

Wiring it up in devcontainer.json

The devcontainer lifecycle hooks tie the whole thing together:

{
  "initializeCommand": ".devcontainer/init-gpg-proxy.sh",
  "postStartCommand": "/home/dev/.local/bin/gpg-agent-connect",
  "postCreateCommand": "bash .devcontainer/post-create.sh"
}

Here's what happens when you open the project:

  1. initializeCommand runs on the host — ensures the socat TCP proxy is up via launchd
  2. Docker Compose starts the containers
  3. postStartCommand runs inside the container — creates the socat bridge back to the host
  4. postCreateCommand runs your usual setup (dependencies, database, etc.)

By step 3, the Yubikey on my desk is transparently accessible from inside the container. git commit signs. git push authenticates over SSH. No passphrase prompts leak outside the normal GPG pinentry flow.
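To convince yourself everything is wired up, a few quick spot checks from a shell inside the container (these assume the setup above is in place and the Yubikey is plugged into the host; gpg-connect-agent ships with gnupg and talks to whatever agent socket gpg would use):

```shell
# Ask the forwarded agent what keys it knows about
gpg-connect-agent 'keyinfo --list' /bye

# Sign something trivial
echo test | gpg --clearsign > /dev/null && echo "signing works"

# Verify the signature on the latest commit
git log --show-signature -1
```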

SSH for free

Since I use gpg-agent with enable-ssh-support, the same proxy also forwards my SSH keys. The only extra step is setting SSH_AUTH_SOCK in the shell config:

export SSH_AUTH_SOCK="$HOME/.gnupg/S.gpg-agent.ssh"

That's it. Both commit signing and SSH authentication, forwarded through two layers of socat, working as if the Yubikey were plugged directly into the container. It's the kind of setup that feels like it shouldn't work, but it does, and it's been rock solid.