Portfolio

Projects

I learn best by building things. Most of these projects started as a way to test an idea—a new method, a different stack, an AI capability I wanted to understand by using it, not just reading about it.

They range from funded research to weekend experiments. What they share: each one solved a real problem, and each one taught me something I now use in my work.

Six projects

01

Personal Data

Data Platform

What happens when you GDPR-export your entire digital life and feed it to a structured analysis pipeline? You get 33,000 words of verified self-knowledge and ten data deep dives that reveal patterns you never noticed.

Personal Data is my own data platform. It collects, parses, and structures personal data from GDPR exports (ICA grocery purchases, Waze driving history, Facebook, LinkedIn, ChatGPT, Netflix, Spotify, Amazon Prime Video, Apple, and Claude Code usage). Each export gets its own deep dive with quantified findings.

The profile side has ten modules covering career, education, competencies, personality, writing style, values, network, narrative, personal style, and web design preferences. The writing profile alone is built from 6,907 emails spanning 14 years, with quantified style markers (median sentence length: 10 words, stable across the entire corpus).

A compilation pipeline (Python) generates four output versions from one source: summary, CV, agent-prompt, and pitch. Change a fact in one place, and it propagates everywhere.
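The single-source idea can be sketched in a few lines. This is purely illustrative: the fact keys, template strings, and version names below are invented, not the pipeline's actual schema.

```python
# Hypothetical sketch: one fact store, several rendered output versions.
# Editing a fact in FACTS propagates to every version on the next compile.
FACTS = {
    "name": "Jane Doe",
    "role": "Data Platform Engineer",
    "years_experience": 14,
}

TEMPLATES = {
    "summary": "{name} is a {role} with {years_experience} years of experience.",
    "cv": "{name}\n{role} | {years_experience} yrs",
    "agent-prompt": "You are writing as {name}, a {role}.",
    "pitch": "Meet {name}: {years_experience} years solving {role} problems.",
}

def compile_outputs(facts: dict) -> dict:
    """Render every output version from the same fact store."""
    return {version: tpl.format(**facts) for version, tpl in TEMPLATES.items()}
```

The point of the pattern is that no output version ever holds a fact of its own; the templates only shape what the fact store already asserts.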

The practical value is concrete. AI tools that receive the agent-prompt version produce output that sounds like me, not like a chatbot. CV generators pull from verified data instead of guessing. The profile modules serve as source material for content, applications, and positioning work.

This is behavioral archaeology applied to personal branding. Every claim is traceable to raw data. Every insight links to a specific export, timestamp, or frequency count. The result is a self-knowledge base that is both machine-readable and honest.

02

Sune

Knowledge Management

Most knowledge management systems die within three months. The notes pile up, the structure collapses, and the tool becomes a graveyard. Sune is built to survive that.

The starting point was other people’s good work. Ivo Szapar showed how a structured repo (CLAUDE.md, INDEX.md, dev-docs) turns a codebase into something an AI agent can actually reason about. Dave Killeen and the team behind Dex demonstrated what a personal CRM looks like when it is built for real relationships, not sales pipelines. I learned from both, then built my own version.

Sune is a local-first, AI-augmented personal knowledge management system. Plain Markdown files. Git for version control. No proprietary platform. No vendor lock-in. Everything exportable in five minutes.

The architecture has five layers. Storage: Markdown in Git. Routing: the PARA method (Projects, Areas, Resources, Archive) decides where each note lives. Navigation: hub pages that connect related notes without forcing a rigid hierarchy. Workflows: daily and weekly reviews that keep the system from going stale. Agent layer: ten AI agents, each with a defined output contract, handling summarization, extraction, and classification.
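The routing layer follows the standard PARA decision order, which a minimal sketch can make concrete. The predicates below are a simplification of any real router, which would look at the note's content and context:

```python
# Illustrative-only PARA router: which top-level folder does a new note
# belong in? Decision order follows the PARA method's usual priority.
def route_note(has_deadline: bool, is_ongoing_responsibility: bool,
               still_active: bool) -> str:
    """Actionable with a deadline -> Projects; an ongoing responsibility
    -> Areas; reference material -> Resources; no longer active -> Archive."""
    if not still_active:
        return "Archive"
    if has_deadline:
        return "Projects"
    if is_ongoing_responsibility:
        return "Areas"
    return "Resources"
```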

The key design decision is the staging pattern. AI agents never write directly into the knowledge base. They write to a review zone. A human approves or rejects every piece before it enters the system. This keeps AI useful without making it unsupervised.

Ivo and Dave deserve credit for proving that structured personal knowledge systems work. Sune is my implementation of that idea: plain text, version control, and agents that know their boundaries.

03

Company Brain

Client Project

An employee types: “Create a quote for products X, Y, Z for this customer.” The system generates it. Correct products. Correct pricing. Correct format. No template hunting. No copy-paste errors.

This is a company knowledge base built for a client using a repository-based AI approach. The method structures a repository with clear governance (CLAUDE.md defines rules), navigation (INDEX.md maps the terrain), and living documentation (context, plan, tasks, and decisions in dev-docs).

The setup is not complicated. Product catalogs, pricing rules, customer data, and document templates live in structured Markdown files. AI agents read these files and generate outputs (quotes, summaries, reports) following defined workflows called skills and commands.

What makes it work is the organization, not the AI. The same AI models that produce garbage with unstructured input produce accurate, useful output when the knowledge base is well-organized. The governance file tells the system what it can and cannot do. The index tells it where to find things. The dev-docs give it context about recent decisions and current priorities.
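The "structure first" claim is easy to show in miniature. In this hypothetical sketch, a quote is assembled from a structured Markdown catalog rather than from a model's memory; the table, fields, and function names are invented for illustration:

```python
# Quotes are built from structured source files, so product names and
# prices can never be hallucinated -- only looked up.
CATALOG_MD = """\
| sku | name     | unit_price |
|-----|----------|------------|
| X   | Widget X | 120.00     |
| Y   | Widget Y | 45.50      |
"""

def parse_catalog(md: str) -> dict:
    """Read the Markdown table into {sku: (name, unit_price)}."""
    rows = [line.strip("| \n").split("|") for line in md.splitlines()[2:]]
    return {c[0].strip(): (c[1].strip(), float(c[2])) for c in rows}

def build_quote(skus: list[str], catalog: dict) -> dict:
    lines = [{"sku": s, "name": catalog[s][0], "price": catalog[s][1]}
             for s in skus]
    return {"lines": lines, "total": sum(l["price"] for l in lines)}
```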

The client went from manual quote creation (1–2 hours per quote, frequent errors in product specs and pricing) to automated generation with human review, done in 15 minutes. The system handles the routine work. Humans handle the exceptions.

The lesson is transferable to any organization sitting on institutional knowledge trapped in people’s heads, shared drives, or email threads. Structure the knowledge first. The automation follows naturally.

04

ANN 2.0

ML Research

In 2016, I led a Vinnova-funded project built on three academic theses about using machine learning to predict truck arrivals at two pulp mills. A team of four ML specialists and I developed a working prediction model. Decent results at the time. What happens when you revisit that work with seven years of ML progress?

ANN 2.0 takes the original codebases and ports them to modern tools: MATLAB replaced with Python 3.11+, XGBoost, SHAP for interpretability, and Optuna for hyperparameter optimization. Using an AI-assisted workflow, I built a prototype that outperformed the original specialist-built model by 26%.

Phase 1 results: Mean Absolute Error dropped 26% compared to the original models. The MASE (Mean Absolute Scaled Error) landed at 0.79, below the ship threshold of 0.90 (where anything under 1.0 outperforms naive forecasting).
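For readers unfamiliar with the metric, MASE is the model's mean absolute error divided by the mean absolute error of a one-step naive forecast on the training series, which is why anything under 1.0 beats naive forecasting. A plain-Python sketch (the numbers in the test are illustrative, not project data):

```python
# Mean Absolute Scaled Error: model MAE scaled by the MAE of the
# one-step naive forecast ("tomorrow equals today") on the training data.
def mae(errors):
    return sum(abs(e) for e in errors) / len(errors)

def mase(actual, predicted, train):
    """MASE < 1.0 means the model beats the naive forecast."""
    model_errors = [a - p for a, p in zip(actual, predicted)]
    naive_errors = [train[i] - train[i - 1] for i in range(1, len(train))]
    return mae(model_errors) / mae(naive_errors)
```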

Phase 2 is waiting on a data export: 1.4 million production events, 261,000 bookings, across 27 sites. The original theses worked with limited academic datasets. The production data will test whether these models hold up at scale and across locations.

The angle for operations teams: if you have old analytical work gathering dust—theses, pilot projects, proof-of-concepts from five years ago—the tools to make them production-ready are dramatically better and cheaper now. The hard part was always the domain expertise. That part does not expire.

05

JobSearch

Product Build

Two hundred job listings ingested. Five applications sent per week. Each one tailored, traceable, and built from an immutable experience bank that prevents fabrication. A full application in under 5 minutes.

JobSearch is a local-first job search system for senior professionals in the Nordics. It has four subsystems: job ingestion (connectors for JobTech API, Greenhouse, and Teamtailor), AI-assisted CV generation, a lightweight CRM with full audit trail, and a macOS desktop UI built with Electron and React.

The architecture is deterministic by design. Every CV statement traces back to a source document. Every application logs which model generated it, which experience entries it pulled from, and what the human changed before sending. The stack runs on Python 3.11+ and Node.js, with Claude, OpenAI, and Gemini SDKs handling different generation tasks.
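What one audit-trail entry could look like, assuming a simple JSON-lines log; the field names here are illustrative, not the system's actual schema:

```python
# Each sent application leaves a record: which model drafted it, which
# experience-bank entries it pulled from, and what the human changed.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ApplicationRecord:
    job_id: str
    model: str                      # generating model
    experience_entries: list[str]   # immutable experience-bank sources
    human_edits: list[str] = field(default_factory=list)
    sent_at: str = ""

def log_record(rec: ApplicationRecord) -> str:
    """Serialize one record as a JSON line, stamping the send time."""
    rec.sent_at = datetime.now(timezone.utc).isoformat()
    return json.dumps(asdict(rec))
```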

The philosophy behind it: five precise applications beat fifty generic ones. The system enforces this by making it easy to write targeted applications and hard to spray-and-pray. The CRM tracks every interaction, so nothing falls through the cracks.

This started with a simple observation. After years of consulting, I know how many hours go into creating tailored consultant profiles and CVs for every new gig. I kept seeing people on LinkedIn describe the same frustration—spending weeks on applications, rewriting the same information, getting silence in return. Job searching is painful for most people. I wanted to see if it could be less so. I wrote about the underlying problem in Tinder for Jobs.

06

LobeDrive

Health Tech

Road rage is a health problem hiding in plain sight. Research shows that anger behind the wheel impairs judgment, increases risk-taking, and correlates with elevated cardiovascular stress (Galovski & Blanchard, 2004). Cognitive-behavioral interventions can reduce aggressive driving by 35–50%, but they require a therapist’s office—not the steering wheel (Deffenbacher et al., 2003). LobeDrive brings that intervention into the car, in the moment it matters.

www.lobedrive.com

LobeDrive is a just-in-time driving coach app. Not mindfulness. Not wellness. It delivers micro-interventions (two to four sentences) triggered by a Bluetooth button mounted on the steering wheel. Hands stay on the wheel. Eyes stay on the road.

The intervention library contains 100 scripts across five behavioral categories. Each follows a four-step pattern: acknowledge the irritation, interrupt the escalation, restore a sense of agency, then redirect focus to the driving task. The scripts are grounded in cognitive-behavioral principles, not motivational slogans.
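The four-step pattern is essentially a data structure. A purely illustrative sketch, not LobeDrive's actual data model (the category name and sentences are invented):

```python
# One script = one short sentence per step:
# acknowledge -> interrupt -> restore agency -> redirect.
from dataclasses import dataclass

STEPS = ("acknowledge", "interrupt", "restore_agency", "redirect")

@dataclass(frozen=True)
class InterventionScript:
    category: str
    lines: tuple[str, str, str, str]  # one sentence per step, in order

    def render(self) -> str:
        """Deliver the script as a single spoken micro-intervention."""
        return " ".join(self.lines)

script = InterventionScript(
    category="tailgating",
    lines=(
        "That driver is too close, and that is genuinely annoying.",
        "Take one slow breath before you react.",
        "You decide your speed and your distance.",
        "Eyes forward, smooth lane position.",
    ),
)
```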

The concept has been developed in consultation with some of the world’s most published researchers on driving anger and aggression, from Australia, the United States, and Europe. Their feedback shaped the intervention design.

The product targets a gap that existing solutions ignore. Driver safety tools focus on fleet monitoring (cameras, telematics, scoring). Wellness apps focus on meditation. Nothing addresses the specific moment when a driver’s emotional state starts degrading their decision-making. LobeDrive sits in that gap: evidence-informed, real-time, and designed for the 30 seconds where intervention actually matters.