Fair Seas · Sprint 1 · v2

1 Hour AI-Accelerated
Design Sprint

How do I orchestrate a high-speed design sprint using AI and still deliver a useful outcome?

AI Sprint · 1 Hour · Self-Directed · Process Retrospective · 6 AI Tools

Including the design judgment calls I caught that the AI missed: military surveillance conventions and bias baked into the prototype data.

I want to build a repeatable process, not a one-off experiment

v1 compressed a typical 5-day design sprint down to a 1-hour solo activity.

1. Define a Big Problem
Identify the core problem to solve.
2. Research Problem Space
Gather context and existing data to ground the sprint.
3. Brainstorm Solutions
Ideate rapidly with AI chatbots for diverse options.
4. Make
High-fidelity AI prototyping using the context defined so far.
5. Test / Tweak
Review outputs, iterate.

1. Define a Big Problem

I ran three AI assistants simultaneously to research the problem space: Claude, ChatGPT, and Gemini. I evaluated their outputs in parallel, feeding them back into each other for cross-checking and narrowing.

Five minutes was plenty of time to settle on the problem to solve. Illegal deep-sea fishing enforcement is difficult and impactful: there's too much data to sift through (e.g. satellite SAR), and packaging the evidence is arduous. That's a perfect opportunity for AI-powered software.

Gemini output screenshot

Gemini not only came up with three strong problems to solve, it also synthesized the information into a "2026 Tech State" overview and software opportunities.

Won: Best brainstorming partner · Great synthesis
Claude output screenshot

Claude gave similar ideas to the other platforms, but there was too much to dig through and it wasn't synthesized as cleanly.

Gave source links · Distracting emoji icons
ChatGPT output screenshot

ChatGPT was the only tool to show images with each problem, which helped my eyes. It also added a personal note: that illegal fishing "aligns very well with your regulated systems brain."

Used memory context · Showed photos/images · Too lengthy

2. Research Problem Space + 3. Brainstorm Solutions

What I thought would be distinct research and brainstorm phases ended up happening at the same time. As I asked questions about the problem, the LLMs generated users, workflows, and solution ideas without my always asking.

I wanted to start with a structured deliverable to act as a high-level definition, so I prompted with a short template to guide the research.

Deliverable 1: Elevator Pitch
The Elevator Pitch

🌊 Save the Ocean by Turning Satellite Data into Enforceable Evidence

Every year, roughly 1 in 5 wild-caught fish is taken illegally — generating up to $36 billion in illicit profits while accelerating ecosystem collapse and eroding a key natural defense against global warming. The vessels responsible don't need sophisticated tools to hide. They simply turn off their tracking transponders and "go dark."

Satellites can often see ships at sea — but seeing isn't the same as proving wrongdoing. Enforcement depends on structured, defensible evidence.

We're building an AI-powered Maritime Evidence Engine that detects dark-vessel behavior using satellite and vessel-pattern analysis, then automatically transforms that data into enforcement-ready evidence packs. Instead of just surfacing alerts, the platform delivers explainable, auditable reports that regulators and seafood buyers can use to deny market access, escalate inspections, and block illegal catch before it enters the supply chain.

We're not just watching the ocean. We're enforcing its laws.

Skipped

Process gap: v1 had no step to define the buyer. Without it, the user and workflow were built on an assumption.

Fact checking: the "1 in 5" and "$36B" figures are unverified. Proceeded at risk.

After aligning all three tools on the elevator pitch, I used them to identify the core user and hero workflow.

Deliverable 2: Problem
The Problem

Illegal fishing + the evidence gap

Satellites can detect dark vessels. But detection isn't enforcement. Authorities need structured, defensible evidence — AIS gap analysis, SAR cross-reference, zone violations, vessel history — assembled fast enough to act.

The platform must collapse the "glitch, buoy, or bust" question from a 20-minute manual investigation into a 90-second confident decision. The goal is speed to a confident decision, not just detection.

Skipped

Fact checking: the 20-minute-to-90-second claim is unverified. Proceeded at risk.
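
To make "structured, defensible evidence" concrete, here's a rough sketch of what one evidence pack might hold. The field names and types are illustrative assumptions, not a real Fair Seas schema.

```python
# Illustrative sketch only: field names are assumptions, not a real Fair Seas schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AISGap:
    went_dark_at: datetime                     # last AIS broadcast before the vessel "went dark"
    resurfaced_at: Optional[datetime]          # first broadcast after the gap, if any
    last_known_position: tuple[float, float]   # (lat, lon)

@dataclass
class SARDetection:
    observed_at: datetime
    position: tuple[float, float]
    confidence: float                          # detector confidence, 0.0 to 1.0

@dataclass
class EvidencePack:
    vessel_id: str                             # neutral identifier, no nationality cues
    ais_gaps: list[AISGap] = field(default_factory=list)
    sar_detections: list[SARDetection] = field(default_factory=list)
    zone_violations: list[str] = field(default_factory=list)   # e.g. "entered EEZ while dark"
    prior_flags: int = 0                       # known-offender history
    analyst_rationale: str = ""                # the explainable "why", written for a regulator
```

Everything the analyst later escalates or dismisses hangs off one auditable record, which is what makes a report defensible instead of "the AI flagged it."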

Deliverable 3: User
The User

Maritime Intelligence Analyst

Analysts at ocean NGOs, fisheries monitoring centers, and coast guard units, plus supply-chain compliance officers at seafood retailers.

Core pain: Making defensible enforcement decisions with data not built for legal action. Bad escalations damage credibility — so they under-escalate.

More Info
Data volume with no triage
Thousands of anomalies surface daily. Everything looks equally urgent — or equally ignorable.
Explainability gap
They can see something looks wrong but can't articulate why in terms a regulator will accept. "The AI flagged it" isn't defensible.
Jurisdiction confusion
A vessel going dark in international waters triggers different rules than one in an EEZ. They're manually cross-referencing legal frameworks mid-analysis.
Tool fragmentation
AIS in one platform. SAR imagery in another. Port records elsewhere. Assembling a coherent picture means copy-pasting between systems — invisible, unrepeatable, legally fragile.
False positive fatigue
Escalating a bad flag to authorities damages credibility. So they sit on findings, waiting for certainty that never arrives.
Skipped

Process gap: with no buyer defined, the user choice was a guess. Risk noted above.

Retroactive: Persona icon and synthesis were done post-sprint, not during.

Deliverable 4: Workflow
The Workflow Chosen — Prototype Target

The "First Look" — triage in 90 seconds

1
The Trigger
An amber flash on their map. A detection flagged for review.
2
The Context Check
AIS silence — a track that stops while the vessel is still moving.
3
The Correlation
Overlay SAR. If a metallic signature is moving where AIS isn't broadcasting — heart rate goes up.
4
The Known Offender Check
Has this vessel been flagged before? Repeat behavior changes everything.
5
The Decision
Escalate with a timestamped reason, or Dismiss with logged justification — protecting the analyst if the vessel resurfaces.
Skipped

Process gap: v1 had no step to validate the workflow choice against the 80% use case. Proceeded at risk.
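
As a thought experiment, the five steps above compress into one small triage function. The inputs, the two-signal threshold, and the escalate/dismiss wording are assumptions I made for the sketch, not product logic.

```python
# Illustrative sketch of the "First Look" triage; thresholds and rules are assumptions.
from datetime import datetime, timezone

def first_look(ais_silent_while_underway: bool,   # 2. Context check
               sar_match_in_gap: bool,             # 3. Correlation
               prior_flags: int) -> str:           # 4. Known offender check
    reasons = []
    if ais_silent_while_underway:
        reasons.append("AIS gap while vessel still moving")
    if ais_silent_while_underway and sar_match_in_gap:
        reasons.append("SAR signature inside the AIS gap")
    if prior_flags > 0:
        reasons.append(f"{prior_flags} prior flag(s) on this vessel")

    # 5. Decision: escalate with a timestamped reason, or dismiss with a logged justification.
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    if len(reasons) >= 2:
        return f"ESCALATE [{stamp}]: " + "; ".join(reasons)
    return f"DISMISS [{stamp}]: insufficient corroboration (logged)"
```

The point isn't the code; it's that every escalation or dismissal carries a timestamped, human-readable reason, which is what protects the analyst if the vessel resurfaces.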

4. Make + 5. Tweak/Test

The most fun: 7 prototypes in 30 min — across 6 tools

Each tool had a varying amount of context, depending on which ones had access to the conversations above; that mismatch helped produce variety in the prototypes.

I thought Make and Tweak/Test would be two distinct sprint phases, but because each tool needed time to "think," it made sense to tweak some prototypes while I waited for others to finish.

No winner selected. Each had pros and cons, and it would be best to validate with users and more research before choosing.

Deliverable 5: Prototypes
7 Prototypes · 6 Tools
Claude VS Code — Project Triton Dashboard
Claude · VS Code
Map + evidence focus. No alert list.
Claude Browser — Dark Fleet
Claude · Browser v1
Dark Fleet command center. Tiny, dim fonts - poor readability.
Claude Browser — Watchstander
Claude · Browser v2
Accessibility iteration, still hard to read.
Figma Make — Save The Ocean
Figma Make
Easiest to read. The map is confusing, and collapsible lists are a pain in this context.
Gemini — Triton Command Dashboard
Gemini
Triton Command had more approachable colors.
Lovable — Project Triton
Lovable
Project Triton had the most cinematic sci-fi look but used nationality signals that introduced bias (see the analysis below).
Claude Code Desktop — Maritime Dispatcher Triage Dashboard
Claude · Desktop
The dispatcher triage view had an interesting action bar at the bottom.
Skipped

Human Judgment Corrections — too many to fix mid-sprint. See below.

Human Judgment

AI moves fast. It also fills gaps with whatever conventions it knows. My job was to notice when those conventions were wrong for this context.

What I evaluated across the prototypes

Information hierarchy, accessibility, tone, and example data. All had pros and cons, so there's no clear "winner". Figma Make was the friendliest. Lovable was the most cinematic.

Tone: What AI defaulted to

Three of seven prototypes had targeting reticles on the map, which I hadn't asked for. The dark theme was partly my own doing (starting prompts), but the military surveillance conventions crept in on their own.

Unprompted. 3 of 7 prototypes.
Targeting reticle from AI-generated prototype

Example Data: What AI got wrong

Flags and real vessel names introduced political bias I wasn't comfortable with.

I caught it mid-sprint but didn't have time to resolve it all. Post-sprint, I fixed the example data below. I also added a tone and example-data review to my v2 design sprint process.

Before — AI-generated prototype
Before: AI-generated prototype showing country flags and vessel names
CHANG XING 7 🇨🇳 CHN
LUCKY FORTUNE 🇹🇼 TWN
After — Human-corrected data
After: Human-corrected prototype with neutral vessel identifiers
FV-2041 – PACIFIC MARLIN (flag removed)
FV-3892 (flag removed)

Design correction

Vessel name nationality cues were replaced with neutral vessel identifiers to avoid unintended geopolitical implications in prototype data.

I could have changed the flags to fictional ones; however, post-sprint analysis made it clear there is no user benefit to showing a flag at all. It's just noise. An efficient compliance system prioritizes behavioral signals and evidence confidence.

Targeting reticles are also being removed from the map components; they're unprompted military conventions that don't belong in a compliance tool.

Updating the Design Sprint Process (v1 → v2)

Learnings from v1 of my Design Sprint Process

Design Sprint Process v1

1. Define a Big Problem
Identify the core problem to solve.
2. Research Problem Space
Gather context and existing data to ground the sprint.
3. Brainstorm Solutions
Ideate rapidly with AI chatbots for diverse options.
4. Make
High-fidelity AI prototyping using the context defined so far.
5. Test / Tweak
Review outputs, iterate.
Pain Points:
  • No business model defined: Who pays for Fair Seas? v1 never surfaced this. Without a customer definition, the user and workflow were ill-defined.
  • No competitive research: What exists today? Post-sprint, I found existing solutions that could have helped me sharpen a competitive advantage.
  • No end artifact defined: I had to do a lot of retroactive artifact gathering after the sprint. Next time I want stronger deliverables defined.
  • No accessibility constraints: Nearly every AI prototype had readability problems. I need to improve the starting prompts to include some guardrails.
  • No screenshot discipline: Prototype iterations lived in 6 locations, not easily recoverable for the retrospective.

Design Sprint Process v2

1. Define the Sprint Deliverable
Choose the target artifacts: presentation, video, case study, or prototype demo.
2. Research & Explore
Ground the sprint with research, define the anchor card, set accessibility and tone requirements.
3. Make
Prototype and document decisions with screenshots at each step.
4. Test / Tweak
Validate against the deliverable. Review example data for real-world accusation implications.
5. Sprint Closeout
Capture the final state and the anchor card; designate artifacts as verified or illustrative.
Deliverable: Sprint Anchor Card
  • Problem statement (1-2 sentences)
  • Customer (who pays, why)
  • User (who, context, what they need)
  • Key constraints (accessibility, tone, audience)
  • 2-3 source-verified facts to ground the design direction
Changes Made:
  • Defined deliverables: Added the step "Define the Sprint Deliverable" first
  • Consolidated inseparable steps: Combined steps 1–3 into one step "Research & Explore"
  • Tone constraint: Added tone definition (i.e. tool, not weapon) to Research & Explore
  • Documentation rigor: Added artifact "Sprint Anchor Card" to be maintained throughout the Design Sprint
  • Documentation rigor: Added artifacts to the Make step (screenshots, URLs, tips for each prototype) and Test/Tweak (changes made, screenshots)
  • Added human judgment review: for example data in the Test/Tweak phase
  • Finishing cleanly: Added step "Sprint Closeout" to wrap up deliverables
GO ✓

Sprint 1 closed with a Go decision. The concept is real. The process is better. What comes next isn't another sprint — it's a structured validation phase before narrowing further.

What comes between sprints

Not every next step is a full sprint. This phase validates before building further — and may loop before moving on.

Research & Validate
Users, competitors, technical feasibility. Don't build further on an assumption.
Synthesize & React
Review findings, update the brief, decide what changes — and what doesn't.
Targeted Iterations
Minimum effective change before the next validation loop.
Focused Design Sprint
One problem space at a time, as the concept matures.

Fair Seas · Sprint 1 · Cami Farley
