Architecting AI-Native Intelligence Systems
Senior AI Product Lead at Liminal, transforming market intelligence from data aggregation into self-improving AI-native platforms with competitive moats.
Building the evolution from Events (raw market signals) → Insights (pattern recognition) → Intelligence (actionable recommendations) through multi-agent orchestration, physics-based modeling, and semantic understanding.
Reimagining Business Relationship Intelligence
Nodal Graph Initiative
Physics-based visualization encoding multi-dimensional business relationships
The Problem
Static visualizations don't capture the multi-dimensional nature of entity relationships. Graphs show connections but miss nuances of relationship strength, market influence, competitive dynamics, and strategic alignment.
The Solution
Built a proprietary encoding, the Liminal Taxonomy: a physics-based force-directed graph that maps business metrics onto physical forces:
- Market capitalization → Node mass (inertia and gravitational influence)
- Investor relationships → Gravitational forces (attraction strength)
- Strategic alignment → Variable edge tension (partnership quality)
- Market competition → Semantic repulsion (competitive dynamics)
- Market volatility → Thermodynamic temperature (system stability)
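A minimal sketch of how these encodings could feed a simulation tick; the node shapes, constants, and rest-length spring term are illustrative assumptions, not the production D3/Pixi.js implementation:

```typescript
// Illustrative sketch of the metric-to-force encoding; names are hypothetical.
interface CompanyNode {
  id: string;
  marketCap: number;      // → node mass (inertia, gravitational influence)
  x: number; y: number;
  vx: number; vy: number;
}

interface Relationship {
  source: string;
  target: string;
  investorStrength: number;   // → gravitational attraction
  alignment: number;          // → edge tension (0..1)
  competitive: boolean;       // → semantic repulsion
}

// One simulation tick: accumulate pairwise forces, then integrate velocity.
function tick(nodes: Map<string, CompanyNode>, rels: Relationship[], dt = 0.016): void {
  for (const r of rels) {
    const a = nodes.get(r.source)!;
    const b = nodes.get(r.target)!;
    const dx = b.x - a.x, dy = b.y - a.y;
    const dist = Math.max(Math.hypot(dx, dy), 1e-3);
    // Gravity scaled by both masses (market caps) and investor relationship strength.
    let f = (r.investorStrength * a.marketCap * b.marketCap) / (dist * dist);
    // Competitors repel: flip the sign of the force.
    if (r.competitive) f = -f;
    // Strategic alignment tightens the edge like a spring toward a rest length.
    f += r.alignment * (dist - 50);
    const fx = (f * dx) / dist, fy = (f * dy) / dist;
    a.vx += (fx / a.marketCap) * dt; a.vy += (fy / a.marketCap) * dt; // a = F / m
    b.vx -= (fx / b.marketCap) * dt; b.vy -= (fy / b.marketCap) * dt;
  }
}
```

Heavier nodes (larger market caps) accelerate less under the same force, which is exactly the inertia behavior the mass encoding is meant to convey.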
Automating Intelligence — From Raw Events to Strategic Insights
AI Insights Agent
Multi-agent correlation engine bridging events and actionable intelligence
The Problem
Individual market events are isolated data points. The strategic question is never "what happened?" but "what does this pattern mean?" Manual correlation across 500+ intelligence sources is economically unviable — analyst hours are expensive, survey data sits underutilized, and the insight layer between events and intelligence remained empty.
The Solution
Built a 10-tool LangChain-orchestrated AI agent that triggers on events flagged as "major," performs a minimum of 5 research steps across entity details, funding history, similar companies, investor networks, and use case data — then generates 3–5 high-quality insights correlating at minimum 3–4 events per insight. The system enforces non-obvious connections structurally, not aspirationally.
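The structural enforcement described above (major-event trigger, minimum research steps, 3–5 insights correlating 3+ events each) can be sketched as a validation gate; the interfaces and thresholds here are illustrative stand-ins, not the production LangChain agent:

```typescript
// Hypothetical shapes; the real agent orchestrates 10 tools via LangChain.
interface MarketEvent { id: string; severity: "minor" | "major"; }
interface Insight { title: string; correlatedEventIds: string[]; }

const MIN_RESEARCH_STEPS = 5;
const MIN_EVENTS_PER_INSIGHT = 3;
const MIN_INSIGHTS = 3, MAX_INSIGHTS = 5;

// Gate run before insights are persisted: the quality bar is enforced by
// validation, not by prompt wording alone.
function validateRun(trigger: MarketEvent, researchSteps: string[], insights: Insight[]): string[] {
  const errors: string[] = [];
  if (trigger.severity !== "major") errors.push("agent only triggers on major events");
  if (researchSteps.length < MIN_RESEARCH_STEPS)
    errors.push(`needs >= ${MIN_RESEARCH_STEPS} research steps, got ${researchSteps.length}`);
  if (insights.length < MIN_INSIGHTS || insights.length > MAX_INSIGHTS)
    errors.push(`expected ${MIN_INSIGHTS}-${MAX_INSIGHTS} insights, got ${insights.length}`);
  for (const i of insights) {
    // Count distinct events so duplicated IDs can't satisfy the bar.
    if (new Set(i.correlatedEventIds).size < MIN_EVENTS_PER_INSIGHT)
      errors.push(`insight "${i.title}" correlates fewer than ${MIN_EVENTS_PER_INSIGHT} distinct events`);
  }
  return errors;
}
```

A run that fails the gate can be retried or escalated rather than surfacing weak insights to customers.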
- 3-Tier Intelligence Model — Implements Liminal's Events (data) → Insights (patterns) → Intelligence (prescriptions) architecture; this PRD delivers the Insights layer
- Entity-Specific Framing — entityFraming array delivers "so what for you" contextualization per enterprise customer, translating market-level patterns into organization-specific meaning; filtered at both the API and UI level by organizationID + entityID
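A minimal sketch of that dual-level filter, assuming a flat entityFraming record shape (hypothetical; the real schema isn't shown here):

```typescript
// Hypothetical record shape for the entityFraming array.
interface EntityFraming {
  organizationID: string;
  entityID: string;
  soWhat: string;  // "so what for you" contextualization text
}

// Predicate applied at the API layer; the UI applies the same filter,
// so a framing never leaks across organizations or entities.
function framingFor(all: EntityFraming[], organizationID: string, entityID: string): EntityFraming[] {
  return all.filter(f => f.organizationID === organizationID && f.entityID === entityID);
}
```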
Building a Knowledge Synthesis Platform
Team Spaces (v1 → v1.5 → Analyst Desk)
Three-phase evolution from collaborative workspace to analyst-augmented intelligence delivery
The Problem
Individual intelligence tools create information silos. Users can't collaborate on analysis, share curated insights with teams, or build organizational knowledge repositories — limiting adoption and leaving switching costs low.
v1.5 identified a deeper inversion: the platform treated user intelligence as secondary to Liminal objects — comments as footnotes to data. Users generate knowledge first (from customer calls, conferences, internal analysis) then seek supporting data. The platform inverted this workflow. Additionally, extracting findings from curated Space content required hours of manual review across multiple content types.
The Solution — Three Phases
Phase 1 — Team Spaces v1
Organization-scoped collaborative Spaces with 5 content types (Companies, Products, Events, Reports, Use Cases), role-based permissions (View/Edit/Admin), comments, @mentions, reactions, live references, and share links with configurable public/private access.
Phase 2 — Team Spaces v1.5 (Knowledge Synthesis)
Transformed Spaces from content curation into knowledge synthesis. Two capabilities:
- Analysis Objects: User-generated knowledge as first-class content — rich text with linked Liminal objects, dedicated Analysis tab with sidebar navigation. Notion-inspired functionality anchored to proprietary Liminal data, bringing knowledge creation into the platform rather than exporting to external tools.
- Chat with Space (Spaces Copilot): Large context window LLM integration (Gemini 2.5 Flash, 1M+ token context) enables natural language queries across all Space content. Returns synthesized findings in <5s. Grounded exclusively in Space content — no external data, no hallucination. Target: 0.4+ DAU/WAU stickiness ratio.
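Exclusive grounding can be sketched as prompt assembly that admits only Space content; the field names and instruction wording below are assumptions, not the production prompt:

```typescript
// Illustrative Space content item; types mirror the five curated content
// types plus Analysis Objects.
interface SpaceItem {
  type: "Company" | "Product" | "Event" | "Report" | "UseCase" | "Analysis";
  title: string;
  body: string;
}

// Build a prompt whose only context is Space content, so the model has
// nothing external to draw on and is told to refuse out-of-scope questions.
function buildGroundedPrompt(items: SpaceItem[], question: string): string {
  const context = items
    .map((it, i) => `[${i + 1}] (${it.type}) ${it.title}\n${it.body}`)
    .join("\n\n");
  return [
    "Answer using ONLY the Space content below.",
    'If the answer is not in the content, say "Not found in this Space."',
    "",
    "--- SPACE CONTENT ---",
    context,
    "--- END CONTENT ---",
    "",
    `Question: ${question}`,
  ].join("\n");
}
```

The 1M+ token context window is what makes this approach viable: entire Spaces can be passed verbatim instead of retrieved piecemeal.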
Phase 3 — Analyst Desk
Extended Team Spaces to support Liminal analyst participation in customer Spaces. Analysts can be explicitly added as members, post Custom Analyses with individual attribution (name + color/badge identifier), and interact with customers — while customers see clear visual indicators of analyst involvement. Built a centralized Analyst Desk engagement inbox for managing customer engagement across all organizations, with assignment tracking and in-context response capability. Positioned as the transition from self-service platform to analyst-augmented intelligence delivery.
Key Capabilities
- Analysis Objects — User intelligence as first-class content; linked to Liminal objects as supporting references (not footnotes)
- Spaces Copilot (Chat with Space) — Large context window LLM; synthesizes findings across all content types in <5s; grounded exclusively in Space content; private per-user conversation history
- Analyst Desk — Liminal analysts embedded in customer Spaces with visual indicators; centralized engagement inbox with assignment/status tracking; dual permission model (omnipresent admin vs. explicit member)
- Product-Led Growth Loop — Share functionality creates viral adoption; team repositories create switching costs through organizational knowledge accumulation
- Retention Architecture — 90-day retention target of 70%+ for Space users vs. non-Space users; habitual usage through the Analysis creation + Chat consumption loop
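The DAU/WAU stickiness target cited above (0.4+) is the mean daily active count over a week divided by the distinct weekly actives; a minimal sketch of the computation:

```typescript
// DAU/WAU stickiness: mean daily actives divided by distinct weekly actives.
// A user active most days of the week pushes the ratio toward 1.0.
function stickiness(dailyActiveUsers: Set<string>[]): number {
  if (dailyActiveUsers.length === 0) return 0;
  const weeklyActive = new Set<string>();
  for (const day of dailyActiveUsers) for (const u of day) weeklyActive.add(u);
  if (weeklyActive.size === 0) return 0;
  const meanDau = dailyActiveUsers.reduce((s, d) => s + d.size, 0) / dailyActiveUsers.length;
  return meanDau / weeklyActive.size;
}
```

A 0.4 ratio means the average user returns roughly 3 days out of 7, which is the habitual-usage bar the Analysis + Chat loop is designed to clear.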
Enabling Organization-Adaptive Intelligence
Custom Use Case Creator
AI-assisted workflow transforming rigid taxonomy into organization-adaptive platform
The Problem
Liminal's taxonomy was fixed at build time — universe, domains, industries, verticals, sub-verticals. For customers in niche or non-standard markets, that mismatch was a real blocker: their workflows didn't map, their entities weren't surfaced, and customization by the taxonomy team took weeks to months. The platform couldn't flex to them.
The Solution
AI-assisted self-service workflow enabling organizations to create custom use cases matching their specific business context:
- Natural language input: Users describe use case needs in plain language or reference existing use cases
- AI taxonomy mapping: Automatic mapping to Liminal's taxonomy hierarchy (universe/domains/industries/verticals/sub-verticals)
- Structured content generation: AI generates use case name, definition, workflow steps, product capabilities
- Entity suggestions: AI recommends relevant products/entities from organization's existing data
- Iterative refinement: Users review, edit, and refine AI suggestions before finalizing
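One plausible sketch of the taxonomy-mapping step, assuming precomputed embeddings for both the taxonomy nodes and the user's description (the embedding model and ranking details are assumptions, not the shipped pipeline):

```typescript
// Rank taxonomy nodes against a use case description by cosine similarity
// of embedding vectors; the vectors themselves would come from an
// embedding model, which is out of scope for this sketch.
type Vec = number[];

function cosine(a: Vec, b: Vec): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Hypothetical node shape, e.g. path = "universe/domain/industry/vertical".
interface TaxonomyNode { path: string; embedding: Vec; }

function mapToTaxonomy(descriptionEmbedding: Vec, nodes: TaxonomyNode[], topK = 3): TaxonomyNode[] {
  return [...nodes]
    .sort((x, y) => cosine(descriptionEmbedding, y.embedding) - cosine(descriptionEmbedding, x.embedding))
    .slice(0, topK);
}
```

Returning the top-K candidates rather than a single match is what makes the iterative-refinement step above meaningful: users pick or correct from ranked suggestions.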
Leadership & Execution Excellence
AI/ML Technical Architecture
AI/ML Systems
- Graph Neural Networks for relationship embeddings
- Multi-agent LLM orchestration with tool routing
- ML-based clustering and community detection
- Semantic search with embedding optimization
- Temporal prediction models for ecosystem evolution
System Architecture
- Web Worker-based physics simulation (D3 + Pixi.js)
- WASM acceleration for performance at scale
- Vector database design with progressive loading
- Knowledge graph integration and relationship scoring
- Hierarchical MongoDB collection architecture