Strategic Climate Risk Modeling for Organizational Decision-Making
A technical proposal for a scalable Science-as-a-Service platform that empowers organizations to understand and act on climate risks
About the Candidate
Evie Marie Kolb
Technical Leadership & Climate Tech Innovation
I'm a full-stack software engineer with more than 5 years of industry experience, passionate about building systems that make complex climate science accessible and actionable. My approach combines a deep understanding of intentional, carefully architected distributed systems and development-lifecycle best practices with a strong sense of purpose: using my professional expertise to forge a better future for all of us.
This proposal represents my vision for how we can transform climate risk assessment from an academic exercise into a practical tool that drives real-world decision-making and empowers organizations and individuals with a real sense of agency at a time of monumental crisis.
Beyond the Code
Outside of work, I stay engaged with climate tech innovation and strategic investing through podcasts and reading. I'm an avid rock climber and enjoy climbing with my wife and 4-year-old daughter. I love creative pursuits like cooking, singing, dancing, acting, sketching, and calligraphy, as well as staying active through running, biking, and swimming. I also enjoy tinkering with custom Linux configurations and home automation projects.
The Challenge We're Solving
Data Complexity
Climate science produces massive, highly technical datasets that require specialized knowledge to interpret and apply
Access Barriers
Organizations lack the technical infrastructure and expertise to work with geospatial climate data at scale
Decision Gap
Business leaders need clear, actionable insights about climate risks specific to their assets and operations
Our platform bridges the gap between world-class climate science and practical business decision-making, making sophisticated risk analysis accessible to organizations of all sizes.
Key Assumptions and Risks
Key Assumptions
  • GCP infrastructure will provide the scalability and reliability needed for climate data workloads.
  • Zarr format will meet performance requirements for both storage and visualization.
  • Multi-tenant architecture can effectively isolate customer data while maintaining performance.
  • Climate science data sources will remain accessible and can be integrated into our pipelines.
  • Target customers have the technical capacity to integrate with our APIs and use web-based mapping tools.
Key Risks and Mitigations
  • Data format performance: Mitigated by parallel exploration of tile server and Zarr-in-browser approaches with benchmarking.
  • Scalability under load: Mitigated by cloud-native architecture with auto-scaling and serverless components.
  • Data security and compliance: Mitigated by principle of least privilege, encryption at rest and in transit, and SOC 2 compliance roadmap.
  • Integration complexity: Mitigated by GraphQL API flexibility, strong type safety throughout the monorepo, and comprehensive SDK development.
  • Vendor lock-in: Mitigated by containerization, infrastructure-as-code, and a ports-and-adapters architecture enabling multi-cloud and multi-service portability.
Project Overview
Vision
A multi-tenant SaaS platform that transforms complex climate datasets into clear, actionable risk assessments tailored to each organization's specific assets and geography.
Core Value Proposition
  • Self-service climate risk analysis
  • Asset-specific scenario modeling
  • AI-powered insights and reporting
  • Enterprise-grade security and scalability
Development Roadmap
1
Phase 1: Foundation
Proposal & Architecture
  • Complete technical specifications
  • Stakeholder alignment
  • Repository scaffolding
2
Phase 2: MVP Build
Core Platform Development
  • Multi-tenant infrastructure
  • Data ingestion pipeline
  • Basic risk assessment tools
3
Phase 3: Enhancement
AI & Advanced Features
  • Automated report generation
  • Scenario modeling
  • Custom visualizations
4
Phase 4: Scale
Production & Growth
  • Beta customer onboarding
  • Performance optimization
  • Feature expansion
Target delivery for MVP: Q2 2025, with phased feature rollouts continuing through Q4 2027
Architecture Philosophy
Our technical approach prioritizes scalability, security, and scientific accuracy while maintaining the flexibility to evolve with emerging climate science and customer needs.
01
Cloud-Native by Design
Built on Google Cloud Platform for global scale, reliability, and access to cutting-edge AI capabilities
02
Multi-Tenant Security
Strict data isolation ensures each organization's information remains private and secure
03
Plugin Architecture
Extensible design allows us to add new features, A/B test parallel feature designs, and migrate from one IaaS/SaaS service to another in days rather than weeks, all without disrupting existing functionality
04
API-First Integration
GraphQL API enables seamless integration with existing enterprise systems and workflows
System Architecture Overview
Our architecture follows a modern, layered approach that separates concerns while maintaining high performance. From edge delivery through authentication to backend services and data persistence, each layer is designed for independent scaling and maintenance.
Client Experience Layer
Tenant Web Application
A responsive Next.js application providing intuitive access to climate risk assessments, asset management, and scenario exploration.
  • Interactive maps and visualizations
  • Asset portfolio management
  • Custom report generation
  • Collaborative workspace
Administrative Dashboard
Internal tools for platform management, tenant onboarding, and system monitoring.
  • Tenant provisioning and configuration
  • Usage analytics and billing
  • Data quality monitoring
  • Support ticket management
Edge & Delivery Infrastructure
Global CDN
GCP Cloud CDN caches static assets and frequently accessed data globally, ensuring fast load times for users worldwide
Intelligent Routing
Global Load Balancer handles TLS/SSL termination and routes traffic to the nearest healthy backend instance
DDoS Protection
Built-in protection against distributed attacks and traffic spikes maintains platform availability
This edge infrastructure ensures speedy initial page loads and resilient service delivery even during traffic surges or regional outages.
Authentication & Security
GCP Identity Platform
Enterprise-grade authentication supporting SSO, MFA, and multiple identity providers (Google, Microsoft, and SAML-based providers)
Multi-Tenant Isolation
Each organization's data is strictly segregated at the database and storage layers, with row-level security policies
Role-Based Access Control
Granular permissions system allows organizations to define custom roles and access policies for their teams
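To make the RBAC idea concrete, here is a minimal sketch of a permission check against organization-defined roles. The role names and permission strings are illustrative assumptions, not the platform's actual policy model:

```typescript
// Hypothetical RBAC sketch: permission strings and role names are
// illustrative, not the platform's real policy vocabulary.
type Permission = "assets:read" | "assets:write" | "reports:generate" | "tenant:admin";

interface Role {
  name: string;
  permissions: Set<Permission>;
}

// Example custom roles an organization might define for its team.
const roles: Record<string, Role> = {
  analyst: { name: "analyst", permissions: new Set<Permission>(["assets:read", "reports:generate"]) },
  manager: { name: "manager", permissions: new Set<Permission>(["assets:read", "assets:write", "reports:generate"]) },
};

// Deny by default: unknown roles and unlisted permissions both fail.
function can(roleName: string, permission: Permission): boolean {
  const role = roles[roleName];
  return role ? role.permissions.has(permission) : false;
}
```

In practice the role-to-permission mapping would live in Cloud SQL per tenant; the deny-by-default shape is the important part.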

All authentication tokens are short-lived and refreshed automatically. API requests include tenant context validation at every layer to prevent cross-tenant data leakage.
Backend Services Architecture
GraphQL API
Apollo Server running on Cloud Run provides a flexible, type-safe API for all client interactions. Multi-tenant context is enforced at the resolver level.
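The resolver-level enforcement can be sketched without any Apollo machinery: every resolver scopes its query by the tenant carried in the request context. The `Context` and `Asset` shapes below are assumptions for illustration, not the real schema:

```typescript
// Illustrative tenant-scoping sketch; Context and Asset shapes are
// assumptions, and the in-memory array stands in for Cloud SQL.
interface Context { tenantId: string }
interface Asset { id: string; tenantId: string; name: string }

const assets: Asset[] = [
  { id: "a1", tenantId: "t1", name: "Plant A" },
  { id: "a2", tenantId: "t2", name: "Depot B" },
];

// The resolver refuses to run without a tenant context and filters every
// read by tenantId, so cross-tenant reads are impossible by construction.
function resolveAssets(ctx: Context): Asset[] {
  if (!ctx.tenantId) throw new Error("missing tenant context");
  return assets.filter((a) => a.tenantId === ctx.tenantId);
}
```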
Background Workers
Containerized services handle compute-intensive tasks like geospatial transformations, raster reprojections, and scenario calculations.
AI Risk Reports
Vertex AI-powered service generates natural language insights and recommendations based on climate model outputs and asset characteristics.
All services are stateless, containerized, and auto-scale based on demand. This serverless approach minimizes operational overhead while maximizing reliability.
Data Layer Strategy
Structured Data
Cloud SQL (PostgreSQL)
  • User accounts & permissions
  • Asset metadata & locations
  • Scenario configurations
  • Analysis job tracking
Core Climate Data
Cloud Storage (Immutable)
  • Source raster datasets
  • Historical climate records
  • Model baseline outputs
  • Static reference data
Derived Outputs
Cloud Storage (Generated)
  • Zarr pyramids for visualization
  • Cloud-optimized GeoTIFFs
  • Scenario-specific tiles
  • Cached analysis results
Our hybrid storage strategy optimizes for both query performance and cost efficiency. Frequently accessed data is pre-processed into formats optimized for web delivery.
Data Format Strategy: Zarr as Source of Truth
Choosing the right data format is critical for a cloud-native climate risk platform that needs to handle massive geospatial datasets efficiently. Our strategy focuses on optimizing for performance, scalability, and integration within a modern cloud ecosystem.
Why Zarr?
  • Cloud-native architecture: Built specifically for object storage (GCS, S3, Azure Blob) with chunked storage that enables parallel reads/writes.
  • Efficient data access: HTTP range requests allow reading only the data chunks needed, avoiding expensive full-file downloads.
  • Compression and performance: Reduces storage costs while maintaining fast access times.
  • Scalability: Handles terabyte-scale climate datasets that traditional formats like GeoTIFF struggle with.
  • Python ecosystem integration: Works seamlessly with xarray and Dask for distributed computing and analysis.
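The "read only what you need" property comes directly from chunking: a pixel coordinate maps to exactly one chunk object in the store. A minimal sketch, assuming Zarr v2-style dot-separated chunk keys:

```typescript
// Why chunked storage enables partial reads: a coordinate plus the array's
// chunk shape identifies a single chunk object to fetch. Assumes Zarr
// v2-style "row.col" key encoding.
function chunkKey(index: number[], chunkShape: number[]): string {
  return index.map((i, d) => Math.floor(i / chunkShape[d])).join(".");
}

// A 10000x10000 raster stored as 1000x1000 chunks: reading pixel
// (4200, 7305) touches only the single chunk with key "4.7",
// not the whole file.
```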
Frontend Visualization: Active Exploration
For rendering Zarr data in the browser, multiple viable approaches exist. We are actively evaluating options to find the best balance of performance, user experience, development complexity, and long-term maintainability.
  • Custom WebGL layers: Leveraging frameworks like zarr-gl for Mapbox/MapLibre.
  • Deck.gl: For GPU-accelerated large-scale visualization of complex datasets.
  • Tile generation pipelines: For traditional web mapping services to serve pre-rendered data.
  • Direct browser-based Zarr reading: Utilizing libraries such as zarr.js for client-side data access.
For the MVP, we will explore both traditional tile server-based rendering and a Zarr-in-the-frontend approach side by side. This parallel exploration will enable performance benchmarking, A/B testing, and real-world evaluation to determine which approach best serves our use case. We're currently investigating map tiles generated from Zarr as well as Zarr pyramids for multi-resolution data access. Ultimately, we may adopt both approaches for different data visualization workflows based on our findings.
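The multi-resolution idea behind Zarr pyramids can be sketched as picking the coarsest level that still meets the resolution the viewport needs. The level-halving convention below (level 0 is full resolution, each level doubles the cell size) is an assumption; the exact convention will depend on how we generate the pyramids:

```typescript
// Hedged sketch of pyramid level selection. Assumes level 0 is the base
// (finest) resolution and each successive level halves resolution -- one
// common convention for multiscale pyramids, not a fixed standard.
function selectPyramidLevel(targetResolutionM: number, baseResolutionM: number, maxLevel: number): number {
  let level = 0;
  // Climb to coarser levels while they still satisfy the target resolution.
  while (level < maxLevel && baseResolutionM * 2 ** (level + 1) <= targetResolutionM) {
    level++;
  }
  return level;
}
```

For a 25 m base raster and a viewport that only needs 100 m detail, this picks level 2 (100 m cells), so the browser fetches a fraction of the base-level data.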
Event-Driven Processing
01
Request Submission
User initiates a risk assessment or scenario analysis through the web interface
02
Job Queuing
API publishes job details to Pub/Sub topic, returning immediate acknowledgment to user
03
Worker Processing
Background worker picks up job, performs geospatial computations, and generates outputs
04
Result Storage
Processed data is written to Cloud Storage and metadata updated in Cloud SQL
05
Notification
User receives real-time notification that their analysis is complete and ready to view
This asynchronous architecture allows us to handle computationally expensive operations without blocking user interactions or timing out requests.
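The five steps above can be sketched end-to-end with in-memory stand-ins for Pub/Sub, Cloud Storage, and Cloud SQL. All names here are illustrative:

```typescript
// In-memory simulation of the event-driven flow. The array stands in for
// a Pub/Sub topic, the Map for Cloud SQL job metadata, and the gs:// URI
// for a Cloud Storage output. All identifiers are illustrative.
type Job = { id: string; status: "queued" | "done"; resultUri?: string };

const queue: Job[] = [];
const jobTable = new Map<string, Job>();

// Steps 1-2: the API enqueues the job and acknowledges immediately,
// without waiting for the computation.
function submitJob(id: string): string {
  const job: Job = { id, status: "queued" };
  queue.push(job);
  jobTable.set(id, job);
  return "accepted";
}

// Steps 3-4: a worker drains the queue, writes outputs to storage,
// and updates the job metadata.
function runWorker(): void {
  let job: Job | undefined;
  while ((job = queue.shift()) !== undefined) {
    job.resultUri = `gs://results/${job.id}.zarr`;
    job.status = "done";
  }
}
```

Step 5 (real-time notification) would hang off the metadata update, e.g. via a subscription or push channel.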
Monorepo Structure
Our monorepo approach keeps all related code in a single version-controlled repository, making it easier to coordinate changes across services while maintaining clear boundaries between components.
/apps/
Application entry points and deployable services
/packages/
Shared libraries and domain logic
/config/
Environment and infrastructure configuration
/docs/
Architecture documentation and decision records
Apps Directory
Deployable applications that serve different user types and processing needs.
apps/api (GraphQL API)
GraphQL API server with src/server.ts, bootstrap.ts, context.ts, resolvers/, and schema/. Apollo Server providing type-safe API layer.
apps/web (Tenant Web Application)
Next.js tenant application with src/pages and components. Customer-facing web interface for risk assessment and visualization.
apps/admin (Administrative Dashboard)
Administrative interface for platform management and configuration.
apps/worker (Background Worker)
Background processing service with src/main.ts and jobs/. Handles computationally expensive geospatial operations asynchronously.
Packages Directory - Core Domain
Shared libraries containing business logic, data access, and interface definitions.
packages/domain
Pure business logic independent of infrastructure or service implementations.
packages/db
Database layer with prisma/schema.prisma and src/client.ts. Centralized data model and database client configuration.
packages/core
Shared utilities, configurations and logging infrastructure used across all services.
Packages Directory - Ports & Adapters
Interface definitions and concrete implementations following hexagonal architecture principles.
packages/ports
  • StorageService
  • LLMService
  • TelemetryService
  • AuthService
  • UserRepository
  • AssetRepository, etc.
Defines contracts without implementation details.
packages/adapters
  • storage/ (gcp-storage.ts, local-fs.ts)
  • llm/ (openai-llm.ts, vertexai-llm.ts)
  • auth/ (auth0.ts, gcp-identity.ts)
  • telemetry/ (gcp-telemetry.ts, console-telemetry.ts)
  • orm/ (prisma.ts)
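The ports-and-adapters split can be shown in a few lines: the port is a pure interface (as in packages/ports) and each adapter implements it. The `StorageService` methods below are assumptions for illustration (and synchronous for brevity; the real port would be Promise-based):

```typescript
// Sketch of the ports-and-adapters pattern. Method names on this
// StorageService are illustrative, not the actual contract.
interface StorageService {
  put(key: string, data: string): void;
  get(key: string): string | undefined;
}

// An in-memory adapter, analogous in spirit to adapters/storage/local-fs.ts;
// a GCS-backed adapter would implement the same interface.
class InMemoryStorage implements StorageService {
  private store = new Map<string, string>();
  put(key: string, data: string): void { this.store.set(key, data); }
  get(key: string): string | undefined { return this.store.get(key); }
}

// Domain code depends only on the port, so swapping the backing service
// is a one-line change where the application is composed.
function archiveReport(storage: StorageService, id: string, pdf: string): void {
  storage.put(`reports/${id}.pdf`, pdf);
}
```

This is what makes the vendor lock-in mitigation concrete: nothing in the domain layer mentions GCS, Auth0, or Vertex AI directly.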
Packages Directory - SDKs
Client libraries for integrating with the platform across different environments.
packages/sdk-core
Core SDK functionality and shared logic used by platform-specific implementations.
packages/sdk-react
React-specific hooks and components for building web applications on top of the platform.
packages/sdk-js
Vanilla JavaScript SDK for Node.js and browser environments without framework dependencies.
Config & Documentation
Environment configuration, infrastructure as code, and architectural documentation.
config/
Environment configuration with:
  • env/ (.env.local.example, .env.dev.example, .env.staging.example, .env.prod.example)
  • ci/ (github-actions.yml)
  • infra/terraform/ for infrastructure as code.
docs/
Architecture documentation with:
  • architecture/ (Containers Diagram, Plugin Architecture, MVP Implementation, Philosophy)
  • adr/ (Architecture Decision Records like ADR-0001-initial-architecture.md)
MVP Core Components
Tenant Management
Self-service organization setup, user invitation, and role assignment. Organizations can manage their own team members and access permissions without admin intervention.
Asset Registration
Upload and geocode asset locations through CSV import or manual entry. Supports both point locations and polygon boundaries for facilities, infrastructure, and land holdings.
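A minimal sketch of the validation-feedback half of CSV import, assuming a simple name/lat/lon column layout (the column names are hypothetical, and the real importer also handles polygons and address geocoding):

```typescript
// Hypothetical per-row validation for an asset CSV import. The
// name,lat,lon layout is an assumption for illustration.
interface AssetRow { name: string; lat: number; lon: number }

// Returns a parsed row, or a human-readable error string for the
// validation feedback shown in the upload UI.
function parseAssetRow(line: string): AssetRow | string {
  const [name, latStr, lonStr] = line.split(",").map((s) => s.trim());
  const lat = Number(latStr);
  const lon = Number(lonStr);
  if (!name) return "missing asset name";
  if (!Number.isFinite(lat) || lat < -90 || lat > 90) return "latitude out of range";
  if (!Number.isFinite(lon) || lon < -180 || lon > 180) return "longitude out of range";
  return { name, lat, lon };
}
```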
Risk Assessment Engine
Core calculation engine that intersects asset locations with climate hazard layers to generate risk scores across multiple scenarios and timeframes.
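The intersection step reduces, in its simplest form, to looking up the hazard grid cell containing an asset and bucketing the value into a score. The grid layout and score thresholds below are toy assumptions, not the engine's actual methodology:

```typescript
// Toy sketch of asset/hazard intersection. Grid layout (regular lat/lon
// cells) and the 0.33/0.66 score thresholds are illustrative assumptions.
interface Grid {
  originLat: number;
  originLon: number;
  cellSizeDeg: number;
  values: number[][]; // hazard values in [0, 1], indexed [row][col]
}

// Map a coordinate to its grid cell; out-of-extent locations read as 0.
function hazardAt(grid: Grid, lat: number, lon: number): number {
  const row = Math.floor((lat - grid.originLat) / grid.cellSizeDeg);
  const col = Math.floor((lon - grid.originLon) / grid.cellSizeDeg);
  return grid.values[row]?.[col] ?? 0;
}

function riskScore(hazard: number): "low" | "medium" | "high" {
  return hazard < 0.33 ? "low" : hazard < 0.66 ? "medium" : "high";
}
```

The production engine runs this per asset, per hazard layer, per scenario and timeframe, which is exactly the compute shape the background workers are built for.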
Interactive Visualization
Web-based mapping interface showing asset locations overlaid on climate hazard data with time-series animation and scenario comparison tools.
Basic Reporting
Exportable PDF reports summarizing risk scores, methodology, and data sources for compliance and stakeholder communication.
Data Ingestion Pipeline
Automated system for processing and cataloging new climate datasets as they become available from research partners and data providers.
Future Enhancements
Beyond the MVP, we've designed the platform to support advanced capabilities that will differentiate our offering and provide sustained value as customers mature their climate resilience programs.
AI-Powered Insights
Natural language report generation, anomaly detection in risk patterns, and intelligent scenario recommendations based on asset characteristics and industry benchmarks
Third-Party Integrations
Pre-built connectors for Enterprise Resource Planning (ERP) systems, GIS platforms, and ESG reporting frameworks. API webhooks for custom workflow automation
Collaborative Features
Shared workspaces, commenting on analyses, and approval workflows for multi-stakeholder risk assessment processes
Custom Scenario Modeling
Allow users to define their own climate scenarios by adjusting model parameters and combining multiple hazard types
Web Application UX Principles
Design Philosophy
  • Progressive Disclosure: Show simple views by default with options to drill into technical details
  • Contextual Guidance: In-app explanations of climate metrics and methodology without requiring prior expertise
  • Visual Hierarchy: Use color, size, and position to guide attention to the most critical risk indicators
Key User Flows
  1. Onboarding walkthrough introducing core concepts
  2. Quick-start asset upload with validation feedback
  3. Guided scenario selection based on industry and geography
  4. Results dashboard with drill-down to detailed risk factors
  5. One-click report generation for common use cases
Every interface decision prioritizes clarity over complexity, ensuring that users can confidently interpret results without requiring a background in climate science.
Security & Compliance Framework
Data Encryption
  • TLS 1.3 for all data in transit
  • AES-256 encryption for data at rest
  • Encrypted backups with key rotation
Access Controls
  • Least-privilege IAM policies
  • Multi-factor authentication required
  • IP allowlisting for sensitive operations
Compliance Ready
  • SOC 2 Type II aligned architecture
  • GDPR data residency options
  • Regular penetration testing

GCP Secret Manager ensures all API keys, credentials, and configuration secrets are encrypted, versioned, and audited with no secrets in code repositories.
The Right Fit for This Challenge
The Platform Solution
Our platform delivers robust climate risk modeling, transforming complex data into strategic insights for organizational decision-making.
The Right Technical Leader
I bring deep expertise in cloud architecture and full-stack development, a proven ability to architect and build enterprise-grade systems from the ground up, and a genuine passion for climate tech innovation that drives me to stay current with the field.
The Impact Together
This partnership combines the right technical solution with the right technical leader to bridge the gap between climate science and business strategy, empowering your organization to navigate future climate risks with confidence.