Friday, April 25, 2025

What is Dapr and how can it help reduce technical debt?

“Every line of code is a liability.”
Every senior engineer, ever

Modern software teams juggle dozens of cloud services, multiple languages, and a relentless delivery schedule. Microservice architectures promise flexibility, yet they often leave you buried under a mountain of glue code—custom retry logic here, hand-rolled pub/sub wrappers there, bespoke secret-store adapters everywhere. Eventually that glue hardens into technical debt.

Dapr (Distributed Application Runtime) tackles that debt head-on.




Quick Context: The Cloud-Lock-In Trap

With AWS, Azure, GCP, and friends racing to out-innovate each other, it’s tempting to lean hard on a provider’s “secret sauce” (DynamoDB, Event Grid, Pub/Sub, etc.). But deeper adoption ≈ deeper lock-in.
When leadership later asks, “Can we move to a cheaper region? A different provider?” you discover that exit costs—rewriting SDK calls, redeploying infra, untangling event contracts—can dwarf the original build cost.

Introduction
With the rise of cloud platforms (AWS, Azure, Google, etc.), microservices and other applications have emerged that take advantage of the diverse, unique products each platform offers. That adoption has an unfortunate drawback: the organization becomes dependent on whichever platform it starts with, and any later decision to change cloud providers is met with significant overhead and technical debt. Dapr aims to address this exact scenario.


Dapr in a Nutshell

Dapr is an open-source sidecar runtime that bolts a consistent, language-agnostic API onto your services. Whether you write in Go, C#, Python, or JavaScript, Dapr gives each microservice the same set of HTTP/gRPC endpoints for common distributed-system chores:

| Building Block | What It Solves | Typical Debt It Kills |
| --- | --- | --- |
| Service Invocation | Secure, resilient calls across services (with mTLS, retries, tracing). | Custom client libraries, brittle circuit breakers. |
| State Store | Key/value state via pluggable components for Redis, DynamoDB, Cosmos DB, etc. | Ad-hoc cache layers, provider-specific SDK code. |
| Pub/Sub | Eventing over Kafka, RabbitMQ, Azure Service Bus, Google Pub/Sub… | Cloud-specific publish code, DIY retry/ordering logic. |
| Bindings | Ingress/egress to external systems (S3, Blob Storage, Twilio, CRON…). | Cron containers, webhook boilerplate. |
| Secrets | Unified secrets retrieval from Vault, AWS Secrets Manager, Key Vault… | Scattered env-var hacks, bespoke secret loaders. |
| Actors | Virtual actor model for long-lived workflow/state. | Homemade saga/timeout managers. |

Because Dapr runs as a sidecar (container or process), your code talks to localhost—never to the underlying cloud broker directly. Swap Redis for DynamoDB? Just change a YAML component file; no recompile needed.
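
To make that concrete, here is a minimal sketch, assuming the Dapr .NET SDK (Dapr.Client) and a state component named "statestore" (the component name, key, and record type are illustrative, not from the original post):

csharp
// Minimal sketch: state access through the Dapr sidecar (Dapr .NET SDK).
// "statestore" is a hypothetical component name; point its YAML at state.redis,
// state.aws.dynamodb, etc. without touching this code.
using Dapr.Client;

var client = new DaprClientBuilder().Build();

// Save a value; the sidecar forwards it to whichever store the component defines.
await client.SaveStateAsync("statestore", "order-1001", new OrderStatus("Pending"));

// Read it back through the same API.
var status = await client.GetStateAsync<OrderStatus>("statestore", "order-1001");

record OrderStatus(string Status);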


Five Concrete Ways Dapr Reduces Technical Debt

  1. Eliminates Boilerplate

    Retry policies, exponential back-off, distributed tracing headers, JSON serialization—Dapr bakes these into the runtime, so you stop copy-pasting the same helper code and NuGet/npm packages into every service.

  2. Abstracts Cloud Primitives

    Need a queue? Use /v1.0/publish and point it at SQS today, RabbitMQ tomorrow. The codebase stays free of vendor SDK lock-in (a publish sketch follows this list).

  3. Standardizes Cross-Team Practices

    Dapr’s APIs create a contract that every squad follows. No more “payments team rolled their own protobuf format.” Fewer patterns to document; fewer surprises in on-call rotations.

  4. Eases Polyglot Development

    Mixed stack (Rust for data-crunching, Python for ML, .NET for web)? Each talks the same Dapr protocol. There’s no need to hunt down language-specific clients for every cloud service.

  5. Improves Testability

    Spin up Dapr components in Docker Compose or Kubernetes Kind. Unit tests hit localhost mocks; integration tests swap in the real Redis/Kafka only in CI. Cleaner seam, cleaner tests.
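
Item 2 above is the easiest to see in code. A minimal sketch, assuming the Dapr .NET SDK and a pub/sub component named "orders-pubsub" (component, topic, and payload names are illustrative):

csharp
// Minimal sketch: publish an event through the sidecar (Dapr .NET SDK).
// The broker behind "orders-pubsub" (SQS, RabbitMQ, Kafka, ...) is chosen by
// the component YAML, not by this code.
using Dapr.Client;

var client = new DaprClientBuilder().Build();

var orderPlaced = new { OrderId = 1001, Total = 42.50m };
await client.PublishEventAsync("orders-pubsub", "orders", orderPlaced);

Under the hood this is an HTTP POST to localhost:3500/v1.0/publish/orders-pubsub/orders, so services written in other languages can hit the same endpoint with any HTTP client.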


Hello, Dapr! (Tiny Sample)

bash
# 1. Run two sidecars locally
dapr run --app-id orderapi --app-port 5000 dotnet OrderApi.dll
dapr run --app-id shippingapi --app-port 6000 node ShippingApi.js

In OrderApi you invoke Shipping like this—no AWS SDK, no gRPC stubs:

csharp
using Dapr.Client;

var client = new DaprClientBuilder().Build();

var response = await client.InvokeMethodAsync<ShipRequest, ShipResponse>(
    "shippingapi",
    "create-shipment",
    payload);

Switch queues? Just edit components/pubsub.yaml:

yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: orders-pubsub
spec:
  type: pubsub.azure.servicebus   # change to pubsub.kafka, pubsub.redis, etc.
  metadata:
    - name: connectionString
      value: "<secret>"

No code change, no re-deploy of the service binaries—debt avoided.
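
On the receiving side, a service subscribes to the same component declaratively. A minimal ASP.NET Core sketch, assuming the Dapr.AspNetCore package and the orders-pubsub component above (endpoint and record names are illustrative):

csharp
// Minimal sketch: subscribe to the "orders" topic with Dapr.AspNetCore.
// The sidecar discovers subscriptions via MapSubscribeHandler and delivers
// events as CloudEvents to the mapped endpoint.
using Dapr;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseCloudEvents();        // unwrap CloudEvent envelopes
app.MapSubscribeHandler();   // expose /dapr/subscribe for the sidecar

app.MapPost("/orders", [Topic("orders-pubsub", "orders")] (OrderPlaced order) =>
{
    Console.WriteLine($"Creating shipment for order {order.OrderId}");
    return Results.Ok();
});

app.Run();

record OrderPlaced(int OrderId, decimal Total);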


When Dapr Isn’t a Silver Bullet

  • Simple monoliths may not need the extra hop of a sidecar.

  • Very high-throughput apps (e.g., tick-level trading) should benchmark Dapr overhead.

  • Deeply specialized vendor PaaS features (Athena, BigQuery) still require their SDKs.

But for 90% of microservice concerns—state, events, secrets, service discovery—Dapr’s abstractions outweigh the added operational overhead.


Key Takeaways

  • Technical debt often hides in plumbing code and vendor lock-in, not in business logic.

  • Dapr’s neutral APIs carve that plumbing out of your repos and config.

  • Less boilerplate → fewer bugs, faster feature work, cheaper cloud-migration paths.

  • Since Dapr is open source and a graduated CNCF project, you’re not trading one lock-in for another.

Rule of Thumb
If your backlog includes “migrate from X queue to Y queue,” “standardize retries,” or “add distributed tracing,” try Dapr first—it might turn an epic into a one-liner YAML change.



Cut the glue. Kill the debt. Give Dapr a spin on your next microservice.

Proposal for Integrating Apache Kafka and Redis for Real-Time Dashboard

 Executive Summary

An existing example dashboard stack—comprising a Next.js front-end, a legacy monolithic API that uses SignalR and a timer service, a database, an in-memory cache, and a TTForwarder console application (a hypothetical application that forwards real-time data)—suffers from latency, scaling limits, and fragile caching.
A modernized architecture that pairs Apache Kafka for high-throughput data streaming with Redis for low-latency caching and Pub/Sub notifications can deliver true real-time performance, simpler operations, and easier horizontal scalability.





Current Challenges

| Pain Point | Impact |
| --- | --- |
| Bloated monolith | The legacy API is large, tightly coupled, and hard to maintain. |
| Inefficient ingestion | TTForwarder pushes updates directly to the database; replacing it with a polling .NET service introduces additional latency. |
| Unreliable cache | In-memory caching is non-persistent—data vanishes on restart. |
| Update latency | SignalR combined with timer-based polling does not guarantee sub-second updates. |

Proposed Architecture at a Glance

text
Trading Technologies (TTSDK)
        │
        ▼
┌────────────────┐
│ TTDataIngester │ ➜ Kafka topic: tt_raw_data
└────────────────┘
        │
        ▼
┌────────────────┐
│ DataProcessor  │ ➜ Kafka topic: tt_processed_data + DB writes
└────────────────┘
        │
        ▼
┌────────────────┐
│ CacheUpdater   │ ➜ Redis KV + Redis Pub/Sub (tt_updates)
└────────────────┘
        │
        ▼
┌────────────────┐   WebSocket
│ DashboardAPI   │ ─────────────► Next.js Dashboard
└────────────────┘
| Component | Role | Tech Highlights |
| --- | --- | --- |
| TTDataIngester | Subscribes to TTSDK streams and publishes raw data to Kafka. | Confluent Kafka .NET client |
| DataProcessor | Consumes raw data, performs calculations, writes to DB, and republishes processed messages. | Stateless C# workers |
| CacheUpdater | Keeps Redis in sync and issues real-time notifications via Redis Pub/Sub. | StackExchange.Redis |
| DashboardAPI | Serves initial data from Redis and pushes updates to WebSocket clients. | Minimal API + WebSockets |
| Next.js Dashboard | Renders data and maintains a WebSocket connection for live updates. | SSR + React hooks |

Data Flow

  1. Ingestion – TTDataIngester → tt_raw_data topic

  2. Processing – DataProcessor → tt_processed_data topic + database

  3. Caching / Notify – CacheUpdater → Redis key-values + tt_updates Pub/Sub (see the sketch after this list)

  4. Initial Load – DashboardAPI fetches cached data for the Next.js client

  5. Real-Time Push – DashboardAPI listens on tt_updates and forwards updates over WebSockets

  6. UI Refresh – The dashboard updates UI elements immediately on receipt
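
A minimal sketch of the CacheUpdater loop from step 3, assuming the Confluent.Kafka and StackExchange.Redis client libraries (topic, key, and channel names follow the diagram above; hosts and group IDs are illustrative):

csharp
// Minimal sketch: consume processed messages from Kafka, cache them in Redis,
// and notify subscribers via Redis Pub/Sub.
using Confluent.Kafka;
using StackExchange.Redis;

var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "kafka:9092",               // illustrative host
    GroupId = "cache-updater",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

var redis = await ConnectionMultiplexer.ConnectAsync("redis:6379");   // illustrative host
var cache = redis.GetDatabase();
var publisher = redis.GetSubscriber();

using var consumer = new ConsumerBuilder<string, string>(consumerConfig).Build();
consumer.Subscribe("tt_processed_data");

while (true)
{
    var result = consumer.Consume();   // blocks until a message arrives

    // Keep Redis in sync with the latest processed value.
    await cache.StringSetAsync(result.Message.Key, result.Message.Value);

    // Tell DashboardAPI (and any other listener) that something changed.
    await publisher.PublishAsync(RedisChannel.Literal("tt_updates"), result.Message.Value);
}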


Key Benefits

  • Real-Time Performance – Kafka + Redis Pub/Sub achieve sub-second end-to-end latency.

  • Horizontal Scalability – Independent services scale out; Kafka partitions and Redis clustering handle load spikes.

  • Reliable Caching – Redis persistence eliminates data loss on restarts.

  • Simplified Maintenance – Replacing the monolith with focused services reduces code-base complexity.

  • Fault Tolerance – Kafka replication and Redis AOF/RDB snapshots provide durability.


Implementation Checklist

  • Kafka – Create tt_raw_data and tt_processed_data topics with appropriate partitions/replicas.

  • Redis – Enable AOF persistence; secure with TLS; expose only to internal services.

  • DashboardAPI – Use a lightweight WebSocket implementation (e.g., native System.Net.WebSockets or SignalR in hub-less mode); a sketch follows this checklist.

  • Security – Encrypt traffic (Kafka SASL_SSL, Redis TLS) and restrict network access.

  • Monitoring – Track Kafka consumer lag, Redis memory usage, and WebSocket connection counts.
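
A minimal sketch of the DashboardAPI push path, assuming ASP.NET Core with native System.Net.WebSockets and StackExchange.Redis (endpoint, host, and record names are illustrative apart from the tt_updates channel):

csharp
// Minimal sketch: accept a WebSocket connection and forward Redis Pub/Sub
// messages from the tt_updates channel to the connected dashboard client.
using System.Net.WebSockets;
using System.Text;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var redis = await ConnectionMultiplexer.ConnectAsync("redis:6379");   // illustrative host

app.UseWebSockets();

app.Map("/ws", async context =>
{
    if (!context.WebSockets.IsWebSocketRequest)
    {
        context.Response.StatusCode = 400;
        return;
    }

    using var socket = await context.WebSockets.AcceptWebSocketAsync();

    // Forward every cache-update notification to this client.
    var queue = await redis.GetSubscriber().SubscribeAsync(RedisChannel.Literal("tt_updates"));
    queue.OnMessage(async message =>
    {
        var bytes = Encoding.UTF8.GetBytes(message.Message.ToString());
        await socket.SendAsync(bytes, WebSocketMessageType.Text, true, CancellationToken.None);
    });

    // Hold the socket open until the client goes away.
    var buffer = new byte[1024];
    while (socket.State == WebSocketState.Open)
    {
        await socket.ReceiveAsync(buffer, CancellationToken.None);
    }

    await queue.UnsubscribeAsync();
});

app.Run();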


Potential Challenges & Mitigations

| Challenge | Mitigation |
| --- | --- |
| End-to-end latency spikes | Deploy Kafka and Redis on low-latency hosts; tune broker and OS network settings. |
| Data consistency | Make DataProcessor idempotent and use Kafka offsets plus DB constraints to avoid duplicates. |
| Scaling write load | Add Kafka partitions; spin up more CacheUpdater instances; shard Redis if needed. |
| Failure recovery | Enable Kafka dead-letter topics and automatic retries; configure Redis replication failover. |

Comparison with the Current Setup

| Aspect | Legacy Approach | Proposed Approach |
| --- | --- | --- |
| Data ingestion | TTForwarder writes directly to DB (adds latency) | Stream to Kafka for immediate downstream processing |
| Processing | .NET service updates DB only | DataProcessor writes to DB and re-publishes results |
| Caching | Non-persistent in-memory cache | Redis persistent cache |
| Real-time delivery | SignalR + timers (variable delay) | Redis Pub/Sub + WebSockets (sub-second) |
| Scalability | Limited by single monolith | Kafka + microservices scale independently |
| Reliability | Risk of data loss on restarts | Kafka replication + Redis persistence |

Adopting this Kafka-plus-Redis architecture replaces a fragile, tightly coupled system with a responsive, modular, and future-proof platform that delivers a true real-time trading dashboard experience.

Sample Database Migration Plan for EF-Core to FluentMigrator

 Executive Summary

A monolithic web solution (“MainWeb”) and a companion service (“Jobs.Api”) both use Entity Framework Core (EF Core) migrations against a shared PostgreSQL database. Maintaining duplicate entity definitions and migrations across two projects is inefficient—especially because Jobs.Api intends to drop EF Core altogether. This proposal outlines migrating both codebases to FluentMigrator, a framework-agnostic migration tool. A dedicated migration project will become the single source of truth, while Jobs.Api moves to Dapper for data access and MainWeb keeps EF Core strictly for querying. The approach streamlines schema management and supports an eventual move away from a monolith.


Current Situation

| Component | Status |
| --- | --- |
| MainWeb (Monolith) | Uses EF Core for data access and migrations; migration history lives in __EFMigrationsHistory. |
| Jobs.Api | Shares database objects with MainWeb; currently EF Core-based but aiming to remove that dependency. |
| Shared Database | One PostgreSQL instance managed by MainWeb’s EF Core migrations, requiring tight coordination. |

Challenges

  • Duplication — Entities exist in two codebases, creating redundant work.

  • Migration Coordination — Two different data-access strategies must agree on each schema change.

  • EF Core Dependency — Jobs.Api’s desire to drop EF Core is blocked by MainWeb’s migration workflow.

  • Monolith Constraints — MainWeb’s size makes a full rewrite impractical in the short term.


Proposed Solution

| Pillar | Details |
| --- | --- |
| Central Migration Project | Create Database.Migrations, powered by FluentMigrator. It owns every schema change. |
| Jobs.Api Transition | Remove EF Core, adopt Dapper, and apply migrations via the shared FluentMigrator project. |
| MainWeb Adaptation | Keep EF Core for querying, but stop using EF Core migrations. After each FluentMigrator run, manually synchronize the EF Core model. |
| Long-Term Path | Gradually refactor MainWeb into smaller services that also use FluentMigrator and lightweight data-access libraries. |

Implementation Plan

| Step | Objective | Key Actions |
| --- | --- | --- |
| 1. Create Database.Migrations | Centralize schema changes | Add FluentMigrator & Runner; configure the Postgres provider and VersionInfo table; add an InitialMigration that marks today’s schema as the baseline. |
| 2. Transition Jobs.Api | Eliminate EF Core | Remove EF Core packages; add Dapper; apply migrations on startup via the Runner. |
| 3. Adapt MainWeb | Keep EF Core for data access only | Disable context.Database.Migrate() calls; reference Database.Migrations; update entities/DbContext after each migration. |
| 4. Coordinate Schema Changes | Prevent drift | Author new migrations in Database.Migrations; CI/CD applies them; Jobs.Api auto-updates, while MainWeb updates its model manually. |
| 5. Preserve Existing EF Core History | Avoid disruption | Keep __EFMigrationsHistory as-is; treat the initial FluentMigrator baseline as authoritative going forward. |
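
A minimal sketch of steps 1 and 2, assuming the FluentMigrator and FluentMigrator.Runner packages with the Postgres provider (the migration number, connection string, and class names are illustrative):

csharp
// Minimal sketch: a baseline migration plus the runner wiring Jobs.Api could
// call at startup to apply pending migrations.
using FluentMigrator;
using FluentMigrator.Runner;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection()
    .AddFluentMigratorCore()
    .ConfigureRunner(rb => rb
        .AddPostgres()
        .WithGlobalConnectionString("Host=db;Database=app;Username=app;Password=secret")   // illustrative
        .ScanIn(typeof(InitialBaseline).Assembly).For.Migrations())
    .AddLogging(lb => lb.AddFluentMigratorConsole())
    .BuildServiceProvider();

using (var scope = services.CreateScope())
{
    scope.ServiceProvider.GetRequiredService<IMigrationRunner>().MigrateUp();
}

// Baseline: marks today's schema as the starting point; future changes live in
// new, numbered migrations.
[Migration(20250425001)]
public class InitialBaseline : Migration
{
    public override void Up()
    {
        // The shared database already has its objects; nothing to create here.
    }

    public override void Down() { }
}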

Benefits

  • Single Source of Truth — One migration pipeline eliminates drift.

  • Flexibility for Jobs.Api — Dapper + FluentMigrator removes heavyweight ORM overhead.

  • Future-Proofing — Aligns with a gradual break-up of the monolith.

  • Version Control Friendly — FluentMigrator’s C# migrations are easy to review in Git.


Risks & Mitigations

| Risk | Mitigation |
| --- | --- |
| Manual EF Core model updates may lag | Document the process; add validation tests to catch mismatches. |
| Extra coordination overhead | Define a clear “write-migration-first” workflow and automate via CI/CD. |
| Learning curve for FluentMigrator | Provide quick-start guides and internal training sessions. |
| Long-term MainWeb refactor effort | Plan phased service extraction to spread effort over time. |

Timeline (Indicative)

| Phase | Tasks | Duration |
| --- | --- | --- |
| Setup Migration Project | Create Database.Migrations, configure baseline | 1–2 weeks |
| Transition Jobs.Api | Remove EF Core, add Dapper, wire up migrations | 2–3 weeks |
| Adapt MainWeb | Disable EF Core migrations, document model-sync steps | 1–2 weeks |
| Process & CI/CD | Formalize workflow, automate migration execution | 1 week |
| Long-Term Refactor | Gradual decomposition of the monolith | Ongoing |

Conclusion

Moving to FluentMigrator centralizes database evolution, lets Jobs.Api shed EF Core, and positions the system for a future microservice architecture. By balancing short-term practicality with long-term flexibility, this plan reduces duplication, lowers maintenance overhead, and creates a clear, auditable history of schema changes.

Check out my FluentRun project on GitHub. It is a FluentMigrator tool that runs seamlessly with any .NET project: https://github.com/DotNetDeveloperDan/FluentRun

Fluent Migrator as a Standalone Option

Why a Stand-Alone FluentMigrator Project Beats EF Core Migrations for PostgreSQL

Modern .NET teams love how Entity Framework Core auto-generates migration scripts, but that convenience can come at a cost—especially when multiple databases, environment-specific tweaks, or non-EF data layers enter the picture.

This post walks through a practical example that swaps EF Core migrations for a dedicated FluentMigrator project plus a lightweight console utility. You’ll see how the combo delivers cleaner workflows, tighter control, and future-proof flexibility without giving up automation.

TL;DR

| ✔︎ FluentMigrator Console | ✖︎ EF Core Migrations |
| --- | --- |
| Works with any ORM—or none | Tied to EF Core only |
| Multi-database orchestration out of the box | One migration set per DbContext |
| Tags & config for dev / test / prod | DIY filtering required |
| Explicit C# scripts (no surprise SQL) | Auto-generated SQL can be verbose or inefficient |
| Fits blue-green / zero-downtime releases | Schema and code deploy together |

The Setup

Project Layout

text
src/
├─ App/
└─ Database.Migrations/   ← standalone FluentMigrator project

Console Utility Highlights

  • Multi-Database Support – Reads a list of databases from appsettings.json and runs migrations for each.

  • Stub Generator – The create command spits out timestamped C# templates so every migration starts out consistent.

  • Flexible Commands – migrate, rollback, rollback-prev, and rollback-all.

  • Environment Awareness – Pulls DOTNET_ENVIRONMENT / ASPNETCORE_ENVIRONMENT and loads the matching config file.

  • CI/CD Ready – Plays nicely with AWS CodeBuild + Secrets Manager.


Everyday Workflow

1. Define Your Databases

jsonc
// appsettings.json
{
  "FluentMigrator": {
    "Databases": {
      "SalesDb": {
        "Provider": "Postgres",
        "ConnectionString": "Host=db;Port=5432;Database=sales;Username=app;Password=secret"
      }
    }
  }
}

2. Generate a Migration Stub

bash
dotnet run create SalesDb AddUsersTable
csharp
// Migrations/SalesDb/202504211035_SalesDb_AddUsersTable.cs
using FluentMigrator;

namespace Database.Migrations.Migrations.SalesDb;

[Migration(202504211035)]
public class SalesDb_202504211035_AddUsersTable : Migration
{
    public override void Up()
    {
        Create.Table("Users")
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down() => Delete.Table("Users");
}

3. Apply Migrations

bash
dotnet run migrate # applies pending migrations to every configured DB

4. Roll Back When Needed

bash
dotnet run rollback 202504211035   # jump to a specific version
dotnet run rollback-prev           # undo the last migration

Why Bother?—Seven Big Wins

1. Decoupled Deployments

Schema changes travel in their own artifact, so blue-green deploys aren’t blocked by code releases. Run a migration on staging, smoke-test it, then promote to production without rebuilding your app container.

2. ORM Freedom

Need Dapper for perf-critical queries? Considering a move away from .NET someday? FluentMigrator is agnostic, so the migrations still work.

3. Total Control

Hand-crafted C# lets you tune every column type and index. No more digging into auto-generated SQL that adds “mysterious” constraints or verbose table renames.

4. Environment-Specific Logic

Use [Tags("Development")] to seed test data or skip heavyweight indexes in dev. Tags run (or don’t) based on the environment—no fragile if-statements scattered through code.
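
A minimal sketch of such a tagged migration, assuming FluentMigrator's Tags attribute (table, values, and version number are illustrative); the runner then executes it only when its configured tags, typically derived from the active environment, include "Development":

csharp
// Minimal sketch: a migration that runs only when the runner is configured
// with the "Development" tag.
using FluentMigrator;

[Tags("Development")]
[Migration(202504251200)]
public class SeedDevUsers : Migration
{
    public override void Up()
    {
        // Seed data that should never reach production.
        Insert.IntoTable("Users").Row(new { Name = "dev-tester" });
    }

    public override void Down()
    {
        Delete.FromTable("Users").Row(new { Name = "dev-tester" });
    }
}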

5. Cleaner Pull Requests

Migration classes live in Database.Migrations. Reviewers focus on schema diffs instead of wading through unrelated service code.

6. Multi-Database Made Simple

One config file lists all target DBs. The utility loops through each, so there’s no copy-paste of DbContext projects or juggling multiple EF migration histories.
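
A heavily simplified sketch of that loop, assuming Microsoft.Extensions.Configuration reads the appsettings.json shown earlier and one FluentMigrator runner is built per database entry (the actual console utility may differ; this version also assumes every entry is Postgres):

csharp
// Minimal sketch: iterate the configured databases and migrate each one.
using System.Reflection;
using FluentMigrator.Runner;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

foreach (var db in config.GetSection("FluentMigrator:Databases").GetChildren())
{
    // A real utility would switch on db["Provider"]; this sketch assumes Postgres.
    var provider = new ServiceCollection()
        .AddFluentMigratorCore()
        .ConfigureRunner(rb => rb
            .AddPostgres()
            .WithGlobalConnectionString(db["ConnectionString"])
            .ScanIn(Assembly.GetExecutingAssembly()).For.Migrations())
        .AddLogging(lb => lb.AddFluentMigratorConsole())
        .BuildServiceProvider();

    using var scope = provider.CreateScope();
    scope.ServiceProvider.GetRequiredService<IMigrationRunner>().MigrateUp();

    Console.WriteLine($"Migrated {db.Key}");
}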

7. CI/CD Integration

A tiny buildspec.yml is all that’s needed:

yaml
version: 0.2
phases:
  install:
    commands:
      - dotnet tool install -g FluentMigrator.DotNet.Cli
  build:
    commands:
      - dotnet restore
      - dotnet build Database.Migrations.csproj
      - export CONNECTION_STRING=$(aws secretsmanager get-secret-value --secret-id salesdb --query SecretString --output text)
      - dotnet run --project Database.Migrations.csproj migrate

The pipeline builds once, applies migrations, and hands off a schema-ready database to the app deploy stage.


Where EF Core Still Shines

  • Quick prototypes – If everything lives in one small codebase and EF Core already maps your entities, auto-generated migrations might be “good enough.”

  • Simple single-DB apps – No extra project to maintain when the model is small and the team is tiny.

But once multi-DB scenarios, microservices, or strict DevOps pipelines show up, FluentMigrator plus a console orchestrator wins on predictability and control.


Wrap-Up

Automatic migrations feel effortless—until they aren’t. A dedicated FluentMigrator project makes database evolution predictable, testable, and independent of any single framework. Couple it with a smart console utility, and you gain:

  • Cross-database orchestration

  • Environment-aware scripts

  • Precise, human-readable migrations

  • Zero-downtime deploy possibilities

Give it a spin on your next feature branch—you might never look back.
