Built to grow without breaking
Every piece of Transform Platform follows the Open/Closed principle: add capabilities without touching what already works.
Stream-First Engine
Records flow as a Kotlin coroutine Flow; the file is never fully loaded into memory. Process gigabyte files on a laptop without breaking a sweat.
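A minimal sketch of the idea, assuming kotlinx-coroutines on the classpath. The `records` and `countRecords` names are illustrative, not the platform's actual API:

```kotlin
import java.io.BufferedReader
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.count
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

// Emit one record per line; nothing is buffered beyond the current line,
// so memory stays flat no matter how large the input is.
fun records(reader: BufferedReader): Flow<String> = flow {
    reader.useLines { lines -> lines.forEach { line -> emit(line) } }
}

fun countRecords(reader: BufferedReader): Int =
    runBlocking { records(reader).count() }

fun main() {
    // A million synthetic records, consumed lazily by the Flow.
    val input = (1..1_000_000).joinToString("\n") { "record,$it" }
    println(countRecords(input.reader().buffered()))  // 1000000
}
```

Because `flow { }` is cold and pull-based, downstream operators apply backpressure naturally: each record is parsed, validated, and written before the next one is read.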
Learn more →
Plug-in Parsers
Add a new file format by implementing one interface and adding @Component. Spring auto-discovers it; zero changes to core.
Learn more →
Errors Stay Local
Validation errors attach to individual records, not the pipeline. The stream never stops: bad records are quarantined, good ones ship.
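The record-level error pattern can be sketched in plain Kotlin. `Valid` and `Quarantined` are illustrative names, not the platform's real types:

```kotlin
// Each record carries its own validation outcome, so one bad record
// never aborts the stream.
sealed interface ValidationResult
data class Valid(val fields: List<String>) : ValidationResult
data class Quarantined(val raw: String, val reason: String) : ValidationResult

fun validate(raw: String): ValidationResult {
    val fields = raw.split(",")
    return if (fields.size == 3 && fields.all { it.isNotBlank() })
        Valid(fields)
    else
        Quarantined(raw, "expected 3 non-blank fields, got ${fields.size}")
}

fun main() {
    val lines = listOf("a,b,c", "broken", "d,e,f")
    // Good records ship; bad ones are set aside for review.
    val (good, bad) = lines.map(::validate).partition { it is Valid }
    println("shipped=${good.size} quarantined=${bad.size}")  // shipped=2 quarantined=1
}
```

Because the outcome is part of the record itself, downstream stages can route on it (write valid records, archive quarantined ones) without any exception handling in the pipeline loop.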
Learn more →
Dynamic Integrations
Add or rotate SFTP, Kafka, REST, and S3 connections at runtime via API. Hot-reload with AES-256 credential encryption; no restarts.
Learn more →
Spec-Driven
FileSpec owns the schema. Parsers, correction rules, and validators all derive from it: a single source of truth per file format.
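A hypothetical sketch of the single-source-of-truth idea. The `FieldSpec` shape, `parseLine`, and `missingFields` are invented for illustration and are not the platform's actual FileSpec API:

```kotlin
// The spec defines each field once; everything else derives from it.
data class FieldSpec(val name: String, val width: Int, val required: Boolean = true)
data class FileSpec(val format: String, val fields: List<FieldSpec>)

// A parser derived from the spec: slice a fixed-width line by field widths.
fun parseLine(line: String, spec: FileSpec): Map<String, String> {
    var offset = 0
    return spec.fields.associate { f ->
        val value = line.substring(offset, minOf(offset + f.width, line.length))
        offset += f.width
        f.name to value.trim()
    }
}

// A validator derived from the same spec: no second schema to drift out of sync.
fun missingFields(record: Map<String, String>, spec: FileSpec): List<String> =
    spec.fields.filter { it.required && record[it.name].isNullOrBlank() }.map { it.name }

fun main() {
    val spec = FileSpec("DEMO", listOf(FieldSpec("id", 4), FieldSpec("amount", 8)))
    val record = parseLine("0001  125.50", spec)
    println(record)                        // {id=0001, amount=125.50}
    println(missingFields(record, spec))   // []
}
```

Change a field width or add a field in one place and the parser and validator both pick it up, which is the point of spec-driven design.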
Learn more →
Test-Friendly by Design
Open/Closed architecture: every parser, writer, and rule is independently testable. Kotest BDD specs ship with every module.
Learn more →
Connect to anything, dynamically
Add or update client connections via API at runtime. Credentials are AES-256 encrypted, and connectors hot-reload without service restarts.
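As a sketch of what AES-256 credential encryption involves, here is a round trip using only the JDK's built-in crypto. This is illustrative; the platform's actual key management and storage format are not shown here:

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Encrypt a credential with AES-256-GCM. The random 12-byte IV is
// prepended to the ciphertext so the blob is self-describing.
fun encrypt(key: SecretKey, plaintext: ByteArray): ByteArray {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv + cipher.doFinal(plaintext)
}

fun decrypt(key: SecretKey, blob: ByteArray): ByteArray {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, blob.copyOfRange(0, 12)))
    return cipher.doFinal(blob.copyOfRange(12, blob.size))
}

fun main() {
    // In production the key would come from a vault or KMS, not be generated inline.
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val secret = "sftp-password".toByteArray()
    check(decrypt(key, encrypt(key, secret)).contentEquals(secret))
    println("AES-256-GCM round trip ok")
}
```

GCM also authenticates the ciphertext, so a tampered credential blob fails to decrypt instead of silently producing garbage.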
Modular by design
Each module has a single responsibility. Deploy only what you need.
platform-common: Shared models, interfaces & utilities
platform-core: Pipeline engine · parsers · writers · rules
platform-api: REST API · FileSpec management
platform-pipeline: Orchestration · scheduling
platform-scheduler: Quartz-backed job management
New format in minutes
Implement one interface, drop one annotation. Spring discovers your parser or writer automatically; no changes to the registry, pipeline, or any existing code.
@Component
class NachaFileParser : FileParser {

    override fun supports(format: FileFormat) =
        format == FileFormat.NACHA

    override fun parse(
        stream: InputStream,
        spec: FileSpec
    ): Flow<ParsedRecord> = flow {
        // your parsing logic here
        emit(ParsedRecord(...))
    }
}

Ready to transform your data pipeline?
Explore the docs, clone the repo, and have a working pipeline in under an hour.