This position is closed and is no longer accepting applications.

Mid Career Data Engineer, Digital Bank, Tokyo

Money Forward · Minato-ku, Tokyo · April 3, 2026
  • 💴 ¥5.5M ~ ¥9.5M annually
  • 🏡 Partially remote
  • 🗾 Japan residents only
  • 💬 Business Japanese / Business English
  • 🧪 Intermediate level, 2+ years experience required

About Money Forward


Money Forward is a fintech startup delivering tools to visualize and improve the financial health of both individuals and companies.

Key benefits

  • Small but diverse team
  • Great support for OSS
  • Relocation support

About the position

Under the mission of “Money Forward. Move your life forward,” Money Forward aims to resolve the financial concerns and anxieties of individuals and businesses through the power of technology.
We have partnered with Sumitomo Mitsui Financial Group, Inc. and Sumitomo Mitsui Banking Corporation to establish a new company in preparation for the launch of a new digital bank.
We are currently seeking candidates for the position of Data Engineer as part of this initiative.

This position involves employment with Money Forward, Inc., and a secondment to the new company (SMBC Money Forward Bank Preparatory Corporation). The evaluation system and employee benefits will follow the policies of Money Forward, Inc.

Technology Stack

  • Cloud Infrastructure:
    • AWS (primary cloud platform, Tokyo region)
    • S3 for data lake storage, with VPC networking for secure connectivity
    • AWS IAM for security and access management
  • Data Lakehouse Architecture:
    • Modern lakehouse architecture using Delta Lake for ACID transactions, time travel, and schema evolution
    • Columnar storage formats (Parquet) optimized for analytics
    • Bronze/Silver/Gold medallion architecture for progressive data refinement
    • Partitioning strategies and Z-ordering for query performance
    • Unity Catalog for centralized governance and metadata management
  • Orchestration & Processing:
    • Databricks Workflows for managed workflow orchestration
    • Distributed data processing with Apache Spark on Databricks clusters
    • Serverless compute and auto-scaling clusters for cost optimization
    • Streaming and batch ingestion patterns with Databricks Auto Loader

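As an illustration of the Bronze/Silver/Gold medallion refinement listed above, here is a minimal, framework-free sketch in plain Python. The record shape and field names are hypothetical; in production these steps would run as Spark or Delta Live Tables jobs on Databricks rather than in-process lists.

```python
# Minimal bronze -> silver -> gold sketch (pure Python, hypothetical schema).
# Bronze: raw records as ingested, including duplicates and bad rows.
# Silver: validated, deduplicated records.
# Gold: business-level aggregates served to downstream consumers.
from collections import defaultdict

bronze = [
    {"txn_id": "t1", "account": "a1", "amount": 100},
    {"txn_id": "t1", "account": "a1", "amount": 100},   # duplicate ingest
    {"txn_id": "t2", "account": "a1", "amount": -40},
    {"txn_id": "t3", "account": "a2", "amount": None},  # fails validation
]

def to_silver(records):
    """Deduplicate on txn_id and drop records that fail basic quality checks."""
    seen, silver = set(), []
    for r in records:
        if r["amount"] is None or r["txn_id"] in seen:
            continue
        seen.add(r["txn_id"])
        silver.append(r)
    return silver

def to_gold(records):
    """Aggregate validated transactions into per-account balances."""
    balances = defaultdict(int)
    for r in records:
        balances[r["account"]] += r["amount"]
    return dict(balances)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'a1': 60}
```

The point of the layering is that each stage is independently queryable and auditable: bad or duplicate data is visible in Bronze, quarantined before Silver, and never reaches the Gold aggregates that downstream consumers see.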
  • Data Transformation:
    • dbt (data build tool) for SQL-based analytics engineering
    • Delta Live Tables for declarative ETL pipelines with built-in data quality
    • SQL and Python for data transformations
    • Incremental materialization strategies for efficiency
  • Query & Analytics:
    • Databricks SQL for high-performance analytics queries
    • Serverless, auto-scaling SQL warehouses for variable workloads
    • Query result caching and optimization
    • REST APIs for data serving to downstream consumers
  • Data Quality & Governance:
    • Automated data quality with Delta Live Tables expectations and Great Expectations
    • Cross-system reconciliation and validation logic
    • Fine-grained access control with column/row-level security using Unity Catalog
    • Automated data lineage tracking for regulatory compliance
    • Audit logging and 10-year data retention policies
  • Business Intelligence:
    • Amazon QuickSight and/or Databricks SQL Dashboards
    • Integration with enterprise BI tools (Tableau, Power BI, Looker)
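The cross-system reconciliation mentioned under Data Quality & Governance can be sketched as follows, again in plain Python with hypothetical record shapes. In practice checks like these would typically be expressed as Delta Live Tables expectations or Great Expectations suites rather than hand-rolled functions.

```python
# Reconcile a source system's ledger against its lakehouse copy:
# compare row counts and per-account totals, reporting any mismatches.
from collections import defaultdict

def totals_by_account(records):
    """Sum transaction amounts per account."""
    acc = defaultdict(int)
    for r in records:
        acc[r["account"]] += r["amount"]
    return dict(acc)

def reconcile(source, lakehouse):
    """Return a list of human-readable discrepancies (empty means reconciled)."""
    issues = []
    if len(source) != len(lakehouse):
        issues.append(f"row count mismatch: {len(source)} vs {len(lakehouse)}")
    src_totals = totals_by_account(source)
    lake_totals = totals_by_account(lakehouse)
    for account in sorted(src_totals.keys() | lake_totals.keys()):
        s, l = src_totals.get(account, 0), lake_totals.get(account, 0)
        if s != l:
            issues.append(f"account {account}: source {s} != lakehouse {l}")
    return issues

source = [{"account": "a1", "amount": 100}, {"account": "a2", "amount": 50}]
lakehouse = [{"account": "a1", "amount": 100}]  # missing a2's transaction

print(reconcile(source, lakehouse))
# ['row count mismatch: 2 vs 1', 'account a2: source 50 != lakehouse 0']
```

Reconciliation reports like this, together with lineage tracking and audit logging, are what make the "complete traceability" expectations of a banking data platform testable rather than aspirational.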

Tools Used

  • Version Control: GitHub
  • CI/CD: GitHub Actions
  • Infrastructure as Code: Terraform
  • Monitoring: Databricks monitoring, AWS CloudWatch integration
  • AI-Assisted Development: Claude Code, GitHub Copilot, ChatGPT

Development Structure

We operate as a small, agile team while collaborating closely with partners from the banking industry. The MIDAS team is growing rapidly, aiming to exceed 10 data engineers within the year.

Responsibilities

As a Mid-level Data Engineer on the MIDAS (Management Integration & Data Analytics System) Data Platform Team, you will build from scratch, and then maintain, the central data hub connecting most systems within one of Japan's most innovative digital banks. You will work with modern cloud-based data technologies to ingest data from various banking systems, apply complex business logic, and serve the results to downstream systems for enterprise management, regulatory reporting, risk management, and many other applications. Because expectations in the banking domain are high, you will face complex data engineering challenges, including data quality, reconciliation across systems, time-critical processing, and complete traceability. This mid-level position allows you to work with increasing independence on data pipeline development while collaborating closely with senior engineers and the technical lead for guidance on complex problems.

Requirements

  • 2-5 years of experience in data engineering or analytics engineering
  • Strong proficiency in SQL and working knowledge of Python
  • Hands-on experience building data pipelines using tools like Databricks, dbt, or similar
  • Experience with AWS, including its object storage (S3)
  • Understanding of data modeling concepts including dimensional modeling and fact/dimension tables
  • Experience with data quality validation and testing
  • Ability to debug and troubleshoot data pipeline issues
  • Experience with version control (Git) and basic understanding of CI/CD concepts
  • Understanding of basic data governance: access control and audit logging
  • Good problem-solving skills and ability to work with moderate independence
  • Good communication skills and willingness to ask questions when blocked
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience
  • Japanese: Business level (fluent; capable of handling client communication in Japanese)
  • English: Business level

Nice to haves

None of the following is specifically required, but tell us if any apply to you.

  • Experience in financial services, fintech, or regulated industries
  • Basic knowledge of banking domain concepts: core banking, payments, or regulatory reporting
  • Exposure to data platforms in regulated environments (FISC Guidelines, GDPR, APPI)
  • Hands-on experience with Databricks platform or AWS native data services
  • Experience with performance tuning: partitioning strategies, file formats, query optimization
  • Experience building REST APIs with Python (FastAPI, Flask, or similar)
  • Knowledge of streaming data pipelines (Kafka, Kinesis, or similar)
  • Basic experience with Terraform
  • Experience with BI tools (QuickSight, Tableau, Looker, PowerBI)
  • Experience with data visualization and dashboard design
  • Interest in obtaining certifications (AWS Certified Data Analytics, Databricks certifications)
  • Experience in AI development and/or in using AI tools to improve development processes
    • Money Forward recently announced our AI Strategy roadmap which focuses on improving AI-driven operational efficiencies, as well as integrating AI agents into our products to deliver better value to our users.

Compensation

¥5,508,000 ~ ¥9,504,000 annually.

Hiring Process

  1. CV screening

  2. First interview

     Depending on the position, there may be a technical assignment before the interview.

  3. Several interviews

     The number of interviews depends on the position.

  4. Final interview

     We may ask for a reference check before or after the interview.

Meet Money Forward's Developers

Kostas Mavrikis left the Netherlands to join Money Forward in October 2023. As the first non-Japanese speaker in the Fukuoka office, he's been taking the initiative on Money Forward's Englishnization program, as well as introducing Kotlin, Scrumban, and European-style coffee meetings to his team.

