Hiromi Shikata, Developer in Tokyo, Japan

Hiromi Shikata

Verified Expert in Engineering

Bio

Hiromi is a full-stack engineer and manager with 19 years of experience building scalable systems. An expert in TypeScript, Go, Python, Rust, and Clean Architecture, she brings strong skills in DevOps (GitHub Actions, automated testing), AI (OpenAI, Claude, AutoGPT), and infrastructure (AWS, GCP, Docker, Terraform). Focused on creating dynamic UIs, she works with React, shadcn, and Tailwind. Hiromi leads teams using Agile and Lean methodologies and leverages modern tech stacks for impactful results.

Portfolio

Welfare Product Replacement for Japanese Market Startup
Go, OpenAPI, AWS Lambda, AWS RDS Aurora, MySQL, SST, React, TypeScript...
AI Product Implementation at Startup
Go, Python 3, OpenAI, OpenAI API, OpenAI o1, Anthropic, AutoGen, AWS Lambda...
Startup in Japan
Python 3, OpenAI API, Anthropic, Llama 2, Chrome Extensions, AutoGen, Agile...

Experience

  • MySQL - 18 years
  • OpenAI API - 15 years
  • RESTful APIs - 10 years
  • Swagger - 9 years
  • TypeScript - 9 years
  • Go - 8 years
  • Clean Architecture - 8 years
  • Trunk-Based Development - 5 years

Availability

Part-time

Preferred Environment

Ubuntu, GitHub, Trunk-Based Development, Agile

The most amazing...

...accomplishment in my career has been contributing to the success of multiple startups that achieved successful exits.

Work Experience

Full-stack Engineer and Engineering Manager

2025 - PRESENT
Welfare Product Replacement for Japanese Market Startup
  • Resolved the inflexible and cumbersome system structure, which had become a challenge when we decided to fully pursue an opportunity discovered during a pivot.
  • Rebuilt the back end, front end, and infrastructure, which had bloated to about eight times the size needed for required functionality, to optimize for the new product.
  • Leveraged AWS Lambda to replace Kubernetes, addressing burdensome maintenance efforts and costs (a minimal handler sketch follows this entry).
  • Evaluated the existing system, which used Flutter, and decided to switch to React Native based on the team's skill sets and recruitment market research.
  • Hired new members to contribute immediately as effective team members, implementing an automated onboarding process since existing members were busy with other projects.
  • Grew the team through member referrals rather than direct recruitment. Over three years, 30+ contributors joined, with only two resigning—one for entrepreneurship, the other for family care. This retention rate reflects high team satisfaction.
  • Added assistants to boost our engineering hiring process when recruitment became challenging and conventional methods stalled. This proved effective, bringing in four strong candidates from over thirty applicants and enhancing the team.
  • Minimized non-implementation time to manage tight schedules, as there were not many technical challenges due to the nature of the product.
Technologies: Go, OpenAPI, AWS Lambda, AWS RDS Aurora, MySQL, SST, React, TypeScript, Writing & Editing, Jest, Shadcn, Clean Architecture, Domain-driven Design (DDD), GitHub Actions, GitHub, GitHub Issues, Docker, Docker Compose, Slack, Swagger, Kubernetes, Flutter, React Native
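
A minimal sketch of the kind of single-purpose Lambda handler that could replace a Kubernetes-hosted service, assuming an API Gateway trigger and the Node.js runtime; the payload shape and field names are illustrative, not taken from the actual product.

```typescript
// Hypothetical example: one service re-implemented as a single-purpose
// Lambda handler behind API Gateway. Names and payloads are illustrative only.
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEventV2,
): Promise<APIGatewayProxyResultV2> => {
  const body = event.body ? JSON.parse(event.body) : {};

  // Domain logic stays in plain functions so it remains testable with Jest
  // and independent of the Lambda runtime (Clean Architecture style).
  const result = { received: body, processedAt: new Date().toISOString() };

  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify(result),
  };
};
```

Deploying one such handler per endpoint (for example, via SST) removes the cluster maintenance the team previously carried for Kubernetes.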

Full-stack Engineer and Engineering Manager

2025 - PRESENT
AI Product Implementation at Startup
  • Progressed from R&D results to product development, advancing to full-scale implementation based on business requirements.
  • Separated the system architecture into a data and process management system and an AI-focused system, keeping areas that require strict programmatic control out of dynamically typed Python for more stable behavior.
  • Built a serverless architecture using AWS Lambda as the execution environment to handle the simultaneous processing of thousands of requests required by the product.
  • Implemented a custom queue dispatcher to overcome scaling limitations once the number of Lambda invocations triggered from SQS reached its cap, even though Lambda's individual limits were not an issue (a dispatcher sketch follows this entry).
  • Evaluated various models regularly and switched to better-performing ones as LLM options increased, with monthly improvements in AI performance and cost reductions.
  • Architected an automated, asynchronous management system to handle the increased overhead as successful recruitment grew the team to approximately 16 members.
  • Implemented remote work with flexible hours and no fixed meetings, using goal tracking and performance metrics. Streamlined engineering management to 2.5 hours daily, aiming to reduce it to under one hour.
  • Introduced a daily evaluation system that uncovered previously undetectable issues, enabling problem detection and resolution by the following day despite the fully flexible environment.
  • Implemented accurate individual engineer performance evaluations, enabling quantitative assessment for rate adjustments (increases or decreases) and team composition decisions, resulting in an estimated doubling of team cost efficiency.
Technologies: Go, Python 3, OpenAI, OpenAI API, OpenAI o1, Anthropic, AutoGen, AWS Lambda, Amazon Simple Queue Service (SQS), AWS RDS Aurora, SST, React, Shadcn, TypeScript, Clean Architecture, Agile, GitHub API, GitHub, GitHub Issues, GitHub Projects, Amazon DynamoDB, Docker Compose, Docker, Gemini, Slack, Jest, Swagger, OpenAPI, RESTful APIs, Writing & Editing, Selenium, PostgreSQL
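
A minimal sketch of the custom queue execution mentioned above, assuming the AWS SDK for JavaScript v3; the queue URL, worker function name, and concurrency cap are placeholders, and the production retry and error handling are omitted.

```typescript
// Hypothetical dispatcher: polls SQS itself and fans out to a worker Lambda
// with an explicit concurrency cap, instead of relying on the built-in SQS trigger.
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const sqs = new SQSClient({});
const lambda = new LambdaClient({});
const QUEUE_URL = process.env.QUEUE_URL!;       // placeholder
const WORKER_FUNCTION = process.env.WORKER_FN!; // placeholder
const MAX_IN_FLIGHT = 200;                      // illustrative cap

let inFlight = 0;

async function dispatch(body: string, receiptHandle: string): Promise<void> {
  inFlight++;
  try {
    // "Event" = asynchronous invocation; the worker Lambda does the real work.
    await lambda.send(
      new InvokeCommand({
        FunctionName: WORKER_FUNCTION,
        InvocationType: "Event",
        Payload: Buffer.from(body),
      }),
    );
    // Only delete the message once the invocation has been accepted.
    await sqs.send(
      new DeleteMessageCommand({ QueueUrl: QUEUE_URL, ReceiptHandle: receiptHandle }),
    );
  } finally {
    inFlight--;
  }
}

export async function poll(): Promise<void> {
  for (;;) {
    if (inFlight >= MAX_IN_FLIGHT) {
      await new Promise((r) => setTimeout(r, 100)); // back off while saturated
      continue;
    }
    const { Messages } = await sqs.send(
      new ReceiveMessageCommand({
        QueueUrl: QUEUE_URL,
        MaxNumberOfMessages: 10,
        WaitTimeSeconds: 20, // long polling
      }),
    );
    for (const message of Messages ?? []) {
      if (message.Body && message.ReceiptHandle) {
        void dispatch(message.Body, message.ReceiptHandle);
      }
    }
  }
}
```

The built-in SQS trigger scales on its own terms; polling the queue directly and invoking the worker asynchronously lets the dispatcher decide how far to fan out.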

Full-stack Engineer and Engineering Manager

2024 - 2024
Startup in Japan
  • Built autonomous systems for task execution. LLMs struggled with multistep tasks, forgetting steps or making mistakes beyond three or four steps. Optimized prompts and task structures to fix the issues, achieving over 90% success in testing.
  • Enhanced LLM task execution stability by identifying common error patterns based on task characteristics and addressing them through task decomposition and examples.
  • Modified the system to integrate with OpenAI and two other major AI providers' APIs. Implemented multi-agent frameworks for complex tasks with autonomous correction, resulting in more versatile and reliable solutions.
  • Led a team of four to six members, initially struggling with task allocation. With minimal AI knowledge in the team, we conducted research and met tight deadlines. Implemented agile processes to enable remote collaboration across multiple time zones.
  • Enhanced client and investor demos with intuitive AI process visualizations to mitigate uncanny behavior. Introduced configurable settings for explanatory displays to optimize performance.
  • Tested computer control via the operating system, focusing on visual interpretation. Single LLMs struggled with visual tasks, and multimodal systems performed better but lacked precision. Due to low success rates, alternative methods were explored.
  • Explored retail product and origin detection techniques from five to six years earlier together with field specialists. While browser operations remained the primary focus, development centered on web interface control due to the instability of image-based guidance.
  • Tested a Chrome OSS extension for 3-step tasks but halted progress due to the high costs of integrating it with existing solutions. Developed methods to reduce token usage while preserving core information and staying within prompt limits.
  • Achieved a 60–70% success rate with source code generation for automated operations. Improved to nearly 100% by implementing AI-friendly wrapper functions with programmatic internal processing to resolve a specific type of task (an example wrapper follows this entry).
  • Focused on testing new LLMs, frameworks, and products in a rapidly evolving landscape. Team development led to peak costs of $75 per hour, prompting an investigation into open-source LLMs and newer options like Grok for cost and speed optimization.
Technologies: Python 3, OpenAI API, Anthropic, Llama 2, Chrome Extensions, AutoGen, Agile, AWS Lambda, Amazon Simple Queue Service (SQS), Amazon DynamoDB, GitHub, Trunk-Based Development, Ubuntu, Remote Work, Flexible Work, Lean, GitHub Copilot Chat, GitHub Actions, Sonnet 3.5, OpenAI GPT-4 API
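
A minimal sketch of the "AI-friendly wrapper function" idea described above: the generated code calls one forgiving function, while normalization, waiting, and retries live in ordinary deterministic code. The function name, parameters, and retry policy here are assumptions for illustration, not the project's actual code.

```typescript
// Hypothetical "AI-friendly" wrapper: the LLM-generated script calls a single
// forgiving function, while the deterministic details are handled internally.
type ClickFn = (selector: string) => Promise<boolean>; // low-level click, true on success

export async function clickByVisibleText(
  text: string,
  findSelectorByText: (normalized: string) => Promise<string | null>,
  click: ClickFn,
  retries = 3,
): Promise<void> {
  // Normalize the label so small variations in the generated code
  // ("Submit ", "submit") do not cause failures.
  const normalized = text.trim().toLowerCase();

  for (let attempt = 1; attempt <= retries; attempt++) {
    const selector = await findSelectorByText(normalized);
    if (selector && (await click(selector))) {
      return;
    }
    // Wait before retrying; pages often finish rendering between attempts.
    await new Promise((r) => setTimeout(r, 500 * attempt));
  }
  throw new Error(`clickByVisibleText: could not click element labeled "${text}"`);
}
```

Because the wrapper absorbs small variations in how the model phrases a call, the generated scripts fail far less often on multistep tasks.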

Experience

Streamlining Code

https://github.com/HiromiShikata/ast-to-entity-definitions
This tool allows you to effortlessly generate EntityDefinition and Entity Property Definition from your TypeScript type information. You can streamline source code generation for GraphQL files, handlers, repositories, diagrams, and other project-specific generated code.
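
The repository itself documents the exact API; as an illustration only, the snippet below shows the kind of extraction involved, using ts-morph to read property names and types from a TypeScript interface. None of the identifiers here are taken from ast-to-entity-definitions.

```typescript
// Illustrative only: read property names and types from a TypeScript type,
// the raw material for generating entity definitions. Uses ts-morph, not the
// actual internals of ast-to-entity-definitions.
import { Project } from "ts-morph";

const project = new Project();
const source = project.createSourceFile(
  "entities.ts",
  `export interface User { id: string; name: string; createdAt: Date; }`,
);

for (const iface of source.getInterfaces()) {
  const properties = iface.getProperties().map((p) => ({
    name: p.getName(),
    propertyType: p.getType().getText(),
  }));
  console.log({ typeName: iface.getName(), properties });
  // => { typeName: 'User', properties: [ { name: 'id', propertyType: 'string' }, ... ] }
}
```

Structures like this are the raw material from which project-specific generators can emit GraphQL schemas, handlers, repositories, or diagrams, as described above.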

Skills

Libraries/APIs

GitHub API, OpenAI API, React, OpenAPI

Tools

GitHub, Amazon Simple Queue Service (SQS), OpenAI o1, Shadcn, Docker Compose, Slack, NPM

Languages

TypeScript, Go, Python 3

Paradigms

Agile, Clean Architecture

Platforms

Ubuntu, AWS Lambda, Docker, Kubernetes

Storage

Amazon DynamoDB, MySQL, PostgreSQL

Frameworks

AutoGen, SST, Jest, Swagger, Selenium, Flutter, React Native

Other

Remote Work, Flexible Work, Lean, GitHub Projects, Domain-driven Design (DDD), Trunk-Based Development, Llama 2, Chrome Extensions, GitHub Copilot Chat, Sonnet 3.5, OpenAI GPT-4 API, OpenAI, Anthropic, AWS RDS Aurora, GitHub Issues, Gemini, RESTful APIs, Writing & Editing, GitHub Actions
