We Build Algorithms That Understand Market Patterns

Founded in 2019 as a single research project in Buôn Ma Thuột, we've grown into a team that helps institutional traders make sense of market volatility through machine learning.

6 Years Active
14 Team Members
2021 First Vietnam Client

How We Started

Back in 2019, our founder was working on PhD research into pattern recognition in financial data. What began as an academic exercise turned into something practical when a local brokerage firm asked whether the algorithms could work with live market feeds.

Turns out they could. And that's how onlogicwave began—not with a grand vision, but with a working prototype and one curious client.

What We Actually Do

We build machine learning systems that process high-frequency trading data. Specifically, we train neural networks to identify patterns in price movements, order flow, and market microstructure that might indicate short-term directional bias.

Our systems don't predict the future—they calculate probabilities based on historical patterns and current market conditions. Then they present those probabilities to traders who make the final decisions.
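The "probabilities, not predictions" framing can be made concrete with a toy example: a logistic transform is one standard way to map a raw model score onto a probability of short-term upward bias. The function and score below are illustrative, not our production model.

```python
import math

def directional_probability(score: float) -> float:
    """Map a raw model score to a probability of short-term upward bias.

    A logistic (sigmoid) transform is one simple way to present raw
    scores as probabilities; a real system would calibrate this mapping
    against historical outcomes rather than use it as-is.
    """
    return 1.0 / (1.0 + math.exp(-score))

# A score of 0 means "no edge either way": probability 0.5.
p = directional_probability(0.8)
```

The trader sees `p`, not a buy/sell command — the decision stays with the human.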

  • Real-time market data processing and feature extraction
  • Custom neural network architectures for time-series analysis
  • Backtesting frameworks with realistic execution assumptions
  • Risk management tools that adapt to changing volatility
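To make the first bullet concrete, here is a minimal sketch of rolling feature extraction from a price series. The feature names and window length are illustrative; a production feature set is far richer.

```python
import numpy as np

def extract_features(prices: np.ndarray, window: int = 20) -> dict:
    """Compute a few common rolling features from a price series."""
    log_returns = np.diff(np.log(prices))
    recent = log_returns[-window:]
    return {
        "mean_return": float(recent.mean()),      # drift over the window
        "volatility": float(recent.std(ddof=1)),  # realized-volatility proxy
        "momentum": float(prices[-1] / prices[-window] - 1.0),
    }

prices = np.array([100.0, 100.5, 99.8, 101.2, 102.0, 101.5] * 5)
feats = extract_features(prices, window=10)
```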

Who We Work With

Our typical clients are prop trading desks and institutional traders who already have technical infrastructure but need better analytical tools. They understand markets and they understand statistics—they just need algorithms that can process more data faster than humans can.

Most of our work involves customizing our base models to fit specific trading strategies and market conditions. Every trading desk has different preferences for risk, time horizons, and asset classes.

Technical Focus

We work primarily with Python for research and C++ for production systems. Our models use a mix of traditional time-series techniques (ARIMA, GARCH) and modern deep learning approaches (LSTMs, transformers, attention mechanisms).
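For a flavor of the traditional side, the core of a GARCH(1,1) volatility model is a one-line recursion: next-period variance is a weighted mix of a baseline, the latest squared return, and the current variance. The parameter values below are illustrative defaults, not fitted estimates.

```python
def garch_update(sigma2_prev: float, ret_prev: float,
                 omega: float = 1e-6, alpha: float = 0.08,
                 beta: float = 0.9) -> float:
    """One GARCH(1,1) step: next-period return variance.

    sigma2_next = omega + alpha * ret_prev**2 + beta * sigma2_prev
    """
    return omega + alpha * ret_prev ** 2 + beta * sigma2_prev

# Previous variance 1e-4 (about 1% daily vol) and a 1% return:
sigma2 = garch_update(1e-4, 0.01)  # -> 9.9e-05
```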

Training happens on GPU clusters, but inference needs to run on standard trading infrastructure with microsecond-level latency requirements. That constraint shapes every architectural decision we make.

The Team Behind The Algorithms

We're a small group of quantitative researchers and software engineers. Most of us have backgrounds in computational finance, applied mathematics, or distributed systems.

Linh Phương

Machine Learning Engineer

Handles model training infrastructure and GPU optimization. Previously worked on computer vision systems at a Hanoi tech company.

Khánh An

Systems Architect

Builds the production trading systems. Specializes in low-latency C++ and making sure models can run fast enough for real trading.

Hoàng Minh

Data Engineer

Manages market data pipelines and database infrastructure. Makes sure we have clean, reliable data for training and backtesting.

Trung Quân

Research Analyst

Tests new model architectures and feature engineering approaches. Runs backtests and analyzes what works and what doesn't.

Our Technical Approach

Trading systems need to be fast, accurate, and resilient. Here's how we balance those competing requirements.


Model Development Process

We start with research—lots of it. Our team analyzes market data to identify patterns that might have predictive value. Then we build simple models to test those patterns. Most ideas don't work, but that's fine. The ones that do work get refined through iterative testing.

We use a rolling window approach for validation, always testing on data the model hasn't seen before. Training happens on historical data from multiple market regimes to avoid overfitting to recent conditions.
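The rolling-window validation described above can be sketched as a walk-forward split generator: each fold trains on a fixed-length window and tests on the slice of data immediately after it, so the model never sees its own test period. Names and sizes here are illustrative.

```python
def walk_forward_splits(n_samples: int, train_size: int, test_size: int):
    """Yield (train_idx, test_idx) ranges for walk-forward validation.

    The train window slides forward by one test period per fold, so
    every test slice is strictly out-of-sample.
    """
    start = 0
    while start + train_size + test_size <= n_samples:
        train = range(start, start + train_size)
        test = range(start + train_size, start + train_size + test_size)
        yield train, test
        start += test_size

splits = list(walk_forward_splits(100, train_size=60, test_size=10))
# 4 folds; the last fold tests on samples 90..99.
```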

Historical Data Analysis · Out-of-Sample Testing · Cross-Validation · Regime Detection

Production Infrastructure

Research models are written in Python. Production systems are written in C++. That's just how it is when you need microsecond latency.

Our infrastructure team maintains a translation layer that converts trained models into optimized C++ code. They also handle all the monitoring, failover systems, and data pipelines that keep things running 24/7.
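One way such a translation layer can hand models from Python to C++ is a flat binary dump of the trained weights that a C++ loader reads without any Python runtime. The format below (matrix count, then rows/cols and row-major float32 data per matrix) is a hypothetical sketch, not our actual wire format.

```python
import struct

def export_weights(weights, path):
    """Write weight matrices (nested lists) to a flat little-endian binary:
    u32 matrix count, then per matrix: u32 rows, u32 cols, rows*cols f32."""
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(weights)))
        for w in weights:
            rows, cols = len(w), len(w[0])
            f.write(struct.pack("<II", rows, cols))
            for row in w:
                f.write(struct.pack(f"<{cols}f", *row))
```

A C++ consumer can then mmap the file and index straight into the float data, keeping parsing off the hot path.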

Low Latency Design · Redundant Systems · Real-Time Monitoring · Automated Recovery

Risk Management Integration

Every model we deploy includes built-in risk controls. Position limits, drawdown thresholds, correlation monitors—all the standard stuff. But we also build custom risk logic based on each client's specific requirements and market conditions.

Risk management isn't an afterthought. It's designed into the system from the beginning, and it can override any trading signal if conditions warrant.
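A stripped-down version of that override logic: check the hard limits first, and only then let the signal through, clipped to the remaining position headroom. The thresholds and the flat-on-drawdown rule are illustrative simplifications, not client logic.

```python
def apply_risk_controls(signal: float, position: float, max_position: float,
                        drawdown: float, max_drawdown: float) -> float:
    """Risk layer that can override any trading signal.

    - Drawdown breach: return 0.0 (no new risk, regardless of signal).
    - Otherwise: clip the order to the headroom left under the position
      limit (a deliberately conservative simplification).
    """
    if drawdown >= max_drawdown:
        return 0.0
    headroom = max_position - abs(position)
    return max(-headroom, min(headroom, signal))
```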

Dynamic Position Sizing · Drawdown Protection · Correlation Analysis · Market Impact Modeling

What Matters To Us

Trading is a tough business. Systems fail, markets change, and nothing works forever. Here's how we approach those challenges.

Data Quality

Models are only as good as the data they're trained on. We spend a lot of time cleaning, validating, and sanity-checking market data.
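Most of that sanity-checking comes down to mundane invariants. A minimal checker over (timestamp, price, volume) tuples might look like this; the fields and rules are illustrative.

```python
def validate_ticks(ticks):
    """Return (index, problem) pairs for basic tick-data invariants:
    positive prices, non-negative volume, non-decreasing timestamps."""
    issues = []
    last_ts = None
    for i, (ts, price, vol) in enumerate(ticks):
        if price <= 0:
            issues.append((i, "non-positive price"))
        if vol < 0:
            issues.append((i, "negative volume"))
        if last_ts is not None and ts < last_ts:
            issues.append((i, "timestamp out of order"))
        last_ts = ts
    return issues
```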

Scientific Method

Every hypothesis gets tested. Every model gets validated. We document what worked and what didn't for future reference.

Performance

Speed matters in high-frequency trading. We optimize relentlessly and measure everything in microseconds.

Reliability

Trading systems need to work when markets are volatile. We design for resilience and test failure scenarios extensively.

Want To Discuss A Project?

We typically work with institutional clients on custom algorithm development. If you have a specific trading strategy you want to automate or a research question about market patterns, let's talk.