How We Started

Back in 2019, our founder was working on his PhD research on pattern recognition in financial data. What began as an academic exercise turned into something practical when a local brokerage firm asked whether the algorithms could work with live market feeds.

Turns out they could. And that's how onlogicwave began: not with a grand vision, but with a working prototype and one curious client.

What We Actually Do

We build machine learning systems that process high-frequency trading data. Specifically, we train neural networks to identify patterns in price movements, order flow, and market microstructure that might indicate short-term directional bias.

Our systems don't predict the future; they calculate probabilities based on historical patterns and current market conditions. Then they present those probabilities to traders, who make the final decisions.
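
As a minimal illustration of that contract, the sketch below (all names and numbers are hypothetical) turns a model's raw scores for a three-way directional call into probabilities with a softmax; production systems are more involved, but the output is the same kind of object: a distribution, not a forecast.

```python
import numpy as np

def directional_probabilities(logits: np.ndarray) -> dict:
    """Convert raw model scores into a probability distribution
    over short-term directional outcomes via a softmax."""
    # Subtract the max score for numerical stability before exponentiating.
    weights = np.exp(logits - logits.max())
    probs = weights / weights.sum()
    return dict(zip(["down", "flat", "up"], probs))

# Hypothetical raw scores from an upstream model:
print(directional_probabilities(np.array([0.2, 0.1, 1.1])))
# -> {'down': 0.229..., 'flat': 0.207..., 'up': 0.563...}
```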
                        
                        
- Real-time market data processing and feature extraction (see the sketch after this list)
- Custom neural network architectures for time-series analysis
- Backtesting frameworks with realistic execution assumptions
- Risk management tools that adapt to changing volatility
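
To make the first item concrete, here is a minimal sketch of rolling feature extraction over a tick feed; the column names, window sizes, and features are illustrative assumptions, not our production set.

```python
import pandas as pd

def extract_features(ticks: pd.DataFrame) -> pd.DataFrame:
    """Derive simple rolling features from a tick-level feed.

    Assumes `ticks` has 'price' and 'size' columns indexed by timestamp.
    """
    feats = pd.DataFrame(index=ticks.index)
    ret = ticks["price"].pct_change()
    feats["ret_1"] = ret                         # last-tick return
    feats["vol_50"] = ret.rolling(50).std()      # short-window volatility
    feats["vwap_50"] = (
        (ticks["price"] * ticks["size"]).rolling(50).sum()
        / ticks["size"].rolling(50).sum()
    )                                            # rolling VWAP
    feats["flow_50"] = ticks["size"].rolling(50).sum()  # crude activity proxy
    return feats.dropna()
```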
                           
 
                        
                     
                     
                        
Who We Work With

Our typical clients are prop trading desks and institutional traders who already have technical infrastructure but need better analytical tools. They understand markets and they understand statistics; they just need algorithms that can process more data faster than humans can.

Most of our work involves customizing our base models to fit specific trading strategies and market conditions. Every trading desk has different preferences for risk, time horizons, and asset classes.
                        
                     
                     
                        
Technical Focus

We work primarily with Python for research and C++ for production systems. Our models use a mix of traditional time-series techniques (ARIMA, GARCH) and modern deep learning approaches (LSTMs, transformers, attention mechanisms).
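
To give a flavor of the deep-learning side, here is a minimal PyTorch LSTM classifier of the general shape used for this kind of time-series work; the layer sizes, window length, and three-class head are placeholders, not our production architecture.

```python
import torch
from torch import nn

class DirectionalLSTM(nn.Module):
    """Minimal LSTM classifier: a window of per-tick features in,
    logits over {down, flat, up} out. All sizes are illustrative."""

    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # three-way directional call

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, n_features)
        out, _ = self.lstm(x)
        # Classify from the hidden state at the final timestep.
        return self.head(out[:, -1, :])

model = DirectionalLSTM()
logits = model(torch.randn(32, 100, 8))  # 32 windows of 100 ticks each
```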
                        
                        
Training happens on GPU clusters, but inference needs to run on standard trading infrastructure with millisecond latency requirements. That constraint shapes every architectural decision we make.
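
One place that constraint shows up is benchmarking. Below is a sketch of the kind of tail-latency check a candidate model would have to pass; the run count, percentile, and budget are hypothetical.

```python
import time
import numpy as np

def p99_latency_us(predict, features, runs: int = 1000) -> float:
    """Measure 99th-percentile single-inference latency in microseconds.

    `predict` is any callable taking one feature window; the run count
    and percentile target here are illustrative choices.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        predict(features)
        samples.append((time.perf_counter() - start) * 1e6)
    return float(np.percentile(samples, 99))

# A model only ships if its tail latency fits the desk's budget, e.g.:
# assert p99_latency_us(model_predict, window) < 500  # hypothetical 500 us budget
```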