In an era plagued with uncertainty and constant volatility, the reliance on advanced trading technology to remain profitable has never been greater. Many trading firms risk being left behind, along with their trading margins, if they are unable to stay competitive during record-breaking volumes and market turbulence.


As we’ve written previously, if FPGA is becoming the standard for low latency trading — then why are so many market participants still reliant on software trading stacks? What’s holding them back from making the move to FPGA?


Here are the five questions I get asked most often about trading with FPGAs by firms that are hesitant to make the jump, and why I think that delaying the inevitable while the rest of the market moves forward will be a costly error in the long run.



Q1. “I know firms are now trading in ASICs, if I’m going to move from software, why even bother with FPGA?”


If I’m having a drink at the bar with industry colleagues, the conversation usually trends toward using ASICs for trading. While this is a relatively new trend, not all of the challenges are apparent from the outset and I feel like it’s worth sharing my thoughts on this topic.


The first thing that comes to mind is the balance between cost and flexibility. These are inherently linked when considering any technology. There is no doubt that if money is not an issue, then ASICs are the way to go, since you can optimize your strategy and run it at gigahertz clock rates, compared with the hundreds-of-megahertz ceiling of FPGAs. Unfortunately for most of us, the multi-million dollar budgets needed to design, test, and deploy ASICs, on top of the constant infrastructure and technology upgrades required to make the most of that investment, are simply out of reach.


Additionally, in order for the latency benefits of ASICs to be worthwhile, a firm’s entire end-to-end infrastructure needs to be optimized to take advantage of every picosecond. Using ASICs to trade with signals coming from a remote exchange requires the fastest link to that exchange. This can mean using either a proprietary link, or if available, a third party, low latency wireless connection — both of which come with the associated costs of premium network services. When trading on a local venue, each ASIC needs to be connected directly via its own handoff in order to avoid passing through a switch, which would negate any latency advantage. This increases overall connectivity costs. In short, the costs associated with an ASIC deployment don’t stop at fabrication and become a constant drive to squeeze every picosecond from the infrastructure.


This brings us to flexibility. A firm looking at a fully customized ASIC must consider that fabricating an ASIC is a one-time operation. If a bug surfaces or your strategy needs to change, you incur the overhead of refabricating your design, which must then be retested and redeployed. Each repetition of this process delays the overall time it takes to get to production and means less time devoted to driving forward other, more impactful initiatives.



Q2. “There are firms now trading in under 50 nanoseconds, why are the latency numbers much higher for most vendor FPGA solutions?”


It’s true that a complete tick-to-trade in 50ns is impressive and quite difficult to achieve, but what do you get for that 50ns? What type of strategy does that latency support? My guess is a pretty simple one: the physics of serialization at 10 Gb/s make it impossible to add any real intelligence.


In general, the simpler the strategy, the higher the competition and the more “winner-take-all” it becomes. Earlier this year, the Wall Street Journal reported on a study by the Financial Conduct Authority (FCA), the U.K.’s financial regulator, which found that, “More than 80% of races in FTSE 100 stocks were won by the same half-dozen firms”. To further complicate things, in order for the cost to be worth the investment, firms compete within a very small subset of the market and are limited to only these types of trading strategies. With little insight into where your latency sits compared to the competition, it’s an extremely high risk investment without a clear opportunity for success.


Not mentioned in the WSJ article is the level of investment needed by these firms to not only get in the game, but to stay there. As mentioned previously, getting a sub-50ns solution is only a fraction of the overall architecture. Your entire end-to-end infrastructure — including dedicated market data and execution paths, connectivity, your Layer 1 switches, and your network connections to other data centers — needs to be top-notch in order to truly benefit from such an investment.


If today you are using software for your trading strategies, a first step would be to gradually transition to FPGA-based trading while keeping the majority of your software-based functionality, and improving latency. My advice would be to optimize your existing strategies, against your known competition, with the key goal of being faster than the other market participants at the same level.



Q3. “Ok, but I only have software/C++ developers, can I even get into this FPGA game?”


To put it simply — the answer is yes, absolutely.


If your firm has reached the limit for what an end-to-end, software-based solution can achieve, then the next logical step is to accelerate parts of the processing that are known to be slow when running on CPUs – like market data or execution processing. FPGA is perfect for this.


Upgrading your technology doesn’t have to be overwhelming, high risk, or require millions of dollars of upfront investment. The goal is to be faster than your competitors for a given strategy. By integrating FPGA technology incrementally, you can start with a solution that leverages the FPGA for the core processing while maintaining software interfaces to integrate with your trading strategies. FPGAs are really good at crunching market data or formatting orders, so this approach can provide a latency boost within a few weeks, at relatively low cost and with little to no need for in-house FPGA development.
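As an illustration of what “FPGA core processing with software interfaces” can look like, here is a minimal sketch. The names (`BookUpdate`, `SpscRing`) are hypothetical, not Enyx’s or any vendor’s actual API, and the “FPGA side” is simulated in software so the example is self-contained; in a real deployment the card would fill this buffer over DMA and the strategy would only ever see normalized updates.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Normalized book update, as the FPGA would deliver it after
// decoding and book-building (illustrative layout).
struct BookUpdate {
    uint64_t instrument_id;
    int64_t  best_bid;   // prices in fixed-point ticks
    int64_t  best_ask;
};

// Single-producer / single-consumer ring buffer: the FPGA (producer)
// writes updates, the C++ strategy thread (consumer) polls them.
template <size_t N>
class SpscRing {
public:
    bool push(const BookUpdate &u) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                        // full: drop or back-pressure
        buf_[head] = u;
        head_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(BookUpdate &out) {
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                        // empty: nothing to trade on
        out = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }
private:
    BookUpdate buf_[N];
    std::atomic<size_t> head_{0}, tail_{0};
};
```

The point of the pattern is that the strategy code stays ordinary C++: it loops on `pop()` and never touches exchange protocols, which is exactly the part the FPGA has taken over.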


Once this is done, the next step is to “dip a toe in the water” by moving simple, yet latency critical logic into the FPGA — it could be a tick-to-cancel or a simple hedging strategy. Here again, you can leverage your C++ talents by using technology like High Level Synthesis (HLS), from Intel or Xilinx, which, when working within defined boundaries, is very efficient at turning software code into hardware logic.
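To make that concrete, here is a sketch of what HLS-style C++ can look like for a tick-to-cancel-style trigger; the struct, names, and decision logic are all illustrative assumptions, not a vendor API. A plain C++ compiler simply ignores the HLS pragma, while the synthesis tools use it to pipeline the function into logic that can evaluate one tick per clock cycle.

```cpp
#include <cstdint>

// Illustrative tick fields as a decoded feed might present them.
struct Tick {
    uint64_t instrument_id;
    int64_t  best_bid;   // price in fixed-point ticks
};

// Hypothetical tick-to-cancel trigger: fire a cancel when the best bid
// for the instrument we are protecting drops through our stop price.
bool should_cancel(const Tick &tick, int64_t stop_price,
                   uint64_t watched_instrument) {
#pragma HLS PIPELINE II=1   // HLS directive; a software compiler ignores it
    return tick.instrument_id == watched_instrument
        && tick.best_bid <= stop_price;
}
```

Logic this simple, with fixed-size types and no dynamic memory, is the sweet spot where HLS reliably produces efficient hardware; the same source doubles as a software model you can unit-test before synthesis.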


The main takeaway is that latency competition exists at different levels and if you’re able to react even 100 nanoseconds faster than your competitors, it could be the difference between winning the race and getting beat to the finish line.



Q4. “I’ve heard it takes many years to develop in FPGA, will the market have changed by the time I’m up and running?” 


When asked this question, my first response is always, “It depends”, which I know isn’t a satisfying answer. The truth is your go-to-market timeline is fully dependent on where you set your expectations. If you still have a desire to trade at sub-50ns, despite the competition in the market, then you’re looking at a multi-year project before you can go to production. This of course depends on your ability to hire (and retain!) a team of top-notch hardware developers, to constantly re-evaluate and deploy the latest hardware, to invest in evolving trading connectivity solutions, and to slowly lose your sanity as you realize that all the other firms in this range have been improving their architectures while you were playing catch-up.


However, I believe that there is a wealth of untapped value between 50ns and 5µs — where much of the industry is missing a potential opportunity.


This opportunity arises because FPGA solutions don’t have to be complicated to integrate into an existing infrastructure. For example, you can swap out your software feed handler for a full hardware one in a matter of weeks, with software-only development. This will immediately yield gains in overall performance and better handling of extremely volatile markets. Following an incremental, step-by-step approach — you can continue to improve your overall infrastructure and take advantage of those benefits along the way.



Q5. “Given all of this, how can Enyx help me move from software to FPGA trading?”


At Enyx, we understand that upgrading your firm’s trading infrastructure is a journey. Introducing any new technology, including FPGA, is something that doesn’t happen all at once and requires commitment, vision and dedication to complete. Each firm will be at varying stages along this path, with specific goals, and a different set of needs in order to achieve success.



Our aim is to provide a technology stack where you can easily build upon previous integrations as your strategy advances. To accomplish this, we specifically designed our product offering to help firms at every step of the FPGA adoption process: 


  • Acceleration of market data processing with nxFeed
  • Deployment of trading logic completely within the FPGA with nxAccess
  • Development of a full custom FPGA application with nxFramework


We believe that our value doesn’t stop at providing a comprehensive set of technologies; it extends to helping the firms that we work with implement them successfully.




Regardless of where a firm is on its journey to FPGA deployment, we adapt to each client’s needs and expertise by offering three distinct integration packages:


  • Do it yourself:

    Tailored for firms with in-house hardware expertise, this approach includes our standard support package to ensure that we can help with any questions that arise during development and integration.


  • Jump start:

    This approach is designed to help our clients reduce their time-to-market by including the added benefit of professional services. Enyx can develop hardware and software logic according to a firm’s specifications using the FPGA development language of choice (HLS or HDL). The client can then maintain and modify the application as needed. To ensure a smooth transition, Enyx will hand over the deliverables with documentation and reference designs.


  • Fully maintained:

    This end-to-end approach was designed for firms with little-to-no hardware expertise that wish to adopt FPGA technology without having to develop it themselves. Enyx will provide professional services to develop specifications, and to design and maintain any custom logic.


In terms of FPGA development, we try to be as flexible as possible, offering everything from fully maintained options to simple support for a firm’s own FPGA development and implementation team.


The goal of Enyx is to extend our experience to your firm, making the onboarding of FPGA technology as easy as possible while keeping you competitive in today’s market. I believe this technology can provide real opportunities, both in the short and long term.


If you can quickly gear your trading infrastructure for faster reaction times, greater capacity to absorb higher volatility, and the most accurate view of the market possible, you are able to make sound investments that open up opportunities.




Originally posted on LinkedIn

