The primary objectives for the Prodigy Matching Engine are:
- Prodigy should function around the clock and be tolerant of faults.
- Prodigy should horizontally scale to meet growing requirements.
- Prodigy should be easily adapted to suit a variety of needs.
- Prodigy should perform tasks within a reasonable timeframe.
- Prodigy should not involve unnecessary third-party dependencies.
The Prodigy Exchange consists of three main services plus four support services.
The three main services that form Prodigy are the Gateway, the Web Service, and the Worker.
- Prodigy Gateway
The Gateway is the primary streaming data interface, where market and execution activity are distributed, and where order instructions can be submitted.
The Gateway utilises the FIX 5.0 protocol (see the message sketch after this list).
- Prodigy Web Service
The Web Service provides market control operations and querying of current market state and historical activity.
The Web Service utilises standard HTTP requests (see the example after this list).
- Prodigy Worker
The Worker applies execution instructions and market control operations to the market state. This is where the matching engine performs its work (a simplified sketch follows this list).
The Worker is internal to Prodigy and does not have any external-facing interfaces.
- Prodigy Monitor
The Monitor performs maintenance tasks to ensure prompt, orderly failover between Workers.
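To make the Gateway's wire format concrete, here is a minimal sketch of FIX tag=value framing in Python. All field values are invented, and session-level fields such as MsgSeqNum (34) and SendingTime (52) are omitted for brevity; note that FIX 5.0 runs over the FIXT.1.1 session layer, hence the BeginString.

```python
SOH = "\x01"  # FIX field delimiter

def build_fix_message(msg_type: str, fields: list[tuple[int, str]]) -> str:
    # BodyLength (9) counts every byte after its own field, up to CheckSum (10).
    body = f"35={msg_type}{SOH}" + "".join(f"{tag}={val}{SOH}" for tag, val in fields)
    head = f"8=FIXT.1.1{SOH}9={len(body)}{SOH}"
    checksum = sum((head + body).encode()) % 256  # byte sum modulo 256
    return f"{head}{body}10={checksum:03d}{SOH}"

# A NewOrderSingle (35=D) with illustrative values.
order = build_fix_message("D", [
    (49, "CLIENT1"),   # SenderCompID
    (56, "PRODIGY"),   # TargetCompID
    (11, "ORD-0001"),  # ClOrdID
    (55, "ABC"),       # Symbol
    (54, "1"),         # Side: 1 = Buy
    (38, "100"),       # OrderQty
    (40, "2"),         # OrdType: 2 = Limit
    (44, "10.50"),     # Price
])
print(order.replace(SOH, "|"))  # print with visible delimiters
```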
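Interacting with the Web Service is an ordinary REST call. The host and route below are invented purely to show the shape of a query:

```python
import requests

# Hypothetical endpoint; actual routes depend on the deployment.
response = requests.get(
    "http://prodigy.example.com/api/markets/ABC/orderbook",
    params={"depth": 10},
    timeout=5,
)
response.raise_for_status()
print(response.json())
```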
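And to make the Worker's role concrete, below is a highly simplified sketch of price-time priority matching, the general technique at the heart of a matching engine. It is illustrative only, not Prodigy's actual Execution Engine:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    price: float
    qty: int

def match_buy(incoming: Order, asks: dict[float, deque]) -> list[tuple[str, int, float]]:
    """Match an incoming buy against resting asks in price-time priority."""
    fills = []
    for price in sorted(asks):  # best (lowest) ask level first
        if price > incoming.price or incoming.qty == 0:
            break
        level = asks[price]
        while level and incoming.qty > 0:
            resting = level[0]  # oldest order at this level first
            traded = min(incoming.qty, resting.qty)
            fills.append((resting.order_id, traded, price))
            incoming.qty -= traded
            resting.qty -= traded
            if resting.qty == 0:
                level.popleft()
    return fills  # a real book would also prune empty levels and report status

asks = {10.50: deque([Order("A1", 10.50, 60)]), 10.55: deque([Order("A2", 10.55, 100)])}
print(match_buy(Order("B1", 10.55, 100), asks))
# [('A1', 60, 10.5), ('A2', 40, 10.55)]
```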
Prodigy utilises PostgreSQL 10 (or higher) for persistent storage of market activity and for managing FIX session state.
These two tasks are independent and can be hosted on separate instances for reliability and performance.
Prodigy utilises RabbitMQ for communication between the high-level components. It ensures requests are sent to the correct Worker and distributes market activity back to the Gateways.
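As an illustration of that routing, a topic exchange keyed by symbol would let each Worker consume only the symbols it owns. The exchange, queue, and routing-key names below are hypothetical, not Prodigy's actual topology:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Hypothetical topic exchange keyed by symbol.
channel.exchange_declare(exchange="orders", exchange_type="topic")

# A Worker that owns symbol ABC binds its queue to that routing key.
channel.queue_declare(queue="worker-1")
channel.queue_bind(queue="worker-1", exchange="orders", routing_key="symbol.ABC")

# A Gateway publishes each instruction with its symbol as the routing key.
channel.basic_publish(
    exchange="orders",
    routing_key="symbol.ABC",
    body=b'{"type": "new_order", "symbol": "ABC", "qty": 100}',
)
connection.close()
```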
Prodigy utilises Redis for coordination between Workers and caching of the market state.
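A common pattern for this kind of coordination is an expiring ownership key that the owning Worker keeps refreshing; if that Worker dies, the key lapses and another instance can claim the symbol. This is a generic sketch of the pattern with hypothetical key names, not necessarily Prodigy's implementation:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def try_claim_symbol(worker_id: str, symbol: str, ttl_seconds: int = 10) -> bool:
    # SET ... NX EX: succeeds only if no other Worker currently holds the symbol.
    return bool(r.set(f"symbol-owner:{symbol}", worker_id, nx=True, ex=ttl_seconds))

def refresh_claim(worker_id: str, symbol: str, ttl_seconds: int = 10) -> None:
    # The owner periodically extends its lease while healthy. (A production
    # version would make this check-and-extend atomic, e.g. via a Lua script.)
    if r.get(f"symbol-owner:{symbol}") == worker_id.encode():
        r.expire(f"symbol-owner:{symbol}", ttl_seconds)

if try_claim_symbol("worker-1", "ABC"):
    print("worker-1 now owns ABC")
```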
Prodigy is intended for 24-hour availability, with zero downtime even during system upgrades.
To achieve this, every component of Prodigy can be run redundantly.
- Gateway – FIX sessions can be resumed from any instance.
- Web Service – The REST API is stateless and all instances are identical.
- Worker – Instructions will automatically be rerouted in the event of a fault.
- Monitor – Failover tasks can be performed by any instance.
- PostgreSQL – Supports replication for fault tolerance.
- RabbitMQ – Supports clustering for fault tolerance.
- Redis – Supports clustering for fault tolerance.
Additionally, order operations are only confirmed once the action has been written to the database, so even in the event of a widespread system failure, confirmed operations cannot be lost.
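The ordering is what provides the guarantee: commit first, confirm second. A sketch of that sequence using psycopg2 and a hypothetical events table:

```python
import psycopg2

def send_confirmation(order_id: str) -> None:
    # Stand-in for the real acknowledgement path (e.g. publishing to RabbitMQ).
    print(f"confirmed {order_id}")

conn = psycopg2.connect("dbname=prodigy")  # connection string is illustrative

def apply_order(order_id: str, payload: str) -> None:
    # Hypothetical schema: an append-only log of market events.
    with conn:  # commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO market_events (order_id, payload) VALUES (%s, %s)",
                (order_id, payload),
            )
    # Only after the commit does the client learn the order was accepted.
    send_confirmation(order_id)
```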
Prodigy is intended to scale based on the workload required.
Both the Gateway and the Web Service can scale horizontally and be load-balanced.
Workers function in a cluster and scale based on the number of active symbols. Inactive (no recent activity) symbols do not consume any resources. New Workers can be added or removed dynamically, with automated migration of any active symbols.
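One well-known way to achieve this kind of dynamic assignment is rendezvous (highest-random-weight) hashing, under which each symbol maps deterministically to a Worker and only the departed Worker's symbols move when the cluster changes. This is a sketch of the general technique, not necessarily the scheme Prodigy uses:

```python
import hashlib

def owner(symbol: str, workers: list[str]) -> str:
    """Rendezvous hashing: the symbol goes to the worker with the highest score."""
    def score(worker: str) -> int:
        digest = hashlib.sha256(f"{worker}:{symbol}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return max(workers, key=score)

workers = ["worker-1", "worker-2", "worker-3"]
print(owner("ABC", workers))
# Removing a worker only reassigns the symbols that worker owned:
print(owner("ABC", [w for w in workers if w != "worker-2"]))
```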
RabbitMQ and Redis both support clustering, allowing their load to be distributed across multiple machines if necessary.
Prodigy utilises PostgreSQL in a very write-heavy manner, so the throughput benefits of read replication are minimal: writes must still pass through the primary. However, we do not anticipate this being a limitation for anything but the heaviest workloads.
Prodigy is designed to be an extensible framework for building an exchange, easing the burden on implementors.
It achieves this goal by providing three points of extensibility (sketched after this list):
- The Data Model, which defines what properties the various elements of the exchange possess, and the events that alter them
- The Execution Engine, which defines operations on the Data Model
- The Translation Layer, which formats the Data Model into FIX messages for consumption
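As a rough illustration, the three extension points could be expressed as interfaces along these lines; the names and signatures here are hypothetical:

```python
from abc import ABC, abstractmethod

class DataModel(ABC):
    """Defines the exchange's entities and the events that alter them."""
    @abstractmethod
    def apply_event(self, event: dict) -> None: ...

class ExecutionEngine(ABC):
    """Defines operations (e.g. matching) on the Data Model."""
    @abstractmethod
    def execute(self, instruction: dict, model: DataModel) -> list[dict]: ...

class TranslationLayer(ABC):
    """Formats Data Model events as FIX messages for consumption."""
    @abstractmethod
    def to_fix(self, event: dict) -> bytes: ...
```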
Additionally, a single Prodigy installation can run multiple exchanges with different models, execution engines, and translation layers in parallel. New exchanges can be created while the system is running.
Prodigy is designed to have an acceptable baseline of performance, while prioritising the previous goals: reliability, scalability, and extensibility.
Additionally, Prodigy prioritises throughput over latency, such that a single instruction can take some time to flow through the system, even while the overall number of executions per second remains high.
For these reasons, Prodigy is not intended to be a high-frequency-trading platform.
Prodigy is intended to run in as many environments as possible.
All the components of Prodigy are built on .NET Core and are capable of running on Windows or Linux hosts.
Prodigy can be containerised with tools such as Docker.
Prodigy does not require specialised hardware, and is intended to run on standard commodity infrastructure, both cloud and in-house. Prodigy does not depend on any one cloud provider.