One technique for optimizing trading systems is to create a regime filter. The most common example is a binary classifier that labels the market as bull or bear depending on whether price closes above or below the 200-day moving average. But binary classifiers have problems, and we will demonstrate why a multi-state classifier is often a more powerful approach. In fact, a multi-state classifier combined with an optimizer is a basic but powerful form of machine learning.
There are some obvious problems with binary classification. First, assuming that the market has only two regimes is a somewhat arbitrary decision. Second, it is a rather crude tool. For example, we might find it more reasonable to classify a variable into multiple classes such as bearish, neutral, and bullish. The multi-state classifier is a powerful technique that allows for such classifications and more. In the moving-average example, we might take the difference between the close and the moving average and let that value select our trading style, allowing the system to adapt to current market conditions. The possible applications are extensive.
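To make the idea concrete, here is a minimal Python sketch of a three-state classifier driven by the close's distance from its moving average. The function name, the percentage band, and the state labels are all hypothetical choices for illustration, not part of the original system.

```python
def classify_ma_distance(close, ma, band=0.02):
    """Hypothetical 3-state classifier: percent distance of close from its MA.

    A band of +/-2% around the moving average is treated as neutral;
    beyond the band the market is classified bearish or bullish.
    """
    pct = (close - ma) / ma
    if pct < -band:
        return "bearish"
    if pct > band:
        return "bullish"
    return "neutral"
```

A system could then switch its parameter set (its "trading style") on the returned state instead of flipping between just two modes.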
One benefit of multi-state classifiers over a linear-regression approach is that they make it easy to encode non-linear logic. For example, imagine a hypothetical scenario where we want to trade conservatively when the market is very bearish, aggressively when it is bearish, aggressively when it is bullish, and conservatively again when it is very bullish, on the theory that extreme markets carry higher risk. This is an example of a non-linear mapping:
|Market Conditions|Trading Selectivity|
|---|---|
|Very bearish|Conservative|
|Bearish|Aggressive|
|Bullish|Aggressive|
|Very bullish|Conservative|
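A mapping like this could be sketched in Python as follows. The score range, bucket boundaries, and labels are assumptions for illustration; the point is that selectivity is not monotonic in the score, which is exactly what a single linear regression coefficient cannot express.

```python
def regime(score):
    """Bucket a hypothetical market-condition score in [0, 100] into four states."""
    if score < 20:
        return "very bearish"
    if score < 50:
        return "bearish"
    if score < 80:
        return "bullish"
    return "very bullish"

# Non-linear mapping: the extremes are conservative, the middle is aggressive.
SELECTIVITY = {
    "very bearish": "conservative",
    "bearish": "aggressive",
    "bullish": "aggressive",
    "very bullish": "conservative",
}
```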
In other words, we’re trying to find a predictor that can positively inform our trading decisions. We can create as many classes or states as we desire, i.e., we can have multiple optimal values for our indicator. One word of caution, however: if we create too many classes, we run the risk of overfitting. We can guard against overfitting in a few ways. We can limit the number of classes we create, and we can enforce a minimum number of trades per class, just as we would with an individual system. Finally, we can use “forced logic” so that the optimizer cannot optimize each class individually but is only able to optimize some delta. We do not consider such advanced possibilities any further in this example.
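The minimum-trades-per-class guard mentioned above could be checked with a helper like this. The function name and the minimum of 30 trades are hypothetical; the idea is simply to flag any class too thinly populated to trust its optimized parameters.

```python
from collections import Counter

def underpopulated_classes(trade_classes, min_trades=30):
    """Return the set of classes whose trade count falls below min_trades.

    trade_classes: one class label per historical trade, e.g. the regime
    that was active when the trade was taken. Any class returned here is
    a candidate for merging with a neighbor or for exclusion.
    """
    counts = Counter(trade_classes)
    return {cls for cls, n in counts.items() if n < min_trades}
```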
For this example, we will look at a long-only version of Connors’ RSI(2) system applied to the ES futures contract, and we will seek to classify the optimal buy and sell thresholds based on a longer-term RSI. We observe that the intermediate RSI tends to stay in its overbought or oversold regions during extended periods of strength or weakness. We want to make it easier to buy and sell during strong periods, and more difficult to buy and sell during weak periods. In other words, we want to become more selective when the market is persistently weak, which makes sense because the risk is higher.
In this example, we’ll use an RSI(14) as our longer-term RSI. We observe that the RSI(14) tends to stay between 15 and 70.
However, to be sure that we capture the lower and upper extremes, we define our classes as: -infinity to 30, 30 to 50, and 50 to infinity. If the intermediate RSI is in the lower region, that indicates a more bearish market and dictates more trade selectivity; on the other hand, if the market is strong, we need to be more aggressive to capture the dips. We could use the optimizer to find the optimal values for these classes; in this example, however, we simply force the logic we desire and optimize by hand.
Before we get to the results, it is illustrative to see the most pertinent EasyLanguage code. The class-assignment code can be rather short and elegant as long as we pay attention to the ordering of the statements:
{ Later assignments override earlier ones, so the highest matching class wins. }
BuyThresholdV = 30; // default
BuyThresholdV = IFF( RsiClassifier >= -10, 10, BuyThresholdV); // weak market: selective entries
BuyThresholdV = IFF( RsiClassifier >= 30, 20, BuyThresholdV);
BuyThresholdV = IFF( RsiClassifier >= 50, 30, BuyThresholdV); // strong market: buy the dips aggressively
SellThresholdV = 70; // default
SellThresholdV = IFF( RsiClassifier >= -10, 60, SellThresholdV); // weak market: exit quickly
SellThresholdV = IFF( RsiClassifier >= 30, 70, SellThresholdV);
SellThresholdV = IFF( RsiClassifier >= 50, 80, SellThresholdV); // strong market: hold longer
BuyCondition = RsiSignal < BuyThresholdV;
SellCondition = RsiSignal > SellThresholdV;
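For readers more comfortable outside EasyLanguage, the same cascading logic can be expressed as a small Python function. This is a direct translation of the statements above, relying on the same trick that later assignments override earlier ones.

```python
def thresholds(rsi_classifier):
    """Translate the EasyLanguage cascade: the highest matching class wins."""
    buy, sell = 30, 70        # defaults
    if rsi_classifier >= -10:
        buy, sell = 10, 60    # weak market: selective entries, quick exits
    if rsi_classifier >= 30:
        buy, sell = 20, 70
    if rsi_classifier >= 50:
        buy, sell = 30, 80    # strong market: buy the dips, hold longer
    return buy, sell
```

The buy/sell conditions then become `rsi_signal < buy` and `rsi_signal > sell`, just as in the EasyLanguage version.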
Finally, we compare the simple RSI(2) strategy to our hand-optimized multi-state version.
|@ES Daily 1/2/2004-1/26/2017|RSI(2) Simple|RSI(2) Multi-State|
|---|---|---|
|Avg Profit Per Trade|$304.15|$404.61|
|Number of Trades|283|236|
We introduced the concept of multi-state (or multi-class) classification using the RSI(2) strategy as an example. Using only hand optimization and forced logic, our multi-state classifier improved the net profit and significantly improved the average profit per trade. The winning percentage was essentially unchanged.
Unfortunately, we were not able to significantly reduce the drawdown with this method. This suggests that our profits derive more from trading aggressively in bull markets than from avoiding losses in bear markets. However, the lower trade count combined with the higher profits suggests that the average quality of the trades we took was higher; it also suggests that both systems took one or a few large hits at some point over the test history.
One possible explanation is that the RSI itself tends to make large movements on a bar-by-bar basis, which limits the degree to which its value can inform our decisions. Fortunately, one is not limited to the RSI; there are many other market conditions one can classify that may yield even better performance.
The author is passionate about markets. He has developed top ranked futures strategies. His core focus is (1) applying machine learning and developing systematic strategies, and (2) solving the toughest problems of discretionary trading by applying quantitative tools, machine learning, and performance discipline. You can contact the author at firstname.lastname@example.org.