Data assimilation


As a first step towards creating high-quality forecasts, we collect relevant information and data from a variety of reputable sources. The acquired data spans a 5-10 year period covering many macroeconomic scenarios and situations.

Today, we hold approximately one million data points from roughly 700 unique sources, updated daily. Although the bulk of the purchased and acquired data is financial, approximately 35% is non-financial macroeconomic information. This reflects several studies showing that the stock market is affected by factors beyond purely financial information.

Once the data has been assimilated from our sources, it is refined through:

  • sorting and classification by type, date, and a range of other attributes, enabling unified data management;
  • selection, whereby all data is reviewed and irrelevant or incorrect data is removed, ensuring high quality;
  • consolidation, whereby the cleaned data is organized into hundreds of different tables (a simplified sketch of this pipeline follows the list).
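For illustration only, a minimal Python sketch of such a refinement pipeline is shown below. The field names (source, category, timestamp, value), the validity checks, and the per-category tables are hypothetical stand-ins for the actual attributes and quality rules.

```python
import pandas as pd

# Hypothetical raw feed: each record carries a source, a category, a
# timestamp and a value. Real records would carry many more attributes.
raw = pd.DataFrame([
    {"source": "feed_a", "category": "equity", "timestamp": "2024-01-02", "value": 101.5},
    {"source": "feed_b", "category": "macro",  "timestamp": "2024-01-02", "value": 2.1},
    {"source": "feed_a", "category": "equity", "timestamp": "bad-date",   "value": None},
])

# Sorting and classification: normalize types, then order by category and date.
raw["timestamp"] = pd.to_datetime(raw["timestamp"], errors="coerce")
raw = raw.sort_values(["category", "timestamp"])

# Selection: drop records that are incomplete or failed to parse.
clean = raw.dropna(subset=["timestamp", "value"])

# Consolidation: split the cleaned data into one table per category,
# standing in for the "hundreds of different tables" mentioned above.
tables = {name: group.reset_index(drop=True)
          for name, group in clean.groupby("category")}

print(tables["equity"])
```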

The data preparation step requires a large amount of data management and computation, and is crucial to providing high-quality forecasts.

Every day, once the market closes, our algorithms are re-run and the core of our system is updated to reflect the newly received data points and to take account of recent influential events. The system is retrained, learning and adjusting itself to the latest up- or downtrends over the short, medium, and long terms. The updated core is then used to generate new forecasts with a number of different AI engines. A higher-level AI engine, which we call our "consensus engine", judges the relevance of the various primary engines, refines their forecasts, and generates and publishes the final forecasts for the coming period.
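To illustrate the idea, the sketch below shows one way a consensus layer could combine primary-engine forecasts. The engine names, the accuracy scores, and the weighting-by-recent-accuracy rule are hypothetical and stand in for the proprietary logic.

```python
from dataclasses import dataclass

@dataclass
class EngineOutput:
    name: str
    forecast: float         # e.g. predicted return for the coming period
    recent_accuracy: float  # hypothetical relevance score in [0, 1]

def consensus(outputs: list[EngineOutput]) -> float:
    """Combine primary-engine forecasts into one forecast, weighting
    each engine by its recent accuracy (an illustrative rule only)."""
    total = sum(o.recent_accuracy for o in outputs)
    if total == 0:
        raise ValueError("no engine has a usable accuracy score")
    return sum(o.forecast * o.recent_accuracy / total for o in outputs)

# Hypothetical daily run: three primary engines, one consensus forecast.
engines = [
    EngineOutput("trend_engine",   forecast=0.012,  recent_accuracy=0.8),
    EngineOutput("macro_engine",   forecast=-0.004, recent_accuracy=0.5),
    EngineOutput("pattern_engine", forecast=0.007,  recent_accuracy=0.3),
]
print(f"consensus forecast: {consensus(engines):+.4f}")
```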

Our AI models are thus entirely empirical: they are built on historical data, without the intervention of human expertise, using both conventional models (classical technical analysis) and proprietary models as a basis. The entire system is self-learning and continuously evolves as new events are introduced to it.