Proof of concept

To date, we have demonstrated the performance of several key elements of our proposed concept.

Combinatorial space

We have designed and implemented a solution to the supervised learning problem for the case where both the input and the output data are represented by binary vectors. Patterns of bits in these vectors reflect patterns in the input and output data, as well as relations between them of arbitrary complexity. The proposed method solves the same problems that have traditionally been solved by neural networks, yet it is free from many of their deficiencies and possesses a number of surprising properties.
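The flavor of the approach can be conveyed by a minimal sketch, shown below. Everything in it is an illustrative assumption rather than our actual implementation: the vector sizes, the random receptive subsets, and the naive reliability-voting rule are chosen only to show how relations between bit patterns can be captured statistically, without gradient-based training.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 256, 64       # lengths of the binary input/output vectors
N_POINTS, BITS = 2000, 3    # points in the space, input bits watched per point

# Each point of the combinatorial space watches a small random subset
# of input bit positions.
receptive = rng.choice(N_IN, size=(N_POINTS, BITS))

# Co-occurrence statistics: counts[p, o] counts how often output bit o
# was set while point p was fully active; hits[p] counts activations.
counts = np.zeros((N_POINTS, N_OUT))
hits = np.zeros(N_POINTS)

def train(x, y):
    """x, y: 0/1 numpy vectors. Accumulate input-output co-occurrences."""
    active = x[receptive].all(axis=1)   # points whose whole subset is lit
    counts[active] += y
    hits[active] += 1

def predict(x, reliability=0.9):
    """Set an output bit if any active point saw it reliably co-occur."""
    active = x[receptive].all(axis=1) & (hits > 0)
    freq = counts[active] / hits[active, None]
    return (freq > reliability).any(axis=0).astype(int)
```

The key property the sketch is meant to expose is that no single point ever sees the whole input; reliable input-output relations emerge from the statistics of many partial observers.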

Context space

For the task of visual perception, the various spatial rotations of a three-dimensional object serve as contexts. We have shown that, using an architecture similar to neural-network autoencoders, such contexts can be trained to learn the corresponding transformation rules. In doing so, it is possible to ensure that the internal representation of information is the same across all contexts. Training was performed on handwritten symbols printed on the faces of a three-dimensional cube. We have shown that the context space allows us to determine with high accuracy not only which symbols are depicted, but also the rotation angles of the cube on which these symbols are applied.
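A toy sketch of the mechanism follows. Here the contexts are simply the four 90-degree rotations of an image plane, and a template match stands in for the autoencoder; the rotation set, the template "encoder", and the glyph are all simplifying assumptions, not the architecture used in the actual experiment.

```python
import numpy as np

CONTEXTS = [0, 1, 2, 3]   # contexts = rotations by 0, 90, 180, 270 degrees

class ContextSpace:
    def __init__(self):
        self.templates = {}   # the shared internal representation per symbol

    def learn(self, img, label):
        """Average canonical (unrotated) views of a symbol into a template."""
        if label not in self.templates:
            self.templates[label] = img.astype(float)
        else:
            self.templates[label] = 0.5 * (self.templates[label] + img)

    def recognize(self, img):
        """Run the input through every context's inverse transformation and
        pick the (symbol, angle) pair with the smallest reconstruction error."""
        best_label, best_angle, best_err = None, None, np.inf
        for k in CONTEXTS:
            canon = np.rot90(img, -k)   # undo the rotation this context assumes
            for label, t in self.templates.items():
                err = np.mean((canon - t) ** 2)
                if err < best_err:
                    best_label, best_angle, best_err = label, 90 * k, err
        return best_label, best_angle

cs = ContextSpace()
glyph = np.zeros((8, 8))
glyph[1, 2:6] = 1; glyph[2:7, 5] = 1      # a crude "7"
cs.learn(glyph, "7")
print(cs.recognize(np.rot90(glyph, 2)))   # -> ('7', 180)
```

Because every context maps its input back into the same canonical template space, one recognition pass yields both the identity of the symbol and the transformation that was applied to it, which is the property described above.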

Contexts and in-memory computation

We have shown that, within the contextual approach, it is possible to implement and combine the mechanisms of formal grammars and of recurrent neural networks with long short-term memory (LSTM). We demonstrated this by parsing arbitrary Russian-language texts.
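The grammar side of this can be conveyed by a toy sketch in which each context is a production rule acting on a memory trace. The rules and lexicon below are invented for the example, English words stand in for Russian ones for readability, and the trace is a plain stack rather than the LSTM-like memory of the actual system.

```python
# Each context is a production rule: a pattern over the tail of the
# memory trace and the category it rewrites that pattern into.
RULES = [
    (("Adj", "Noun"), "Noun"),
    (("Det", "Noun"), "NP"),
    (("Verb",), "VP"),
    (("NP", "VP"), "S"),
]

LEXICON = {"the": "Det", "old": "Adj", "cat": "Noun", "sleeps": "Verb"}

def parse(words):
    trace = []                       # the memory trace (acts as a parse stack)
    for w in words:
        trace.append(LEXICON[w])     # store the new token's category
        reduced = True
        while reduced:               # repeatedly let a matching context fire
            reduced = False
            for pattern, head in RULES:
                n = len(pattern)
                if tuple(trace[-n:]) == pattern:
                    trace[-n:] = [head]   # the context reinterprets the tail
                    reduced = True
                    break
    return trace

print(parse("the old cat sleeps".split()))   # -> ['S']
```

The sketch covers only the grammar half of the claim: it shows how selecting an applicable context and rewriting the memory trace can emulate rule application, while the combination with recurrent, LSTM-style memory is what the full experiments add on top.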