We are proud to introduce the Allora Network litepaper, which lays out Allora’s novel Inference Synthesis mechanism and provides additional context on the problems the network aims to solve.
Recent advances in data access and computing power have enabled the first forms of machine intelligence capable of offering meaningful insights. However, the tremendous resources required have caused these solutions to be closed and siloed by industry monoliths. To achieve the full potential of machine intelligence, the data, algorithms, and participants must be maximally connected. Network solutions are needed, and the decentralized nature of blockchain technology is ideal for solving this problem.
Allora is a self-improving, decentralized machine intelligence network that surpasses the capabilities of its individual participants. Allora achieves this through two main innovations:

1. Context-aware inference synthesis
2. A differentiated incentive structure
Unlike other decentralized AI networks, which lack the capability for context awareness, Allora is designed to evaluate and combine the predictive outputs of ML models based on both their historical performance and their relevance to current contextual conditions. In doing so, Allora ensures that the network always leverages the most appropriate models for any given situation.
Other attempts to create self-improving networked intelligence rely on a cumulative reputation system to aggregate inferences, a method that inhibits context awareness. As such, these networks tend to favor models with historically strong performance, overlooking the fact that some models perform well only under certain conditions. By dismissing models that don’t consistently perform well, these networks miss out on optimizing accuracy.
Allora’s context awareness allows it to understand that in certain conditions — and only in certain conditions — some models are worth paying attention to. This leads to context-aware machine intelligence.
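To make the contrast concrete, here is a minimal sketch of the difference between cumulative-reputation weighting and context-conditioned weighting. The model names, loss values, and the "markets open/closed" condition are illustrative assumptions, not data from the network, and inverse-loss weighting stands in for whatever weighting rule a real network would use:

```python
# Illustrative sketch (not Allora's actual formulas): contrast a cumulative
# reputation scheme with context-aware weighting.

def inverse_loss_weights(losses):
    """Turn per-model losses into normalized weights (lower loss -> higher weight)."""
    raw = {m: 1.0 / l for m, l in losses.items()}
    total = sum(raw.values())
    return {m: w / total for m, w in raw.items()}

# Average historical loss per model across ALL conditions (hypothetical values).
cumulative_losses = {"model_a": 0.10, "model_b": 0.25}

# The same models, with losses broken out by a contextual condition.
conditional_losses = {
    "markets_open": {"model_a": 0.08, "model_b": 0.40},
    "markets_closed": {"model_a": 0.30, "model_b": 0.05},
}

# A cumulative-reputation network always favors model_a...
print(inverse_loss_weights(cumulative_losses))
# ...while context-aware weighting favors model_b whenever markets are closed.
print(inverse_loss_weights(conditional_losses["markets_closed"]))
```

A cumulative scheme would keep discounting model_b forever, even though it is the better model in one recurring regime; conditioning the weights on the current context recovers that accuracy.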
The second critical hurdle to achieving decentralized machine intelligence is creating custom incentive structures that appropriately reward different actions within the network.
Therefore, workers in the network are rewarded for two primary actions:

1. Providing inferences to the network
2. Forecasting the performance of other workers’ inferences
Allora’s context awareness is made possible by rewarding participants for forecasting one another’s performance under given conditions. In many networks, worker nodes provide only an inference and nothing else.

On Allora, worker nodes can provide two outputs:

1. An inference for the topic at hand
2. Forecasts of the losses that other workers’ inferences will incur

In other words, the network supports a logic whereby worker nodes are rewarded for commenting on each other’s expected performance, through a mechanism called Forecast-Implied Inference. This is the core mechanism that makes Allora context-aware.
For example, in a topic (or sub-network) on Allora that aims to predict the price of BTC, workers might report that a particular model performs worse when US equities markets are closed.
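A minimal sketch of how a forecast-implied inference could be computed from that kind of commentary. The worker names, BTC prices, and forecasted losses are hypothetical, and the softmax-over-negative-losses weighting is our illustrative assumption, not the litepaper’s actual formula:

```python
import math

def forecast_implied_inference(inferences, forecasted_losses, temperature=1.0):
    """Combine peer inferences into one value, weighting each inference by how
    low its loss is forecast to be under current conditions (softmax over
    negative losses; the exact weighting function is an assumption here)."""
    weights = {w: math.exp(-loss / temperature) for w, loss in forecasted_losses.items()}
    total = sum(weights.values())
    return sum(inferences[w] * weights[w] / total for w in inferences)

# Hypothetical BTC-price inferences from three workers...
inferences = {"w1": 64_000.0, "w2": 65_500.0, "w3": 63_200.0}
# ...and a forecaster's predicted losses for each, given that US equities
# markets are currently closed (w2's model is expected to do poorly).
forecasted_losses = {"w1": 0.2, "w2": 1.5, "w3": 0.3}

print(forecast_implied_inference(inferences, forecasted_losses))
```

The forecaster’s knowledge that w2 suffers when markets are closed pulls the combined inference away from w2’s estimate, without discarding w2 from the network entirely.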
Like their counterparts in other decentralized AI networks, workers in Allora provide their inferences to the network. What sets Allora apart is the additional responsibility of the forecasting task: workers forecast the accuracy of other participants’ inferences within their specific topic (or sub-network). This dual-layer contribution significantly enriches the network’s intelligence.
The entire process of condensing inferences and forecasted losses into a single inference is referred to as Inference Synthesis. Here’s how the Inference Synthesis mechanism unfolds:

1. Workers submit their inferences for the topic.
2. Workers also submit forecasts of the losses they expect other workers’ inferences to incur under current conditions.
3. The network combines these forecasted losses with the submitted inferences to produce forecast-implied inferences.
4. All of these are condensed into a single network inference.
Relying solely on a model’s historical success fails to account for its contextual prowess — ignoring the fact that a model might excel in specific situations due to its nuanced understanding of particular aspects. Allora’s method overcomes this limitation, ensuring no model is overlooked simply because its strength lies in niche, context-specific scenarios.
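The final condensation step can be sketched as follows. Here the candidate pool, the loss values, and the inverse-historical-loss weighting rule are all illustrative assumptions; the point is only that raw inferences and forecast-implied inferences enter the same pool and are weighted by accuracy rather than discarded:

```python
def synthesize(candidates, historical_losses, epsilon=1e-6):
    """Condense a pool of candidate inferences (raw and forecast-implied alike)
    into one network inference, weighting each candidate by the inverse of its
    historical loss. The inverse-loss rule is an illustrative assumption."""
    weights = {c: 1.0 / (historical_losses[c] + epsilon) for c in candidates}
    total = sum(weights.values())
    return sum(candidates[c] * weights[c] / total for c in candidates)

# Hypothetical pool: two raw inferences plus one forecast-implied inference.
candidates = {"raw_w1": 64_000.0, "raw_w2": 65_500.0, "forecast_implied": 63_900.0}
historical_losses = {"raw_w1": 0.4, "raw_w2": 0.9, "forecast_implied": 0.2}

print(synthesize(candidates, historical_losses))
```

Because the forecast-implied inference encodes context that the raw inferences lack, it tends to earn a low loss and therefore a large share of the weight, while weaker candidates are merely down-weighted, never excluded.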
This framework is complemented with the network’s differentiated incentive structure, which rewards workers for their inferences, and rewards reputers for their scoring of other workers’ inferences.
In the Allora Network, one cannot buy the truth. However, one can be — and should be — rewarded for reporting on the ground truth. An inference’s assigned reward should be a means of rewarding the truth, not rewarding a worker’s stake in the network. However, reputers still stake because the network must have economic security. This stake is then used for determining the ground truth, which does not require any particular insight; it simply solves the oracle problem of incentivizing reputers to relay information honestly.
This ethos shapes how contributions are valued and rewarded in the Allora Network. Because network participants adopt different roles in Allora, they are rewarded through different incentive structures:

- Workers are rewarded for the accuracy of their inferences and forecasts.
- Reputers are rewarded for faithfully scoring workers’ inferences against the ground truth, with their stake providing the network’s economic security.
Because of this differentiated incentive structure between workers and reputers, the network optimally combines the inferences produced by the network participants without diluting their weights by something unrelated to accuracy. This is achieved by recognizing — and rewarding — both the historical and context-dependent accuracy of each inference.
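One way the reputers’ stake could be used to settle on a ground truth is a stake-weighted median of their reports, so that an honest majority-by-stake determines the outcome. The reputer names, stakes, and values below are hypothetical, and the stake-weighted median is our illustrative choice, not necessarily the network’s actual rule:

```python
def stake_weighted_ground_truth(reports):
    """Settle on a ground-truth value as the stake-weighted median of reputer
    reports: the value at which cumulative stake first reaches half the total.
    A single low-stake reporter cannot move the result."""
    ordered = sorted(reports, key=lambda r: r["value"])
    half = sum(r["stake"] for r in ordered) / 2.0
    running = 0.0
    for r in ordered:
        running += r["stake"]
        if running >= half:
            return r["value"]

# Hypothetical reputer reports of the realized BTC price.
reports = [
    {"reputer": "r1", "stake": 100.0, "value": 64_100.0},
    {"reputer": "r2", "stake": 120.0, "value": 64_050.0},
    {"reputer": "r3", "stake": 5.0, "value": 99_999.0},  # dishonest outlier, tiny stake
]
print(stake_weighted_ground_truth(reports))
```

Note that this step requires no particular insight from the reputers, only honest reporting; the stake exists to make dishonest reporting economically irrational, not to buy influence over the truth.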
Allora’s collective intelligence will always outperform any individual contributing to the network.
The original mission in creating Allora was to commoditize the world’s intelligence. The innovations of context-aware inference and the differentiated incentive structure address two major challenges standing in the way of that mission.
The network invites contributions from anyone with data or algorithms that improve it. As such, the quest for machine intelligence is not a winner-take-all race. Just as Allora will always outperform the individual contributions to the network, so too will isolated networks always pale in comparison to the step changes possible in decentralized AI at large.
With the versatility and accessibility of Allora, we foresee a future where machine intelligence will eventually become fully commoditized and integrated with the economy, technology, and society. How exactly this may happen will be shaped by the community building on Allora. We are excited to be part of that journey.
To read the full litepaper, click here.