Architecture Components

Modes of Operation
IntelliX oracles can operate in two distinct modes:
Push Mode: In push mode, IntelliX oracles periodically fetch data and push the results to a smart contract on a predefined schedule.
Pull Mode: In pull mode, the request for data fetching is triggered by a smart contract. This mode is useful for situations where data needs to be pulled on demand.
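A minimal sketch of how a chain client might dispatch the two modes is shown below. The type and function names (FeedConfig, startPushFeed, onDataRequest) are illustrative assumptions, not part of the IntelliX API.

```typescript
// Hypothetical sketch of push vs. pull triggering in a chain client.
type Mode = "push" | "pull";

interface FeedConfig {
  id: string;
  mode: Mode;
  intervalMs?: number; // push mode: publish on a fixed schedule
}

// Push mode: fetch on a timer and publish the result on schedule.
function startPushFeed(
  cfg: FeedConfig,
  fetchAndPublish: () => Promise<void>
): ReturnType<typeof setInterval> {
  return setInterval(() => void fetchAndPublish(), cfg.intervalMs ?? 60_000);
}

// Pull mode: an on-chain request event triggers the fetch on demand.
function onDataRequest(
  requestId: string,
  fetchAndPublish: () => Promise<void>
): Promise<void> {
  console.log(`request ${requestId} detected, fetching data on demand`);
  return fetchAndPublish();
}
```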
Components
IntelliX network is divided into two main components:
Core Chain: The foundational layer responsible for maintaining the ledger, registries, and implementing core functionalities such as slashing and data aggregation.
Chain Clients: The interface layer that interacts with various data sources and blockchains to collect, process, and disseminate data.
Core Chain
This section provides a detailed overview of the essential components that power the IntelliX network.
Data Sources Registry
The Data Sources Registry maintains a comprehensive list of all active data sources within the IntelliX network. A data source refers to any external origin of information that can be pulled by the IntelliX network. Each data source can include a set of structured (e.g., price feeds) or unstructured data pointers (e.g., legal documents) that need to be processed and published to an underlying blockchain. Each data source in the registry can also have an associated workflow definition and an aggregation function. The workflow defines how the data is processed, guiding the Data Processor Nodes in extracting relevant insights. The aggregation function determines how the processed data is combined and validated by the Aggregator Nodes, ensuring that the final data is accurate and reliable.
Data Processors Registry
The Data Processor Registry keeps detailed records of all Data Processor Nodes participating in the IntelliX network. Data Processors are responsible for fetching, analyzing, and transforming unstructured data from the data sources into structured formats that can be utilized by smart contracts. This registry monitors the performance, reliability scores, and status changes of each Data Processor Node. By maintaining an accurate and up-to-date registry, IntelliX ensures that only trusted and reliable processors contribute to the data aggregation process.
For example, in the case of price feeds, the data source entry would include the URL for fetching the data, the fetching and publishing frequency, and a JSONPath expression to extract the relevant value from the original payload.
For unstructured data, such as legal documents, the data source would consist of a set of document pointers, and the workflow would be represented by an agentic workflow definition that dictates how to extract structured data from those documents.
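The sketch below shows one possible shape for such entries, covering both a structured price feed and an unstructured document source. All field names and example values are assumptions for illustration, not the actual IntelliX registry schema.

```typescript
// Hypothetical shape of a Data Sources Registry entry.
interface DataSource {
  id: string;
  kind: "structured" | "unstructured";
  pointers: string[];            // feed URLs, or document URIs for legal texts
  fetchIntervalMs?: number;      // how often Data Processors fetch and publish
  extraction?: string;           // e.g. a JSONPath expression for structured payloads
  workflow?: string;             // reference to an agentic workflow definition
  aggregation: "median" | "majority"; // aggregation function used by Aggregator Nodes
}

// A structured price feed source ...
const ethUsd: DataSource = {
  id: "eth-usd",
  kind: "structured",
  pointers: ["https://api.example.com/price?pair=ETH-USD"],
  fetchIntervalMs: 30_000,
  extraction: "$.data.price",
  aggregation: "median",
};

// ... and an unstructured document source driven by an agentic workflow.
const kycDocs: DataSource = {
  id: "kyc-documents",
  kind: "unstructured",
  pointers: ["ipfs://QmExampleDocumentPointer"],
  workflow: "extract-compliance-fields-v1",
  aggregation: "majority",
};
```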
Observers Registry
The Observer Registry manages the list of Observer Nodes within the IntelliX network. Observer Nodes are responsible for monitoring specific smart contracts for data requests. These nodes track when a smart contract requires off-chain data and trigger the appropriate workflows within the IntelliX network.
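A hypothetical Observer Registry entry might look like the following; the fields are assumptions chosen to reflect the description above.

```typescript
// Illustrative Observer Registry record; not the actual IntelliX schema.
interface ObserverEntry {
  observerId: string;
  chainId: number;      // blockchain being watched
  contract: string;     // smart contract address to monitor
  requestEvent: string; // event signature that signals a data request
  workflowId: string;   // workflow to trigger when the event is seen
  active: boolean;
}

const example: ObserverEntry = {
  observerId: "observer-7",
  chainId: 1,
  contract: "0x1234567890abcdef1234567890abcdef12345678",
  requestEvent: "DataRequested(bytes32,string)",
  workflowId: "eth-usd",
  active: true,
};
```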
Aggregators Registry
The Aggregator Registry lists all the Aggregator Nodes responsible for reconciling data from various Data Processor Nodes within the IntelliX network. This registry monitors the activity and performance of Aggregator Nodes, ensuring they adhere to the required standards for data reconciliation. Aggregators use deterministic algorithms to score and reconcile data, considering factors like the confidence levels and reliability scores of the processors.
Slashing
The Slashing component implements penalties for incorrect data reporting, misbehavior, or downtime among Data Processor and Aggregator Nodes within the IntelliX network. By enforcing these penalties, the Slashing component ensures data integrity and reliability, encouraging all participants to adhere to the high standards of data accuracy and performance required by IntelliX.
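As an illustration only, a slashing rule could map offence types to stake penalties along these lines; the offence names and percentages are assumptions, not IntelliX parameters.

```typescript
// Toy slashing rule: offence type -> fraction of stake slashed.
type Offence = "incorrect_data" | "misbehavior" | "downtime";

const SLASH_FRACTION: Record<Offence, number> = {
  incorrect_data: 0.05, // 5% of stake for reporting data outside the accepted range
  misbehavior: 0.10,    // 10% for provable protocol violations
  downtime: 0.01,       // 1% for missing a required reporting window
};

function slashAmount(stake: bigint, offence: Offence): bigint {
  // Integer (basis point) math to avoid floating-point stake accounting.
  const bps = BigInt(Math.round(SLASH_FRACTION[offence] * 10_000));
  return (stake * bps) / 10_000n;
}

console.log(slashAmount(1_000_000n, "incorrect_data")); // 50000n
```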
Data Keeper
The Data Keeper is responsible for managing and orchestrating various types of transactions within the IntelliX network.
Data Request Transactions: These transactions are generated when specific contracts or applications within the IntelliX ecosystem require updated data. For example, a smart contract might request the latest market data or a compliance check before executing a transaction.
Data Feed Transactions: Data Feed Transactions involve the submission of individual pieces of processed data from the Data Processor Nodes. These feeds consist of structured data, tailored to meet the specific needs of smart contracts. The Data Feeds are stored on the IntelliX chain.
Aggregated Data Feed Transactions: Aggregated Data Feeds are the result of IntelliX’s data aggregation process, where structured data from multiple processors is combined into a single, validated data point. This aggregation reconciles discrepancies and mitigates the impact of errors, ensuring that the final data is accurate and reliable. The Aggregated Data is stored on the IntelliX chain.
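The sketch below models these three transaction kinds as a TypeScript union; the field names are assumptions for illustration.

```typescript
// Hypothetical view of the transaction kinds the Data Keeper orchestrates.
interface DataRequestTx {
  kind: "data_request";
  requester: string;     // contract or application asking for fresh data
  dataSourceId: string;
}

interface DataFeedTx {
  kind: "data_feed";
  dataSourceId: string;
  processorId: string;   // Data Processor Node that produced this feed
  value: string;         // structured payload, stored on the IntelliX chain
  confidence: number;
}

interface AggregatedDataFeedTx {
  kind: "aggregated_data_feed";
  dataSourceId: string;
  value: string;         // single validated data point after reconciliation
  signers: string[];     // aggregators that contributed to the final signature
}

type KeeperTx = DataRequestTx | DataFeedTx | AggregatedDataFeedTx;
```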
Chain Clients
The network comprises three main types of clients: Data Processor Client, Aggregator Client and Observer Client.
Data Processor Clients
Data Processor Clients are responsible for the core tasks of the IntelliX oracle network. These nodes fetch data from various off-chain sources, process it, and transform it into structured data that can be utilized by smart contracts. Each Data Processor Client hosts the execution of multiple workflows, and each workflow is associated with a specific data source.
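A minimal sketch of one workflow run for a structured source follows. The extractByPath helper is a toy stand-in for a full JSONPath evaluator, and all names are illustrative assumptions.

```typescript
// Sketch of a single Data Processor workflow run for a structured source.
interface StructuredFeed {
  dataSourceId: string;
  value: number;
  fetchedAt: number;
  confidence: number;
}

// Toy extractor: supports dotted paths like "data.price" only (not full JSONPath).
function extractByPath(payload: unknown, path: string): unknown {
  return path.split(".").reduce<any>((node, key) => node?.[key], payload);
}

async function runWorkflow(
  dataSourceId: string,
  url: string,
  path: string
): Promise<StructuredFeed> {
  const res = await fetch(url);                   // fetch raw payload from the off-chain source
  const raw = await res.json();
  const value = Number(extractByPath(raw, path)); // transform into a structured value
  return {
    dataSourceId,
    value,
    fetchedAt: Date.now(),
    confidence: Number.isFinite(value) ? 1 : 0,
  };
}
```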
Aggregator Clients
Aggregator Clients collect structured data from multiple Data Processor Clients and perform an aggregation process to ensure that all data is consistent and accurate. Aggregator Clients run aggregation functions that are responsible for scoring each piece of data. These functions assess the quality and consistency of the data, ensuring that any discrepancies are resolved before the data is finalized and published.
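One plausible aggregation function is a reliability-weighted median, sketched below. The exact scoring IntelliX uses is not specified here, so the weights and field names are assumptions.

```typescript
// Reliability-weighted median: low-reliability outliers barely move the result.
interface Submission {
  processorId: string;
  value: number;
  reliability: number; // 0..1 score taken from the Data Processors Registry
  confidence: number;  // 0..1 score reported with the feed
}

function weightedMedian(subs: Submission[]): number {
  const weighted = subs
    .map((s) => ({ value: s.value, weight: s.reliability * s.confidence }))
    .filter((s) => s.weight > 0)
    .sort((a, b) => a.value - b.value);

  const total = weighted.reduce((acc, s) => acc + s.weight, 0);
  let cumulative = 0;
  for (const s of weighted) {
    cumulative += s.weight;
    if (cumulative >= total / 2) return s.value; // first value crossing half the total weight
  }
  throw new Error("no submissions with positive weight");
}

console.log(
  weightedMedian([
    { processorId: "p1", value: 100.2, reliability: 0.95, confidence: 0.9 },
    { processorId: "p2", value: 100.1, reliability: 0.9, confidence: 0.9 },
    { processorId: "p3", value: 250.0, reliability: 0.1, confidence: 0.5 },
  ])
); // 100.2 -- the outlier from the unreliable processor is ignored
```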
To maintain the network's integrity, Aggregator Nodes utilize Pell's DVS to penalize misbehaving Data Processor Nodes. If a node is found to have provided false or manipulated data, it will be subject to slashing, where its stake is reduced as a penalty.
To further secure the aggregation process, a Threshold Signature Scheme (TSS) is employed at the aggregator level. TSS enhances the security and reliability of data aggregation by requiring multiple aggregators to collaborate in generating a valid signature for the aggregated data. This decentralized approach prevents any single aggregator from having full control over the signing process. Each aggregator computes a partial signature of the aggregated data using their private key share. These partial signatures are then combined to form a final signature that can be verified by any participant using the public key. This method mitigates the risk of malicious actors and strengthens the integrity and trustworthiness of the data, as the aggregated value must be agreed upon by multiple independent parties before it is published on the blockchain.
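The toy below illustrates only the t-of-n combination idea behind a threshold scheme, using Shamir-style shares and Lagrange interpolation over a small prime field. It is not a cryptographically sound TSS and omits all signature math and key handling.

```typescript
// Toy t-of-n combination: partial shares recombine into a value no single party holds.
const P = 2_147_483_647n; // prime modulus for the toy field (2^31 - 1)

function mod(a: bigint): bigint {
  return ((a % P) + P) % P;
}

// Modular inverse via Fermat's little theorem (P is prime).
function inv(a: bigint): bigint {
  let result = 1n, base = mod(a), exp = P - 2n;
  while (exp > 0n) {
    if (exp & 1n) result = mod(result * base);
    base = mod(base * base);
    exp >>= 1n;
  }
  return result;
}

interface PartialShare { x: bigint; y: bigint; } // one aggregator's contribution

// Combine any t shares by interpolating the underlying polynomial at x = 0.
function combine(shares: PartialShare[]): bigint {
  let acc = 0n;
  for (const { x: xi, y: yi } of shares) {
    let num = 1n, den = 1n;
    for (const { x: xj } of shares) {
      if (xj === xi) continue;
      num = mod(num * -xj);        // (0 - xj)
      den = mod(den * (xi - xj));
    }
    acc = mod(acc + yi * num * inv(den));
  }
  return acc;
}

// Shares of the value 424242 under f(x) = 424242 + 7x + 3x^2 (any 3 of n recover it).
const shares: PartialShare[] = [
  { x: 1n, y: mod(424242n + 7n + 3n) },
  { x: 2n, y: mod(424242n + 14n + 12n) },
  { x: 3n, y: mod(424242n + 21n + 27n) },
];
console.log(combine(shares)); // 424242n
```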
Observer Clients
Observer Clients are designed to monitor smart contracts on the blockchain for specific requests that require execution within the IntelliX network. When an Observer Node detects such a request, it triggers the appropriate workflow within the IntelliX network to fulfill the request.
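A sketch of such a monitoring loop appears below; getPendingRequests and triggerWorkflow are hypothetical stand-ins rather than IntelliX APIs.

```typescript
// Sketch of an Observer Client loop polling a contract for data requests.
interface DataRequest {
  requestId: string;
  contract: string;     // contract that emitted the request
  dataSourceId: string; // which data source (and thus workflow) should serve it
}

async function getPendingRequests(contract: string): Promise<DataRequest[]> {
  // A real client would scan new blocks or subscribe to the contract's
  // request events; here it is left as a stub.
  return [];
}

async function triggerWorkflow(req: DataRequest): Promise<void> {
  console.log(`triggering workflow for ${req.dataSourceId} (request ${req.requestId})`);
}

async function observe(contract: string, pollMs = 5_000): Promise<void> {
  for (;;) {
    for (const req of await getPendingRequests(contract)) {
      await triggerWorkflow(req); // hand off to the IntelliX workflow layer
    }
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```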