Schema: flow.stats Table: ez_core_metrics_hourly Type: View

Description

A comprehensive metrics table that aggregates core blockchain performance indicators on an hourly basis. This table provides key network health and performance metrics including block counts, transaction volumes, success rates, user activity, and fee collection data. The metrics are calculated using various aggregate functions (SUM, COUNT, MIN, MAX) from the core fact tables and are updated as new data arrives. This table serves as the primary source for network performance monitoring, trend analysis, and blockchain health assessment.
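
The hourly aggregation described above can be sketched as a single GROUP BY over the transaction fact table. The sketch below is illustrative only: the source column names (block_timestamp, block_height, tx_succeeded, proposer, fee) and the physical path flow.core.fact_transactions are assumptions, not the confirmed schema of core__fact_transactions.

    -- Illustrative sketch of the hourly roll-up pattern behind this view.
    -- Source column names and table path are assumed for illustration.
    SELECT
        DATE_TRUNC('hour', block_timestamp)  AS block_timestamp_hour,
        MIN(block_height)                    AS block_number_min,
        MAX(block_height)                    AS block_number_max,
        COUNT(DISTINCT block_height)         AS block_count,
        COUNT(*)                             AS transaction_count,
        COUNT_IF(tx_succeeded)               AS transaction_count_success,
        COUNT_IF(NOT tx_succeeded)           AS transaction_count_failed,
        COUNT(DISTINCT proposer)             AS unique_from_count,
        SUM(fee)                             AS total_fees_native
    FROM flow.core.fact_transactions
    GROUP BY 1;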

Key Use Cases

  • Network Performance Monitoring: Tracking blockchain throughput, transaction processing rates, and network efficiency
  • Health Assessment: Monitoring transaction success rates, failure patterns, and network reliability
  • User Activity Analysis: Understanding user engagement levels and network participation patterns
  • Economic Analysis: Tracking fee collection, revenue generation, and network economics
  • Trend Analysis: Identifying performance trends, seasonal patterns, and growth indicators
  • Alerting and Monitoring: Supporting automated monitoring systems for network health and performance
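
As a concrete example of the health-assessment and alerting use cases above, a query along these lines flags hours whose success rate drops below a threshold; the 95% cutoff and 7-day window are arbitrary illustrations, not recommended values.

    -- Hours in the last 7 days with a success rate below 95% (illustrative threshold).
    SELECT
        block_timestamp_hour,
        transaction_count,
        transaction_count_failed,
        ROUND(transaction_count_success / NULLIF(transaction_count, 0) * 100, 2) AS success_rate_pct
    FROM flow.stats.ez_core_metrics_hourly
    WHERE block_timestamp_hour >= DATEADD('day', -7, CURRENT_TIMESTAMP())
      AND transaction_count_success / NULLIF(transaction_count, 0) < 0.95
    ORDER BY block_timestamp_hour;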

Important Relationships

  • core__fact_blocks: Source data for block-related metrics and block range analysis
  • core__fact_transactions: Source data for transaction counts, success rates, and user activity metrics
  • core__fact_events: May provide additional context for transaction analysis
  • price__ez_prices_hourly: May link to FLOW price data for fee value analysis
  • silver_stats__core_metrics_hourly: May provide additional statistical context and validation
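
As a sketch of the price relationship noted above: if price__ez_prices_hourly exposes an hourly FLOW price, USD fee values can be cross-checked against the native amounts. The join below assumes the path flow.price.ez_prices_hourly and the columns hour, symbol, and price, none of which are confirmed by this page.

    -- Cross-check USD fees against an hourly FLOW price (assumed price table schema).
    SELECT
        m.block_timestamp_hour,
        m.total_fees_native,
        m.total_fees_usd,
        m.total_fees_native * p.price AS recomputed_fees_usd
    FROM flow.stats.ez_core_metrics_hourly m
    LEFT JOIN flow.price.ez_prices_hourly p
        ON p.hour = m.block_timestamp_hour
        AND p.symbol = 'FLOW'
    ORDER BY m.block_timestamp_hour DESC
    LIMIT 24;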

Commonly-used Fields

  • BLOCK_TIMESTAMP_HOUR: Essential for time-series analysis and temporal data aggregation
  • TRANSACTION_COUNT: Core metric for network activity and throughput analysis
  • TRANSACTION_COUNT_SUCCESS/FAILED: Critical for network health and reliability assessment
  • UNIQUE_FROM_COUNT/PAYER_COUNT: Important for user activity and network participation analysis
  • TOTAL_FEES_NATIVE/USD: Key for network economics and revenue analysis
  • BLOCK_COUNT: Fundamental metric for blockchain throughput and performance
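
The fields above combine naturally into trend queries. A minimal daily roll-up using only columns documented on this page might look like the following; note that summing hourly unique counts over-counts distinct daily users, so that figure is only an upper bound.

    -- Daily trend built from the commonly-used hourly fields.
    SELECT
        DATE_TRUNC('day', block_timestamp_hour)   AS activity_date,
        SUM(block_count)                          AS blocks,
        SUM(transaction_count)                    AS transactions,
        SUM(transaction_count_success) / NULLIF(SUM(transaction_count), 0) AS success_rate,
        SUM(unique_from_count)                    AS hourly_uniques_summed,  -- upper bound, not distinct daily users
        SUM(total_fees_native)                    AS fees_flow,
        SUM(total_fees_usd)                       AS fees_usd
    FROM flow.stats.ez_core_metrics_hourly
    GROUP BY 1
    ORDER BY 1 DESC;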

Columns

Each entry below lists the column name, its data type, and a description.
  • BLOCK_TIMESTAMP_HOUR (TIMESTAMP_NTZ): The hour timestamp marking the aggregation period for the hourly metrics. Provides the temporal reference for the aggregated values and is used for time-series analysis, hourly performance tracking, and trend analysis. Example: '2024-01-15 14:00:00' for the hour starting at 2 PM on January 15, 2024. Critical for maintaining temporal consistency in blockchain metrics analysis.
  • BLOCK_NUMBER_MIN (NUMBER): The lowest block number processed during the hour. Used for block range tracking, data gap identification, and understanding blockchain progression within the period. Example: 12345678 for the first block in the hour. Critical for data quality assessment and block progression monitoring.
  • BLOCK_NUMBER_MAX (NUMBER): The highest block number processed during the hour. Used for block range tracking, data gap identification, and understanding blockchain progression within the period. Example: 12345999 for the last block in the hour. Critical for data quality assessment and block progression monitoring.
  • BLOCK_COUNT (NUMBER): The total number of blocks created and processed during the hour, a key measure of blockchain throughput and network activity. Used for network performance analysis, throughput monitoring, and benchmarking blockchain scalability and efficiency. Example: 321 for 321 blocks processed in the hour.
  • TRANSACTION_COUNT (NUMBER): The total number of transactions processed during the hour, regardless of success or failure. Used for transaction volume analysis, network activity monitoring, and understanding user engagement, adoption, and usage patterns. Example: 15420 for 15,420 transactions processed in the hour.
  • TRANSACTION_COUNT_SUCCESS (NUMBER): The number of transactions that completed successfully during the hour. Used for success rate analysis, network reliability monitoring, and measuring transaction processing efficiency. Example: 15200 successful transactions out of 15,420 total. Critical for network health assessment and identifying potential issues or bottlenecks.
  • TRANSACTION_COUNT_FAILED (NUMBER): The number of transactions that encountered errors or failed during the hour. Used for failure rate analysis, error pattern identification, and monitoring transaction processing reliability. Example: 220 failed transactions out of 15,420 total. Critical for spotting network issues and user experience problems.
  • UNIQUE_FROM_COUNT (NUMBER): The number of distinct proposer addresses that submitted transactions during the hour. Used for user activity analysis, network participation monitoring, and understanding engagement patterns. Example: 8500 unique proposer addresses in the hour. Critical for assessing network decentralization, user adoption, and active user communities.
  • TOTAL_FEES_NATIVE (FLOAT): The total transaction fees collected during the hour, denominated in the native currency (FLOW) and decimal adjusted. Used for fee revenue analysis, network economics monitoring, and understanding transaction cost patterns. Example: 1250.75 for 1,250.75 FLOW in total fees. Critical for validator reward calculations and understanding the cost of blockchain operations.
  • TOTAL_FEES_USD (FLOAT): The total transaction fees collected during the hour, converted to their USD equivalent for cross-currency comparison and financial analysis. Used for fee revenue analysis, financial reporting, and understanding the real-world cost of blockchain operations. Example: 1250.50 for $1,250.50 USD in total fees.
  • EZ_CORE_METRICS_HOURLY_ID (TEXT): A surrogate primary key, uniquely generated for each row in the table. Ensures every record is uniquely identifiable even if the source data lacks a natural primary key, and is used for efficient joins, deduplication, and as a reference in downstream models. Essential for maintaining data integrity and supporting dbt uniqueness tests.
  • INSERTED_TIMESTAMP (TIMESTAMP_NTZ): The UTC timestamp when the record was first created and inserted into this table. Used for ETL auditing, tracking data freshness, and identifying when data was loaded or updated in the analytics pipeline. Example: '2023-01-01 12:00:00'. Critical for monitoring data latency, troubleshooting ETL issues, and supporting recency tests in dbt.
  • MODIFIED_TIMESTAMP (TIMESTAMP_NTZ): The UTC timestamp when the record was last updated or modified by an internal ETL or dbt process. Used for change tracking, ETL auditing, and identifying the most recent update to a record. Example: '2023-01-02 15:30:00'. Important for troubleshooting data issues, monitoring pipeline health, and supporting recency or freshness tests in dbt.
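
As a closing example, the block-range and audit columns support simple data-quality checks. The sketch below flags hours whose block range does not continue from the previous hour and reports the latest load and update times; it uses only columns documented above.

    -- Gap check: hours whose block range skips blocks relative to the previous hour.
    SELECT
        block_timestamp_hour,
        LAG(block_number_max) OVER (ORDER BY block_timestamp_hour) AS prev_hour_block_max,
        block_number_min
    FROM flow.stats.ez_core_metrics_hourly
    QUALIFY block_number_min - LAG(block_number_max) OVER (ORDER BY block_timestamp_hour) > 1;

    -- Freshness: most recent load and update times recorded by the pipeline.
    SELECT
        MAX(inserted_timestamp) AS last_insert,
        MAX(modified_timestamp) AS last_update
    FROM flow.stats.ez_core_metrics_hourly;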