Enterprise AI Analysis: A Real-time Data Collection Approach for 6G AI-native Networks

Executive Impact Summary

The Problem

The rapid evolution of 6G AI-native networks demands high-quality, real-time data for intelligent operations. Current data collection methods, like Deep Packet Inspection (DPI), are resource-intensive, introduce significant latency, and operate asynchronously with network functions, hindering the full potential of AI integration.

Our Solution

This paper introduces a novel parallel data collection architecture integrated directly into the bitstream processing path of wireless communication networks. By deploying specialized data probes and a comprehensive data support system on a Kubernetes-based OpenAirInterface5G (OAI) and Open5GS testbed, real-time network and system status data is captured continuously and efficiently. This approach seamlessly integrates heterogeneous data and automates support for AI model training and decision-making.

The Impact

The proposed architecture significantly enhances network intelligence by enabling continuous, low-latency data acquisition without incurring additional resource consumption. Experimental results demonstrate a 32.7% reduction in CPU and 31.6% in memory usage, alongside a remarkable 78.4% decrease in data collection latency compared to traditional methods. This allows for autonomous network management, real-time traffic prediction, and robust AI model performance in dynamic 6G environments.

32.7% CPU Resource Reduction
31.6% Memory Resource Reduction
78.4% Data Collection Latency Reduction vs. Wireshark

Deep Analysis & Enterprise Applications

The following sections expand on the specific findings of the research across five topic areas:

AI-Native Networks
Real-time Data Collection
System Status Monitoring
Data Support System
System Evaluation

The Vision of AI-Native 6G

Sixth Generation (6G) communication networks are designed with Artificial Intelligence (AI) embedded at the core of the architecture. This shifts AI from an external tool to an integral network component capable of autonomous learning and self-decision-making. Such deep integration requires real-time, structured data feeds for continuous operation and optimization.

Unlike traditional networks, AI-native networks build on Software-Defined Networking (SDN) and Network Function Virtualization (NFV) to deploy network entities on general-purpose hardware, increasingly as containerized, microservices-based workloads orchestrated by platforms such as Kubernetes. This softwarization closes the gap between network function development and AI model deployment.

Proposed Parallel Data Collection Architecture

Traditional network functions (NFs) parse bitstream headers, process payloads, and then discard much of the control data. Conventional methods rely on external tools like Deep Packet Inspection (DPI) to recapture and analyze this discarded data, leading to additional overhead and asynchrony. Our proposed solution introduces a parallel architecture where a dedicated data collection thread operates simultaneously with bitstream processing, capturing crucial control information and Key Performance Indicators (KPIs) in real-time. This eliminates redundant parsing and minimizes resource consumption.

32.7% CPU Resource Reduction
78.4% Average Delay Reduction vs. Wireshark

CN Data Collection Process

NF processing thread:
1. Receive a request from a consumer NF
2. Parse the request and create metadata
3. Invoke sub-functions (process the request)
4. Update the response and append the collected data to the shared buffer
5. Return the response

Parallel data collection thread:
1. Read new data from the buffer
2. Parse it to JSON
3. Write it to file

RAN Data Collection Process

NF processing thread:
1. Receive the incoming bitstream
2. Parse the header, layer by layer
3. Process the bitstream
4. Append the header fields, processing results, and a timestamp to the shared buffer
5. Send the processed bitstream to its destination

Parallel data collection thread:
1. Read new data from the buffer
2. Parse it to JSON
3. Write it to file
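To make the parallel pattern concrete, the sketch below mirrors the two flows above in Python: the NF processing path appends the already-parsed header fields, results, and a timestamp to a shared buffer, while a separate collection thread drains the buffer, serializes records to JSON, and writes them to local storage. This is an illustrative producer/consumer sketch only; the buffer size, record schema, and output file are assumptions, and the paper's actual probes are embedded in the OAI and Open5GS code paths rather than written in Python.

```python
import json
import queue
import threading
import time

# Shared buffer between the NF processing thread (producer) and the
# data collection thread (consumer). Size and schema are assumptions.
record_buffer: "queue.Queue[dict]" = queue.Queue(maxsize=10_000)


def process_bitstream(bitstream: bytes) -> bytes:
    """NF processing path: parse the header, process the payload, and append
    the already-parsed control data to the buffer instead of discarding it."""
    header = {"length": len(bitstream)}      # placeholder for parsed header fields
    payload = bitstream                      # placeholder for layer-by-layer processing
    record_buffer.put({
        "timestamp": time.time(),
        "header": header,                    # assumed record schema
        "kpi": {"bytes": len(payload)},      # processing results / KPIs (assumed)
    })
    return payload                           # forwarded to the destination


def collection_thread(path: str = "nf_records.jsonl") -> None:
    """Parallel collection thread: read new records, parse to JSON,
    and write them to local storage for the data support system."""
    with open(path, "a", encoding="utf-8") as f:
        while True:
            record = record_buffer.get()
            f.write(json.dumps(record) + "\n")
            f.flush()


threading.Thread(target=collection_thread, daemon=True).start()
process_bitstream(b"\x01\x02\x03")  # example invocation
time.sleep(0.5)                      # let the collection thread drain the buffer in this demo
```

Because the header has already been parsed once inside the NF, the collection thread never has to re-inspect the bitstream, which is where the resource and latency savings over DPI-style capture come from.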

Comparison: Proposed vs. Traditional Data Collection

Integration
  • Proposed parallel architecture: directly embedded in the NF processing pipeline
  • Traditional DPI/Wireshark: external, out-of-band tool
Resource Overhead
  • Proposed: minimal; shares the existing processing context
  • Traditional: significant CPU, memory, and time overhead
Real-time Capability
  • Proposed: synchronous, continuous data capture
  • Traditional: asynchronous, post-processing
Data Granularity
  • Proposed: configurable; metadata for non-real-time functions, key data for real-time functions
  • Traditional: full packet capture, often too verbose
AI Model Support
  • Proposed: structured output designed for direct AI model consumption
  • Traditional: requires pre-processing and feature engineering before AI use
Deployment
  • Proposed: containerized, Kubernetes-friendly
  • Traditional: separate deployment, less integrated with NFV

Kubernetes & Docker for OAM

Our testbed utilizes Docker and Kubernetes as core Operation, Administration, and Maintenance (OAM) components. Kubernetes orchestrates containers, providing runtime environments, resource allocation, and lifecycle management for Network Functions (NFs). Key Linux kernel features, cgroups and namespaces, are leveraged to ensure resource isolation and provide detailed process information.

cAdvisor plays a crucial role: it interacts directly with cgroups to obtain container resource usage (CPU, memory, network) and retrieves container status through the Docker and Kubernetes APIs. This monitoring provides the system-level insight needed to maintain the health and performance of an AI-native network.
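As a rough illustration of where cAdvisor's numbers originate, the sketch below reads per-container CPU and memory figures straight from the kernel's cgroup v2 accounting files. The cgroup path shown is a hypothetical example that depends on the node's container runtime and cgroup driver; on the testbed this bookkeeping is handled by cAdvisor itself rather than by custom readers.

```python
from pathlib import Path

# Hypothetical cgroup v2 location; the real per-pod path depends on the
# container runtime and the Kubernetes cgroup driver in use.
CGROUP_DIR = Path("/sys/fs/cgroup/kubepods.slice")


def read_cpu_usage_usec(cgroup: Path) -> int:
    """Total CPU time consumed by the cgroup, taken from cpu.stat (microseconds)."""
    for line in (cgroup / "cpu.stat").read_text().splitlines():
        key, value = line.split()
        if key == "usage_usec":
            return int(value)
    raise ValueError("usage_usec not found in cpu.stat")


def read_memory_bytes(cgroup: Path) -> int:
    """Current memory usage of the cgroup, taken from memory.current (bytes)."""
    return int((cgroup / "memory.current").read_text().strip())


if __name__ == "__main__":
    print("cpu usage (us):", read_cpu_usage_usec(CGROUP_DIR))
    print("memory (bytes):", read_memory_bytes(CGROUP_DIR))
```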

Prometheus-based Data Aggregation

To overcome the challenge of scattered and diverse data across various NFs, a robust data aggregation and storage framework based on Prometheus is implemented. Prometheus periodically collects data from NF data exposure interfaces and cAdvisor. Raw data from local storage is transformed into queryable time-series data, suitable for AI model consumption.

A dedicated data service interface, powered by Prometheus Query Language (PromQL), allows AI models to retrieve aggregated, multi-level data via API. This system automates the data supply chain, making data readily available and properly formatted for continuous AI model training and decision-making, significantly streamlining AI integration.
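A minimal sketch of how an AI model might consume this data service is shown below: it pulls a time series through Prometheus's standard HTTP query-range API using a PromQL expression over the cAdvisor container metrics. The Prometheus address and the namespace label are assumptions for illustration; NF-specific KPIs exposed by the data probes would use whatever metric names the testbed's exporters define.

```python
import time
import requests  # third-party: pip install requests

PROMETHEUS_URL = "http://prometheus.monitoring:9090"  # assumed in-cluster address


def query_range(promql: str, minutes: int = 10, step: str = "15s") -> list:
    """Fetch a time series for the last `minutes` via Prometheus's
    /api/v1/query_range HTTP endpoint."""
    end = time.time()
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query_range",
        params={"query": promql, "start": end - 60 * minutes, "end": end, "step": step},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]


# Example: per-container CPU rate from cAdvisor metrics; the namespace label is assumed.
series = query_range('rate(container_cpu_usage_seconds_total{namespace="open5gs"}[1m])')
for ts in series:
    print(ts["metric"].get("pod"), ts["values"][-1])
```

The same query interface can feed feature vectors directly into model training or inference, which is what removes the manual data-wrangling step from the AI workflow.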

Network Traffic Prediction Case Study

A 6G communication testbed, built on Kubernetes with Open5GS as the core network (CN) and OpenAirInterface5G as the base station (BS) and user equipment (UE), was used to validate the proposed data collection approach. An AI model based on a pre-trained TabNet regressor was deployed on a CN node to predict end-to-end traffic.

Traffic was generated between two UEs using iperf3 with random bitrates. The AI model requested real-time network and system status data from the proposed data support system every minute. The model identified correlations between various KPIs and the end-to-end traffic, demonstrating that it can support intelligent capabilities such as real-time traffic prediction in AI-native networks without impacting User Plane Function (UPF) operation. This validates the effectiveness of the data collection approach for autonomous network operations.
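The sketch below outlines what the model-side loop from this case study could look like using the pytorch-tabnet package: fetch the latest KPIs once per minute and feed them to a pre-trained TabNet regressor. The feature list, the fetch_kpis() helper, and the checkpoint filename are illustrative placeholders, not the paper's actual pipeline.

```python
import time
import numpy as np
from pytorch_tabnet.tab_model import TabNetRegressor  # pip install pytorch-tabnet

# Hypothetical feature set assembled from the data support system's API.
FEATURES = ["ul_bytes", "dl_bytes", "cpu_usage", "mem_usage", "active_ues"]


def fetch_kpis() -> np.ndarray:
    """Placeholder: pull the latest KPIs (e.g., via the PromQL interface above)
    and return them as a single feature row."""
    return np.zeros((1, len(FEATURES)), dtype=np.float32)


model = TabNetRegressor()
model.load_model("tabnet_traffic.zip")  # assumed pre-trained checkpoint

while True:
    x = fetch_kpis()
    predicted_mbps = float(model.predict(x)[0][0])  # end-to-end traffic estimate
    print(f"predicted end-to-end traffic: {predicted_mbps:.2f} Mbps")
    time.sleep(60)  # the case study's model polls the data support system every minute
```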


Your AI Implementation Roadmap

A structured approach to integrating AI-native data collection into your enterprise network.

Phase 1: Discovery & Strategy

Conduct a thorough analysis of current network infrastructure and data collection practices. Define clear AI integration goals, identify key performance indicators (KPIs) for 6G AI-native networks, and outline the target architecture for real-time data acquisition and processing. This includes assessing compatibility with existing SDN/NFV deployments.

Phase 2: Data Engineering & Integration

Implement the proposed parallel data collection architecture by deploying data probes within Network Functions (NFs) across both CN and RAN. Configure the Prometheus-based data support system for aggregation and time-series storage. Develop robust data pipelines to ensure real-time data flow and quality assurance, making heterogeneous data accessible for AI models.

Phase 3: Model Development & Training

Select and customize AI models (e.g., TabNet for traffic prediction) that match the network intelligence objectives identified in Phase 1. Use the collected real-time, structured data for continuous model training and validation. Establish an MLOps framework to manage the lifecycle of AI models, ensuring optimal performance and adaptability within the 6G environment.

Phase 4: Pilot Deployment & Iteration

Deploy the AI-driven data collection and analysis solution in a controlled pilot environment, such as a Kubernetes-based testbed with OAI/Open5GS. Monitor system performance, resource consumption, and AI model accuracy. Gather feedback, identify areas for optimization, and iterate on the architecture and models to refine effectiveness.

Phase 5: Full-Scale Integration & Optimization

Scale the AI-native data collection and analysis platform across your entire enterprise 6G network. Implement continuous learning mechanisms for AI models, ensuring they adapt to evolving network conditions and traffic patterns. Establish ongoing monitoring and maintenance protocols to sustain peak performance and leverage the full potential of AI for autonomous network management.

Ready to Build Your AI-Native Future?

Our experts are ready to help you implement a real-time data collection strategy for your 6G AI-native network. Schedule a free consultation.
