<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Systems-Engineering on Corebaseit — POS · EMV · Payments · AI</title><link>https://corebaseit.com/tags/systems-engineering/</link><description>Recent content in Systems-Engineering on Corebaseit — POS · EMV · Payments · AI</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><managingEditor>contact@corebaseit.com (Vincent Bevia)</managingEditor><webMaster>contact@corebaseit.com (Vincent Bevia)</webMaster><lastBuildDate>Sun, 19 Apr 2026 10:00:00 +0100</lastBuildDate><atom:link href="https://corebaseit.com/tags/systems-engineering/index.xml" rel="self" type="application/rss+xml"/><item><title>Edge AI: Why Intelligence Is Moving to the Boundary — and What It Takes to Get There</title><link>https://corebaseit.com/corebaseit_posts/edge-ai-intelligence-at-the-boundary/</link><pubDate>Sun, 19 Apr 2026 10:00:00 +0100</pubDate><author>contact@corebaseit.com (Vincent Bevia)</author><guid>https://corebaseit.com/corebaseit_posts/edge-ai-intelligence-at-the-boundary/</guid><description>&lt;p>There is a quiet architectural shift happening beneath the surface of the AI conversation. While the public discourse fixates on data center GPU clusters and trillion-parameter models, a different engineering problem is gaining urgency: how do you push intelligence out to the edge — to sensors, factory floors, autonomous vehicles, medical devices, and payment terminals — where latency, bandwidth, power, and privacy constraints make the cloud either impractical or unacceptable?&lt;/p>
&lt;p>Edge AI is not future speculation. The IEEE Computer Society&amp;rsquo;s 2026 Technology Predictions report ranks it among the top technologies expected to succeed this year, noting that edge AI will &amp;ldquo;enable privacy-preserving, low-latency, energy-efficient, generative intelligence via small language models on resource-constrained devices, extending AI access to remote settings and extreme environments where continuous connectivity is not guaranteed.&amp;rdquo; That is a precise and honest framing. It also hints at how hard the engineering really is.&lt;/p>
&lt;p>After reading several recent IEEE papers covering distributed intelligence for edge networks, AI chip architectures, edge AI education, 2026 technology predictions, and system-level trustworthiness, I wanted to organize what these pieces collectively reveal about the current state of edge AI — where the real bottlenecks are, what the architecture looks like, and why trust in these systems demands more than model accuracy alone.&lt;/p>
&lt;p>&lt;img src="https://corebaseit.com/diagrams/edge_to_use.png"
loading="lazy"
alt="The Edge AI Revolution: From Cloud Clusters to Local Intelligence — architecture of autonomy, hardware constraints, and the multilayered trust model."
>&lt;/p>
&lt;hr>
&lt;h2 id="the-case-for-edge-latency-privacy-and-the-limits-of-centralization">The Case for Edge: Latency, Privacy, and the Limits of Centralization
&lt;/h2>&lt;p>The traditional cloud model — collect data at the edge, ship it to a centralized cluster, run inference or training, return the result — works well when bandwidth is cheap, latency is tolerable, and privacy is not a binding constraint. In many real-world applications, none of those conditions hold.&lt;/p>
&lt;p>An autonomous vehicle cannot wait 200 milliseconds for a cloud round trip to decide whether the object ahead is a pedestrian. A factory sensor detecting a bearing failure needs a corrective response in single-digit milliseconds, not after a data upload and cloud inference cycle. A medical wearable handling patient vitals cannot stream raw biometric data to an external server without running into regulatory and ethical walls.&lt;/p>
&lt;p>These are not edge cases (no pun intended). They are the &lt;em>default&lt;/em> operating conditions for a growing class of applications — from Industry 4.0 and smart grids to point-of-sale terminals and agricultural monitoring. The European Telecommunications Standards Institute (ETSI) has formalized this direction through the concept of zero-touch network provisioning: the idea that the infrastructure itself should be automated, self-configuring, and capable of operating with minimal or no human intervention. That vision depends entirely on intelligence at the edge.&lt;/p>
&lt;p>The architectural consequence is clear. You cannot centralize everything. But distributing intelligence across heterogeneous, resource-constrained devices introduces an entirely different class of engineering problems.&lt;/p>
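&lt;p>The dispatch decision this implies can be made concrete. The sketch below is illustrative only (the function name and the millisecond figures are ours, not from the cited papers): use the cloud only when the round trip fits the latency budget, and fall back to the local model otherwise.&lt;/p>

```python
def choose_inference_site(deadline_ms, cloud_rtt_ms, cloud_infer_ms, local_infer_ms):
    """Pick where one inference request runs, given a hard deadline.

    All arguments are in milliseconds. Returns "local" or "cloud".
    """
    cloud_total_ms = cloud_rtt_ms + cloud_infer_ms
    if cloud_total_ms > deadline_ms:
        return "local"      # cloud cannot meet the deadline
    if local_infer_ms > deadline_ms:
        return "cloud"      # local model is too slow; cloud still fits
    # Both fit the budget: take the faster path.
    return "local" if cloud_total_ms >= local_infer_ms else "cloud"
```

&lt;p>With the pedestrian example above (a tight budget against a 200 ms round trip), the only viable answer is the local model.&lt;/p>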
&lt;hr>
&lt;h2 id="distributed-ai-and-zero-touch-provisioning-the-architecture">Distributed AI and Zero-Touch Provisioning: The Architecture
&lt;/h2>&lt;p>A research team from TU Wien, University of Oulu, University of Tartu, and the Indian Institute of Information Technology has proposed a framework combining Distributed AI (DAI) with zero-touch provisioning (ZTP) for edge networks. The architecture targets the device–edge–cloud computing continuum and rests on two pillars.&lt;/p>
&lt;p>&lt;strong>Edge intelligence for zero-touch networks.&lt;/strong> Data processing at the local level grants edge devices the ability to independently assess and respond to data without relying on centralized decision making. Distributed decision-making processes reduce latency, optimize network resources, and support real-time responsiveness. Machine learning models deployed at the edge enable predictive maintenance, anomaly detection, and dynamic load balancing — capabilities that let networks function effectively with reduced human involvement.&lt;/p>
&lt;p>&lt;strong>DAI for edge networks.&lt;/strong> DAI facilitates the deployment of AI capabilities to the periphery of network infrastructures. Edge devices equipped with AI models can make real-time decisions, process data locally, and function independently. The key advantage of DAI over centralized edge AI is structural: DAI systems are resilient, flexible, and loosely coupled by definition. They do not require all relevant data to be gathered in a single location. Instead, they work with local subsets of data, preserving privacy and reducing communication costs.&lt;/p>
&lt;p>The comparison between centralized edge AI and ZTP-enabled distributed edge AI is instructive:&lt;/p>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Parameter&lt;/th>
&lt;th>Centralized Edge AI&lt;/th>
&lt;th>Distributed Edge AI (ZTP)&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>&lt;strong>Model&lt;/strong>&lt;/td>
&lt;td>Traditional supervised learning&lt;/td>
&lt;td>Unsupervised and policy-based reinforcement learning&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>Privacy&lt;/strong>&lt;/td>
&lt;td>No privacy guarantees for user data&lt;/td>
&lt;td>Supports privacy and security in data handling&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>Training time&lt;/strong>&lt;/td>
&lt;td>Large-data training exponentially increases time&lt;/td>
&lt;td>Local edge training optimizes time&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>Scalability&lt;/strong>&lt;/td>
&lt;td>Not scalable&lt;/td>
&lt;td>Highly scalable&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>Heterogeneity&lt;/strong>&lt;/td>
&lt;td>Low&lt;/td>
&lt;td>High&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>Automation&lt;/strong>&lt;/td>
&lt;td>Medium&lt;/td>
&lt;td>High&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;p>The framework also introduces &lt;strong>edge resource federation&lt;/strong> — a strategy for pooling edge resources across different providers into a unified platform. When one edge device is overloaded, it can coordinate with nearby underloaded devices or cloud servers to share the workload. Network function virtualization, software-defined networking, containerization, and multiaccess edge computing act as critical enablers for this federation model.&lt;/p>
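&lt;p>A minimal greedy sketch of that federation decision, assuming a simple utilization model (the node names, the capacity unit, and the cloud fallback are illustrative assumptions, not the paper's algorithm):&lt;/p>

```python
def plan_offload(loads, capacity=1.0):
    """Greedy federation sketch: move excess work from overloaded nodes
    to the least-loaded peers, bursting to the cloud when no edge
    headroom remains. loads maps node name to utilization (1.0 = full).
    Mutates loads in place and returns a list of (src, dst, amount).
    """
    transfers = []
    for src in sorted(loads):
        while loads[src] > capacity + 1e-9:
            excess = loads[src] - capacity
            peers = [n for n in loads if n != src]
            dst = min(peers, key=lambda n: loads[n])
            headroom = capacity - loads[dst]
            if headroom > 0:
                moved = min(excess, headroom)
                loads[dst] += moved
            else:
                dst, moved = "cloud", excess    # no edge headroom left
            loads[src] -= moved
            transfers.append((src, dst, round(moved, 3)))
    return transfers
```

&lt;p>A production federation layer would make this decision over MEC, SDN, and NFV primitives rather than an in-memory dict, but the shape of the decision is the same: find headroom nearby first, burst to the cloud last.&lt;/p>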
&lt;p>A concrete example from the Industry 4.0 domain clarifies the stakes. When a machine sensor on a factory floor detects a possible issue, edge AI can recognize it, implement corrective measures, and reduce delay in critical decision making — all locally. A centralized or cloud-based system would require data transmission to a distant server for analysis, potentially causing delays and operational hazards in time-sensitive manufacturing environments.&lt;/p>
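&lt;p>The local loop in that example can be sketched as a streaming detector that never leaves the device. Everything here is an illustrative assumption — the EWMA statistics, the threshold, and the corrective action name are ours, not from the paper:&lt;/p>

```python
class BearingMonitor:
    """On-device anomaly sketch: an exponentially weighted moving
    average plus a deviation threshold. alpha and threshold are
    illustrative, as is the corrective action it returns.
    """
    def __init__(self, alpha=0.2, threshold=3.0):
        self.alpha, self.threshold = alpha, threshold
        self.mean, self.var = 0.0, 1.0

    def update(self, reading):
        # Score the reading against the running baseline first ...
        std = max(self.var ** 0.5, 1e-9)
        anomaly = abs(reading - self.mean) / std > self.threshold
        # ... then fold the reading into the baseline.
        self.mean += self.alpha * (reading - self.mean)
        self.var += self.alpha * ((reading - self.mean) ** 2 - self.var)
        return "throttle_spindle" if anomaly else "ok"
```

&lt;p>The point is latency: the flag is raised in the same function call that ingests the reading, with no upload anywhere in the path.&lt;/p>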
&lt;hr>
&lt;h2 id="the-hardware-problem-edge-ai-chips-and-energy-efficiency">The Hardware Problem: Edge AI Chips and Energy Efficiency
&lt;/h2>&lt;p>You cannot run a transformer model on a coin-cell-powered device with a 500-milliwatt budget using the same architecture that powers a data center GPU. The hardware constraints at the edge are qualitatively different, and they demand a fundamentally different approach to chip design.&lt;/p>
&lt;p>Research from Japan&amp;rsquo;s National Institute of Advanced Industrial Science and Technology (AIST) details the architecture of edge AI chips and why energy efficiency is the defining constraint for cyberphysical systems (CPS) applications such as autonomous driving and factory automation.&lt;/p>
&lt;h3 id="spatial-vs-temporal-architecture">Spatial vs. Temporal Architecture
&lt;/h3>&lt;p>The critical distinction is between the &lt;strong>temporal architecture&lt;/strong> used in GPUs and the &lt;strong>spatial architecture&lt;/strong> (dataflow processing) used in purpose-built AI chips.&lt;/p>
&lt;p>In a GPU&amp;rsquo;s temporal architecture, massive arithmetic logic units (ALUs) read from and write to a shared register file, operating in parallel. This is fast but energy-hungry, because every operation requires central memory access.&lt;/p>
&lt;p>In a spatial architecture, processing elements (PEs) are organized in tiles, each with its own ALU, register file, and control circuit. Data — activations, weights, partial sums — moves directly from one PE to another, reducing memory access energy. Filter weights in a convolutional neural network are reused by storing them in a PE&amp;rsquo;s register file and transferring partial sums between PEs, making computation significantly more energy-efficient.&lt;/p>
&lt;p>This is why spatial architectures are the foundation of edge AI chips: they trade peak throughput for dramatically better performance-per-watt.&lt;/p>
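&lt;p>Functionally, the weight-stationary dataflow described above computes an ordinary convolution; what changes is where operands live. The sketch below models only the arithmetic (a 1-D cross-correlation, as in ML convolutions), with comments marking what each PE would hold locally; it does not model the energy savings themselves.&lt;/p>

```python
def weight_stationary_conv1d(inputs, weights):
    """1-D cross-correlation written the way a weight-stationary PE
    array computes it: weights[k] lives permanently in PE k's register
    file, activations stream past, and the partial sum is handed from
    PE k to PE k+1 instead of bouncing off a shared memory.
    """
    K = len(weights)
    outputs = []
    for t in range(len(inputs) - K + 1):
        psum = 0
        for k in range(K):                      # psum hop: PE k to PE k+1
            psum += weights[k] * inputs[t + k]  # local MAC inside PE k
        outputs.append(psum)
    return outputs
```

&lt;p>In a GPU the same multiply-accumulates would each read from and write to a shared register file; here the filter weights are fetched once and the partial sums travel only one hop.&lt;/p>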
&lt;h3 id="precision-reduction-as-an-energy-multiplier">Precision Reduction as an Energy Multiplier
&lt;/h3>&lt;p>The most effective lever for improving edge AI energy efficiency is reducing computational precision. In cloud training, FP32 or FP16 is standard. For edge inference, the picture looks very different:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>INT8 quantization&lt;/strong> reduces energy to roughly 1/30th (for addition) and 1/19th (for multiplication) compared to FP32, with less than 3% accuracy degradation on image recognition tasks.&lt;/li>
&lt;li>&lt;strong>INT4&lt;/strong> formats push efficiency further for inference workloads.&lt;/li>
&lt;li>&lt;strong>Binarized Neural Networks (BNNs)&lt;/strong> replace multipliers with XNOR gates and accumulators with population counters, achieving extraordinary efficiency. Intel has demonstrated a BNN accelerator reaching &lt;strong>617 TOPS/W&lt;/strong> — orders of magnitude beyond conventional architectures.&lt;/li>
&lt;/ul>
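&lt;p>The core arithmetic behind the INT8 bullet is small enough to sketch. This is symmetric per-tensor quantization on plain Python lists; real toolchains such as TensorFlow Lite add calibration, zero points, and per-channel scales on top of this step.&lt;/p>

```python
def quantize_int8(x, scale=None):
    """Symmetric per-tensor INT8 quantization: q = round(v / scale),
    clipped to [-127, 127]. The scale defaults to max|x| / 127.
    """
    if scale is None:
        scale = max(abs(v) for v in x) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in x]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the INT8 values."""
    return [v * scale for v in q]
```

&lt;p>The energy savings come from executing the multiply-accumulates on the 8-bit integers, touching the floating-point scale only once per tensor.&lt;/p>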
&lt;p>The trade-off with BNNs is accuracy: on simple tasks like CIFAR-10, accuracy drops by only ~1% from FP32. On complex tasks like ImageNet, it worsens by ~16%. The practical solution is &lt;strong>mixed-precision computation&lt;/strong>, optimizing the bit width at each layer of the network to balance accuracy and efficiency.&lt;/p>
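&lt;p>The XNOR-plus-popcount trick behind BNNs is also worth seeing in code. With weights and activations packed as bits (1 for +1, 0 for -1), the signed dot product needs no multiplier at all; the sketch below is a software model of the gate-level idea, not any vendor's implementation.&lt;/p>

```python
def bnn_dot(a_bits, b_bits, n):
    """Dot product of two n-element sign vectors packed as bits
    (bit 1 means +1, bit 0 means -1). XNOR marks positions where the
    signs agree; popcount counts them; matches minus mismatches is
    2 * matches - n. No multiplier is involved.
    """
    mask = 2 ** n - 1                     # n ones
    xnor = (a_bits ^ b_bits) ^ mask       # 1 where the signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n
```

&lt;p>In silicon the XNOR and the population count are a handful of gates per bit, which is how accelerators reach hundreds of TOPS/W on binarized workloads.&lt;/p>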
&lt;p>The 2026 IEEE Technology Predictions report reinforces this trajectory. Prediction #22 (New Processors) calls for &amp;ldquo;three orders of magnitude performance improvement and three orders of magnitude power consumption reduction&amp;rdquo; through new technologies and full 3D architectures with AI-based design strategies. Prediction #17 (In-Memory Computing) highlights analog in-memory computing as a way to bring computation directly into memory arrays, &amp;ldquo;dramatically reducing data movement, the dominant source of power and latency in today&amp;rsquo;s AI systems.&amp;rdquo;&lt;/p>
&lt;p>These are not incremental improvements. They represent a fundamental rearchitecting of the compute substrate to match the constraints of the edge.&lt;/p>
&lt;hr>
&lt;h2 id="teaching-the-edge-hardwaresoftware-co-design">Teaching the Edge: Hardware–Software Co-Design
&lt;/h2>&lt;p>One of the less discussed challenges is the talent pipeline. Edge AI requires a blend of skills — hardware awareness, software optimization, systems thinking — that most computer science curricula do not yet teach as an integrated discipline.&lt;/p>
&lt;p>A team at the University of Texas at Austin has developed an undergraduate edge AI course built around a hardware–software co-design approach. Students work directly with physical edge devices (Raspberry Pi 3B+ and Odroid MC1 clusters), performing real-time power, latency, and temperature measurements while training, deploying, and optimizing neural network models.&lt;/p>
&lt;p>The course architecture mirrors the real engineering workflow:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Train&lt;/strong> models in the cloud (on GPU clusters from the Texas Advanced Computing Center).&lt;/li>
&lt;li>&lt;strong>Deploy&lt;/strong> models on edge devices.&lt;/li>
&lt;li>&lt;strong>Measure&lt;/strong> the cyberphysical impact — power consumption, latency, thermal behavior.&lt;/li>
&lt;li>&lt;strong>Optimize&lt;/strong> using pruning and quantization.&lt;/li>
&lt;li>&lt;strong>Redeploy&lt;/strong> and remeasure until convergence.&lt;/li>
&lt;/ol>
&lt;p>Students work with both PyTorch/ONNX and TensorFlow/TensorFlow Lite stacks, gaining cross-framework fluency. The course culminates in a competition (the &amp;ldquo;Game of Compressions&amp;rdquo;) where teams optimize models for lowest latency, lowest energy, or best figure of merit (accuracy divided by the product of latency and energy).&lt;/p>
&lt;p>The results are encouraging: across 54 student teams over three semesters, the best FoM result achieved 75.8% accuracy on CIFAR-10 with an average latency of 0.72 ms per image and average energy of 8.51 mJ per image. All teams successfully deployed pruned and quantized models on edge devices.&lt;/p>
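&lt;p>The figure of merit is easy to sanity-check against the reported numbers. Units follow the course report (percent accuracy, ms and mJ per image), so the resulting FoM value is only meaningful for comparison within the competition:&lt;/p>

```python
def figure_of_merit(accuracy, latency_ms, energy_mj):
    """Course FoM: accuracy / (latency x energy); higher is better."""
    return accuracy / (latency_ms * energy_mj)

# Best reported result: 75.8% accuracy, 0.72 ms and 8.51 mJ per image.
best_fom = figure_of_merit(75.8, 0.72, 8.51)   # roughly 12.4
```

&lt;p>Dividing by the latency-energy product means a team cannot buy accuracy with an uncompressed model; it has to win on all three axes at once.&lt;/p>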
&lt;p>This kind of hands-on, co-design education is exactly what the field needs. Edge AI is not a software-only or hardware-only problem. It is a &lt;strong>systems problem&lt;/strong>, and the people building these systems need to understand both sides of the stack — and the interactions between them.&lt;/p>
&lt;hr>
&lt;h2 id="trustworthiness-the-system-level-problem-that-edge-makes-harder">Trustworthiness: The System-Level Problem That Edge Makes Harder
&lt;/h2>&lt;p>Here is where the conversation gets uncomfortable. Edge AI amplifies every dimension of the trustworthiness challenge.&lt;/p>
&lt;p>A recent IEEE paper by Vieira makes a compelling case that the AI community has a &lt;strong>trustworthy AI misconception&lt;/strong>: the assumption that if the model is fair, robust, and explainable, then the system is trustworthy. That assumption is wrong for cloud-deployed AI. It is catastrophically wrong for edge AI.&lt;/p>
&lt;h3 id="why-model-level-trust-is-insufficient">Why Model-Level Trust Is Insufficient
&lt;/h3>&lt;p>Trust is a property of the entire system, not just of one component. An AI model depends on the data it receives, the infrastructure in which it operates, and the mechanisms through which its decisions are implemented. A well-designed and explainable AI model may still produce harmful outcomes if the data pipeline is flawed, the storage system is insecure, or the decision-making process lacks human oversight.&lt;/p>
&lt;p>The real-world evidence is damning:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Amazon&amp;rsquo;s AI recruiting tool&lt;/strong> penalized applications from women despite attempts to remove gender bias — because the historical hiring data was structurally biased.&lt;/li>
&lt;li>&lt;strong>Google Health&amp;rsquo;s diabetic retinopathy screening&lt;/strong> had high diagnostic accuracy in the lab but failed in deployment because nurses had to manually upload high-quality images under strict standards that real clinics could not consistently meet.&lt;/li>
&lt;li>&lt;strong>The Epic Sepsis Model&lt;/strong> was widely adopted but poorly calibrated to individual hospital populations, generating high false-positive rates and missing true sepsis cases — overwhelming clinicians with alerts and leading to delayed treatment.&lt;/li>
&lt;li>&lt;strong>Waymo&amp;rsquo;s vehicle routing bug&lt;/strong> showed that even when the perception system correctly identified obstacles, a failure in the integration between AI perception and the route planner led to indecision, requiring remote human assistance.&lt;/li>
&lt;/ul>
&lt;p>In every case, the AI model was not the primary point of failure. The failure emerged from the system surrounding the model: data pipelines, infrastructure dependencies, human–system interaction design, or governance gaps.&lt;/p>
&lt;h3 id="the-edge-amplification-effect">The Edge Amplification Effect
&lt;/h3>&lt;p>Now consider what happens when you push these systems to the edge:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Data pipelines&lt;/strong> are more fragile — intermittent connectivity, heterogeneous sensors, constrained local storage.&lt;/li>
&lt;li>&lt;strong>Infrastructure&lt;/strong> is more diverse — different device manufacturers, operating systems, thermal environments, power profiles.&lt;/li>
&lt;li>&lt;strong>Human oversight&lt;/strong> is harder — edge systems are designed to operate autonomously, often in environments where human monitoring is minimal or absent.&lt;/li>
&lt;li>&lt;strong>Governance&lt;/strong> is more complex — edge deployments span jurisdictions, regulatory frameworks, and organizational boundaries.&lt;/li>
&lt;/ul>
&lt;p>The ZTP framework explicitly acknowledges these challenges. Cascading failures at the edge can propagate upward, and ZTP has no built-in mechanism to control such cascades. Anomaly detection in ZTP does not yet cover the full computing continuum. Security across autonomous systems running with no human intervention is inherently more difficult.&lt;/p>
&lt;p>Vieira proposes a &lt;strong>multilayered trust model&lt;/strong> that extends beyond the AI/ML component to encompass:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Data trustworthiness&lt;/strong> — validity, absence of bias, security throughout the data lifecycle.&lt;/li>
&lt;li>&lt;strong>Infrastructure trustworthiness&lt;/strong> — resilient deployment, continuous monitoring, graceful failure recovery.&lt;/li>
&lt;li>&lt;strong>Human–system trustworthiness&lt;/strong> — usability, interpretability, governance features ensuring users understand and control AI-assisted decisions.&lt;/li>
&lt;li>&lt;strong>Regulatory and ethical trustworthiness&lt;/strong> — legal compliance, transparency, accountability mechanisms.&lt;/li>
&lt;/ol>
&lt;p>For edge AI systems, all four layers must be engineered deliberately. Assuming that a technically accurate model will produce trustworthy outcomes in a distributed, heterogeneous, partially autonomous environment is a systemic risk.&lt;/p>
&lt;hr>
&lt;h2 id="what-comes-next-research-directions-and-open-problems">What Comes Next: Research Directions and Open Problems
&lt;/h2>&lt;p>The literature converges on several urgent research directions for edge AI:&lt;/p>
&lt;p>&lt;strong>Lightweight AI/ML.&lt;/strong> Resource-constrained edge nodes need algorithms that minimize both resource usage and computation time without affecting prediction accuracy. Model compression, knowledge distillation, and novel architectures designed for constrained environments remain active research areas.&lt;/p>
&lt;p>&lt;strong>Privacy-preserving intelligence.&lt;/strong> Federated learning and differential privacy techniques are essential for training models across distributed edge devices without centralizing sensitive data. The privacy challenge at the edge is not theoretical — it is a regulatory requirement in medical, financial, and personal-data domains.&lt;/p>
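&lt;p>The core of federated learning — one FedAvg-style aggregation round — fits in a few lines. This sketch assumes flat weight vectors and omits everything that makes the problem hard in practice (secure aggregation, differential-privacy noise, stragglers, non-IID drift):&lt;/p>

```python
def federated_average(client_updates):
    """One FedAvg-style round. client_updates is a list of
    (weight_vector, n_examples) pairs, one per edge device; the
    coordinator sees only these vectors, never the raw data.
    Returns the example-weighted mean of the weight vectors.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]
```

&lt;p>The privacy property lives in what is &lt;em>not&lt;/em> in the signature: raw sensor or patient data never reaches the coordinator, only model updates, which differential-privacy mechanisms can protect further.&lt;/p>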
&lt;p>&lt;strong>Semantic interoperability.&lt;/strong> The computing continuum interconnects devices that are heterogeneous in technologies, standards, and data formats. Bridging the interoperability gap with intelligent protocols is necessary before ZTP can scale to the full continuum.&lt;/p>
&lt;p>&lt;strong>Explainability and causality.&lt;/strong> ZTP will autonomously select configuration states for large distributed systems. Developing sidecar tools that can explain &lt;em>why&lt;/em> a specific configuration was selected — using causal reasoning rather than post-hoc correlation — is essential for auditability and trust.&lt;/p>
&lt;p>&lt;strong>Generative AI at the edge.&lt;/strong> The 2026 predictions identify edge deployment of small language models as a near-term reality. But tracing the accuracy of generative AI decisions on the fly, and identifying which computing nodes can actually perform generative inference within the continuum, remain open challenges.&lt;/p>
&lt;p>&lt;strong>System-level assurance.&lt;/strong> Moving beyond model-centric assessment to develop evaluation methodologies encompassing data integrity, infrastructure dependability, human–AI interaction, and governance transparency. This includes trustworthiness maturity models, assurance case approaches adapted from safety-critical domains, and risk propagation modeling across subsystems.&lt;/p>
&lt;hr>
&lt;h2 id="the-bottom-line">The Bottom Line
&lt;/h2>&lt;p>Edge AI is not about shrinking a cloud model to fit on a small device. It is about redesigning the entire stack — hardware, software, networking, governance, and trust — for an operating environment where latency is measured in milliseconds, power in milliwatts, connectivity in intermittent bursts, and human oversight in occasional remote glances.&lt;/p>
&lt;p>The hardware is evolving: spatial architectures, mixed-precision compute, BNNs, in-memory computing, and new processor paradigms are closing the efficiency gap. The software is adapting: ZTP, edge federation, DAI, and federated learning are providing the distributed intelligence frameworks. The educational pipeline is catching up: co-design curricula are producing engineers who understand both sides of the stack.&lt;/p>
&lt;p>But the trust problem remains the hardest. Every system-level failure documented in cloud-deployed AI — biased data, fragile infrastructure, inadequate human oversight, governance gaps — is amplified at the edge. Building trustworthy edge AI systems requires treating trust as a multilayered, system-wide engineering discipline, not a model-level checkbox.&lt;/p>
&lt;p>The edge is where AI meets the physical world. Getting it right matters more than getting it fast.&lt;/p>
&lt;p>&lt;img src="https://corebaseit.com/diagrams/jensen.png"
loading="lazy"
alt="The Edge AI Revolution: From Cloud Clusters to Local Intelligence — architecture of autonomy, hardware constraints, and the multilayered trust model."
>&lt;/p>
&lt;hr>
&lt;h2 id="references">References
&lt;/h2>&lt;ol>
&lt;li>A. Hazra, A. Morichetta, I. Murturi, L. Lovén, C. K. Dehury, V. C. Pujol, P. K. Donta, and S. Dustdar, &amp;ldquo;Distributed AI in Zero-Touch Provisioning for Edge Networks: Challenges and Research Directions,&amp;rdquo; &lt;em>IEEE Computer&lt;/em>, vol. 57, no. 3, pp. 69–78, Mar. 2024, doi: 10.1109/MC.2023.3334913.&lt;/li>
&lt;li>H. Fuketa and K. Uchiyama, &amp;ldquo;Edge Artificial Intelligence Chips for the Cyberphysical Systems Era,&amp;rdquo; &lt;em>IEEE Computer&lt;/em>, vol. 54, no. 1, pp. 84–88, Jan. 2021, doi: 10.1109/MC.2020.3034951.&lt;/li>
&lt;li>A.-J. Farcas and R. Marculescu, &amp;ldquo;Teaching Edge AI at the Undergraduate Level: A Hardware–Software Co-Design Approach,&amp;rdquo; &lt;em>IEEE Computer&lt;/em>, vol. 56, no. 11, pp. 30–38, Nov. 2023, doi: 10.1109/MC.2023.3295755.&lt;/li>
&lt;li>C. Ebert, I. El Hajj, E. Frachtenberg, A. Lysko, D. Milojicic, R. Saint Nom, S. Sinha, and J. Toro, &amp;ldquo;Technology Predictions 2026,&amp;rdquo; &lt;em>IEEE Computer&lt;/em>, vol. 59, no. 4, pp. 172–181, Apr. 2026, doi: 10.1109/MC.2026.3660461.&lt;/li>
&lt;li>M. Vieira, &amp;ldquo;Why We Should Trust Systems, Not Just Their AI/ML Components,&amp;rdquo; &lt;em>IEEE Computer&lt;/em>, vol. 58, no. 11, pp. 84–94, Nov. 2025, doi: 10.1109/MC.2025.3604335.&lt;/li>
&lt;li>V. Sze, Y. Chen, T. Yang, and J. S. Emer, &amp;ldquo;Efficient Processing of Deep Neural Networks: A Tutorial and Survey,&amp;rdquo; &lt;em>Proceedings of the IEEE&lt;/em>, vol. 105, no. 12, pp. 2297–2329, 2017, doi: 10.1109/JPROC.2017.2761740.&lt;/li>
&lt;li>J. Gallego-Madrid, R. Sanchez-Iborra, P. M. Ruiz, and A. F. Skarmeta, &amp;ldquo;Machine Learning-Based Zero-Touch Network and Service Management: A Survey,&amp;rdquo; &lt;em>Digital Communications and Networks&lt;/em>, vol. 8, no. 2, pp. 105–123, Apr. 2022, doi: 10.1016/j.dcan.2021.09.001.&lt;/li>
&lt;li>S. Han, H. Mao, and W. J. Dally, &amp;ldquo;Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding,&amp;rdquo; arXiv:1510.00149, Oct. 2015.&lt;/li>
&lt;/ol></description></item></channel></rss>