Mathematical Machines and Systems. 2026 #1

ABSTRACTS


COMPUTER SYSTEMS

UDC 623.764

Sapaty P.S. Spatial management of multidimensional international world. Mathematical machines and systems. 2026. N 1. P. 3–14.

The concept of «multidimensional international world» refers to understanding the world through its multiple dimensions beyond traditional economic or political measures, by fostering cross-cultural collaboration and balancing global integration with local needs. The paper first analyzes the multidimensional quality of human society, including international relations, and reviews publications on different world dimensions like political, economic, security, legal, cultural, social, and technological. It briefly describes the developed high-level Spatial Grasp Model and Technology (SGT), which can investigate and manage complex systems with a holistic spatial approach, effectively covering different physical and virtual dimensions and their integration. The paper describes the basics of a multidimensional management system that is currently under development and provides examples of practical solutions in different dimensions and their combinations in the Spatial Grasp Language (SGL), the key element of SGT, with networking representations of the solutions. These include interdimensional influence and optimization, finding multidimensional communities, demonstrating multidimensional recovery after disasters, finding dangerous multidimensional groupings and isolated multidimensional components, as well as the interaction of physical and virtual dimensions. Using the wavelike spatial quality of SGL, the results of multidimensional operations obtained in one dimension are then explicitly delivered to other dimensions, or the whole process propagates inseparably across dimensions. Different versions of SGT have been tested on numerous applications, and its latest version, especially suitable for multidimensional management, can be quickly implemented on any existing platform and integrated with advanced communication systems. Figs.: 15. Refs.: 46 titles.


UDC 004.056.5

The concept of the Industrial Internet of Things (IIoT) was first mentioned in 2011 as part of Industry 4.0 at an international exhibition in Hannover. It resulted from the work of German scientists who combined a large number of sensors and controllers connected to a network to achieve a specific production result. This concept combines the power of a global information network with traditional industry. Modern manufacturing enterprises are increasingly integrating digital technologies, transforming into complex systems that combine physical and cybernetic components. This increases production efficiency but at the same time creates new risks because IIoT systems are much more vulnerable to cyberattacks than conventional computer networks. This is due to the following factors: complexity and scale, diversity of protocols, outdated equipment, insufficient attention to security, the lack of qualified specialists, and the absence of a single security standard. The consequences of cyberattacks on IIoT systems can be catastrophic and result in production stoppages, equipment damage, leaks of confidential information about production processes, technologies, etc., and reputational damage. To achieve their goals, attackers use the following methods: password guessing attacks, man-in-the-middle attacks, identifier spoofing, use of unprotected ports and protocols, search for software vulnerabilities, use of computer viruses, and denial-of-service attacks. Targeted and distributed attacks can render the IIoT system inaccessible, which in turn will lead to production stoppages. A comprehensive approach to protecting IIoT systems is proposed and its measures are described. A general architecture for protecting IIoT systems is presented, combining three hardware protection methods (PUF, TRM, HSM) to create multi-layered data protection. Each of these methods plays a unique role, complementing the others and ensuring a high level of cybersecurity. Refs.: 5 titles.
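As a minimal illustration of the hardware layer of such an architecture, the Python sketch below simulates PUF-based challenge-response authentication of an IIoT device. The class and function names are hypothetical, and a real PUF derives its responses from device-unique silicon variation rather than a stored secret; this sketch is not the article's actual protection scheme.

    import hmac, hashlib, os

    class SimulatedPUF:
        # Stand-in for a hardware PUF: a real PUF derives responses from
        # physical silicon variation; here a stored secret plays that role.
        def __init__(self, device_secret: bytes):
            self._secret = device_secret

        def response(self, challenge: bytes) -> bytes:
            return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    def enroll(puf: SimulatedPUF, n_pairs: int = 8) -> dict:
        # During a trusted enrollment phase, the verifier records challenge-response pairs.
        return {c: puf.response(c) for c in (os.urandom(16) for _ in range(n_pairs))}

    def authenticate(puf: SimulatedPUF, crp_table: dict) -> bool:
        # Later, the verifier spends one unused challenge and checks the device's answer.
        challenge, expected = crp_table.popitem()
        return hmac.compare_digest(puf.response(challenge), expected)

    device = SimulatedPUF(os.urandom(32))
    table = enroll(device)
    print("device authenticated:", authenticate(device, table))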


       
      INFORMATION AND TELECOMMUNICATION TECHNOLOGY

 
UDC 004.4`2

A significant slowdown in processor performance growth is associated with approaching the physical limits of reducing transistor size. This has led to a situation where traditional ways of increasing performance (raising the clock frequency or the number of cores) no longer fully deliver performance gains. In modern processors, SIMD (Single Instruction, Multiple Data) instruction-set extensions play an important role in increasing computing performance. They allow a processor to perform one operation on multiple data streams simultaneously on a single core. In such a situation, parallel processing at the data level, implemented using SIMD instructions, becomes key. Directing the compilation process with special directives, options, and proper data organization makes it possible to obtain optimized, vectorized code without explicitly using SIMD instructions in the program source code. However, the effectiveness of such mechanisms largely depends on a number of factors: the processor architectural features, the compiler version, the source code quality, and the state of the optimizer. The article analyzes some existing methods and tools for vectorizing code using SIMD instructions and proposes an approach to automating vectorization verification by inserting labels into source code, detecting section boundaries in compiled code, analyzing instructions within sections, and generating a report with a link to the source code. A comparative analysis of existing vectorization tools and the developed prototype is conducted. The proposed approach allows for automating the verification of code vectorization with line-level accuracy and works even with aggressive optimization, including function inlining. Tabl.: 1. Figs.: 6. Refs.: 12 titles.
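The sketch below illustrates one step of this kind of approach: scanning objdump-style disassembly between two marker labels and counting vector instructions. The label-matching convention, the mnemonic heuristic, and the function name are assumptions made for illustration; they do not reproduce the article's actual prototype.

    def vector_report(disassembly: str, begin_label: str, end_label: str) -> dict:
        # Counts vector vs. total instructions between two marker labels in
        # objdump-style text ("addr:<TAB>bytes<TAB>mnemonic operands").
        # Heuristic: AVX/AVX-512 mnemonics start with "v"; a few packed SSE ops are listed.
        sse_packed = {"movaps", "movups", "addps", "mulps", "addpd", "mulpd", "movapd", "movupd"}
        inside = False
        total = vector = 0
        for line in disassembly.splitlines():
            stripped = line.strip()
            if stripped.startswith(begin_label) and stripped.endswith(":"):
                inside = True
                continue
            if stripped.startswith(end_label) and stripped.endswith(":"):
                break
            parts = line.split("\t")
            if inside and len(parts) >= 3 and parts[2].strip():
                mnemonic = parts[2].split()[0]
                total += 1
                if mnemonic.startswith("v") or mnemonic in sse_packed:
                    vector += 1
        return {"instructions": total, "vector_instructions": vector,
                "looks_vectorized": vector > 0}

A report of this kind can then be mapped back to the source lines that emitted the markers, which is the line-level accuracy mentioned above.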


UDC 004.056:004.7

The article proposes an approach to continuous access evaluation in Zero Trust Access Management based on security event signals and dynamic management of active user sessions. Unlike traditional authorization models, which make decisions once at login, access is treated as a process that is continuously adjusted throughout the session lifecycle, taking into account changes in context, behavior, and risk level. The proposed architecture integrates trust evaluation, decision-making, and access policy enforcement mechanisms with sources of event telemetry — specifically SIEM and UEBA systems — into a unified, controlled loop. A key feature of this approach is event-driven session management, where asynchronous security events trigger graduated system responses — preserving access, enforcing additional verification, or blocking — without necessarily forcing session termination. This enables the alignment of security requirements with the continuity of business processes. Special attention is given to the temporal correctness of decision-making, ensuring that critical operations are not executed with excessive privileges after the detection of risky events. To quantitatively assess the effectiveness of the proposed loop, some metrics have been introduced for response time to security events and the proportion of requests processed based on outdated trust assessments. Experimental evaluation has been conducted in a simulated corporate environment supporting multi-factor authentication, event-driven trust updates, and centralized event analysis. The obtained results demonstrate a significant reduction in response time, a decrease in the number of inconsistent access decisions, and improved correctness in executing critical operations compared to a baseline model with periodic re-evaluation. This confirms the practical applicability of continuous access evaluation as an engineering mechanism in modern Zero Trust architectures. Tabl.: 3. Figs.: 4. Refs.: 19 titles.
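A minimal sketch of the event-driven part of such a loop is given below, assuming a per-session trust score and the three graduated responses named above. The class names, penalty values, and staleness threshold are hypothetical and are not taken from the article.

    import time
    from dataclasses import dataclass, field

    # Graduated responses: preserve access, require additional verification,
    # or block, without necessarily terminating the session.
    ACTIONS = {"low": "preserve", "medium": "step_up", "high": "block"}

    @dataclass
    class Session:
        user: str
        trust: float = 1.0                       # trust score in [0, 1]
        last_evaluated: float = field(default_factory=time.time)
        state: str = "active"                    # active | step_up_required | blocked

    def on_security_event(session: Session, severity: str, trust_penalty: float) -> str:
        # Asynchronous SIEM/UEBA-style telemetry adjusts an already active session.
        session.trust = max(0.0, session.trust - trust_penalty)
        session.last_evaluated = time.time()
        action = ACTIONS.get(severity, "preserve")
        if action == "step_up":
            session.state = "step_up_required"
        elif action == "block":
            session.state = "blocked"
        return action

    def authorize(session: Session, critical: bool, max_staleness_s: float = 30.0) -> bool:
        # Critical operations are denied if the trust assessment is stale
        # or the session is not in a fully active state.
        stale = time.time() - session.last_evaluated > max_staleness_s
        if critical and (stale or session.state != "active"):
            return False
        return session.state != "blocked"

The staleness check corresponds to the metric of requests processed on outdated trust assessments: a critical request is never served against an evaluation older than the configured bound.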


UDC 004.67

The paper proposes an approach to reducing the entropy of management information at a gas production enterprise under conditions of increasing complexity of production processes, information uncertainty, and limited resources. This issue is particularly relevant for modern gas production enterprises in Ukraine, which are exposed to hostile attacks. At the same time, such enterprises belong to critical infrastructure facilities and are potentially hazardous objects, which should be taken into account when developing management models sensitive to information entropy. It is proposed to consider management information as a dynamic system whose entropy changes over time due to data incompleteness, inconsistencies between information sources, and delays in information acquisition. A formalized approach to assessing the entropy of management information support is proposed, based on the analysis of database structure, the timeliness of information records, and the correspondence of information to the current state of the production process. The feasibility of integrating entropy-based criteria into the management procedures of a gas production enterprise is substantiated. It is shown that minimizing information entropy makes it possible to reduce managerial uncertainty, improve the justification of decisions, and ensure a timely transition from abnormal operating modes to normal ones without the development of crisis or emergency situations. Particular attention is paid to the temporal characteristics of information and their impact on the entropy level. The proposed approach can be applied in the development of decision support systems, information and analytical systems, and situational control centers for gas production enterprises. The obtained results form a methodological basis for improving the efficiency of production process management and reducing risks in the gas production sector. Tabl.: 1. Figs.: 3. Refs.: 12 titles.
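As a simple illustration of an entropy-based criterion (not the paper's exact formalization), the snippet below computes the Shannon entropy of the distribution of record states in a database snapshot; the state categories and counts are assumed for the example only.

    import math
    from collections import Counter

    def information_entropy(record_states) -> float:
        # Shannon entropy (in bits) of the distribution of record states,
        # e.g. "timely", "delayed", "inconsistent", "missing".
        counts = Counter(record_states)
        n = len(record_states)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # The further the records drift from the timely, consistent state,
    # the higher the entropy of the management information support.
    snapshot = ["timely"] * 70 + ["delayed"] * 20 + ["inconsistent"] * 7 + ["missing"] * 3
    print(round(information_entropy(snapshot), 3))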


                                    
                              SIMULATION AND MANAGEMENT

UDC 504.3.054

In this paper, a variational data assimilation algorithm within the radionuclide transport model has been developed, which enables the identification of a distributed source of radioactivity following atmospheric fallout on the sea surface. The algorithm involves the solution of the adjoint equations of a three-dimensional model of radionuclide dispersion in the marine environment, THREETOX, to evaluate the elements of the source-receptor matrix (SRM). The number of adjoint THREETOX code integrations required for the full SRM calculation is equal to the number of measurements. The quadratic cost function to be minimized in the algorithm characterizes the deviation of simulated and observed concentrations, together with the difference between the first guess deposition field and the estimated solution. The covariance matrix of model errors, entering the cost function, has been parameterized using analytical expressions. The cost function has been minimized analytically, although this does not guarantee the positivity of the solution. However, the solution remained positive in the numerical experiments of this work. The method has been tested against artificial measurements of caesium-137 concentrations, calculated for the problem of radionuclide dispersion in the Black Sea following the Chernobyl accident. The «true» deposition field has been considered equal to the results of interpolation of measurements in the upper water layer, collected in June 1986, onto the model grid and normalized in such a way that total deposition on the surface of the Black Sea equaled 2 PBq. Since the fields of currents in the Black Sea for 1986 were not available, the climatological monthly averaged fields of currents for the period 2005–2015 have been used in this study. The first guess estimate of the deposition density field has been set to a constant value over the surface of the Black Sea. The deposition field estimated by data assimilation closely matched the true field. The results of the presented preliminary testing of the algorithm have shown its potential for use in practical applications. Figs.: 6. Refs.: 23 titles.
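In standard variational (3D-Var-type) notation, a cost function of the kind described here takes the quadratic form below, where q is the deposition field to be estimated, q_b the first guess, M the source-receptor matrix, c_obs the vector of observed concentrations, and B and R the background- and observation-error covariance matrices; the paper's exact notation may differ.

    J(q) = \left( Mq - c_{\mathrm{obs}} \right)^{\mathsf{T}} R^{-1} \left( Mq - c_{\mathrm{obs}} \right)
         + \left( q - q_{b} \right)^{\mathsf{T}} B^{-1} \left( q - q_{b} \right),

    \hat{q} = q_{b} + B M^{\mathsf{T}} \left( M B M^{\mathsf{T}} + R \right)^{-1} \left( c_{\mathrm{obs}} - M q_{b} \right).

The second expression is the analytic minimizer of the first; as noted in the abstract, it does not by itself enforce positivity of the estimated deposition.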


UDC 004.94

The article considers a fundamental task of ensuring reliable and reproducible construction of multilayer model synthesis algorithms (MSA) intended for use in monitoring software agents as part of multi-agent systems (MAS). After successful training of multilayer model ensembles and other complex MSAs that demonstrate high accuracy in a research environment, a challenge arises related to their portable and deterministic deployment. To solve this problem, a DAG-oriented architecture is proposed, where the entire logic of the MSA execution is represented as a single, self-sufficient, and easily interpreted directed acyclic graph (DAG). A set of typed graph nodes has been developed that allows for transparent representation of complex operations such as increasing data homogeneity and multilayer recirculation, breaking them into logical and understandable steps. The process of transforming the trained architecture into a portable DAG containing all dependencies between components, paths to artifacts, and aggregation parameters is investigated. It is demonstrated that the proposed approach, due to clearly defined rules of data exchange between nodes and a fixed execution order guaranteed by topological sorting, ensures full compliance of the execution results with those obtained during training. It is established that such an architecture creates a reliable bridge between the research environment and practical application. It provides portability, transparency (thanks to tracing tools), and verifiability (due to the possibility of evaluating intermediate nodes) of complex methods of MSA construction. In conclusion, the developed approach is important for building reliable software agents capable of stable and predictable functioning in real-world conditions. Tabl.: 1. Figs.: 4. Refs.: 13 titles.
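A toy sketch of deterministic DAG execution with topological ordering is shown below; the node format, the averaging layer, and the use of Python's graphlib are illustrative assumptions rather than the article's implementation.

    from graphlib import TopologicalSorter

    # A node is (callable, list of upstream names); external inputs are plain values.
    def run_dag(nodes: dict, inputs: dict) -> dict:
        # Executes the graph in a valid topological order so that every node
        # sees exactly the upstream results it declares, reproducibly.
        deps = {name: set(spec[1]) for name, spec in nodes.items()}
        results = dict(inputs)
        for name in TopologicalSorter(deps).static_order():
            if name not in nodes:            # an external input, already in results
                continue
            func, upstream = nodes[name]
            results[name] = func(*(results[u] for u in upstream))
        return results

    # Toy two-model layer aggregated by averaging, loosely echoing one MSA layer.
    dag = {
        "model_a": (lambda x: [v * 0.9 for v in x], ["features"]),
        "model_b": (lambda x: [v * 1.1 for v in x], ["features"]),
        "average": (lambda a, b: [(p + q) / 2 for p, q in zip(a, b)], ["model_a", "model_b"]),
    }
    print(run_dag(dag, {"features": [1.0, 2.0, 3.0]}))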


UDC 621.039.58:004.89:004.94

This article presents conceptual approaches to the development of a digital twin of a complex engineered facility — the New Safe Confinement, the Shelter Object (NSC-SO), which plays a pivotal role in the nuclear and radiation safety framework within the Chornobyl Exclusion Zone. Special attention is paid to the input data sources, which originate from an extensive sensor infrastructure that captures temperature, neutron flux, gamma radiation, and other parameters monitored within the NSC-SO. The article emphasizes the problem of incomplete or noisy data, which are typical of operational systems, and proposes algorithmic solutions for their processing and reconstruction. The research investigates classical statistical approaches, including regression analysis, moving averages, STL decomposition, and anomaly detection techniques. These methods offer high interpretability and are effective for early-stage diagnostics but show limitations in adapting to non-linear patterns and high-dimensional data structures typical of real-world industrial systems. The use of neural network models, in particular RNN, LSTM, GRU, and CNN, is proposed, as they enable the identification of temporal and spatial patterns in large data sets. The combination of statistical and neural network approaches makes it possible to build a hybrid model that better adapts to changing operating conditions of the object. Architecturally, the digital twin is envisioned as a modular, microservice-based system composed of dedicated layers for data collection, preprocessing, analytics, and visualization. The use of containerization and a service mesh is proposed to ensure scalability and component isolation. The article outlines a framework for evaluating model performance based on historical data fidelity, resilience to disturbances, and error metrics. Refs.: 4 titles.
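A minimal example of the classical-statistics stage mentioned above (gap filling plus rolling z-score anomaly flags) is sketched below in pandas; the window size, threshold, and function name are assumptions, and the paper's actual pipeline combines further methods such as STL decomposition and neural network models.

    import pandas as pd

    def clean_sensor_series(values: pd.Series, window: int = 24, z_thresh: float = 3.0) -> pd.DataFrame:
        # Classical-statistics stage only: fill gaps by interpolation and flag
        # anomalies with a rolling z-score; parameters are purely illustrative.
        filled = values.interpolate(limit_direction="both")
        rolling_mean = filled.rolling(window, min_periods=1).mean()
        rolling_std = filled.rolling(window, min_periods=1).std().fillna(0.0)
        z = (filled - rolling_mean) / rolling_std.replace(0.0, 1.0)
        return pd.DataFrame({"value": filled, "anomaly": z.abs() > z_thresh})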



  QUALITY, RELIABILITY, AND CERTIFICATION OF COMPUTER TECHNIQUE AND SOFTWARE

UDC 519.254

This paper addresses the issues of automated clustering of product assortments in retail networks. The study is particularly relevant for retail chains with a large number of stores and a broad assortment exceeding 2,000–3,000 items. Under such conditions, comparative analysis of effective assortments requires substantial time and financial resources, as it is typically performed manually by analysts or with a limited level of automation. Optimization of assortment management processes can be achieved by grouping stores into homogeneous clusters, which simplifies managerial decision-making, enables standardization of assortment policies, and improves branding efficiency. Traditionally, store classification is based on sales area size and total assortment volume. However, these criteria do not always reflect the actual similarity of product offerings. This paper proposes an algorithm for automated store clustering based on the analysis of assortment overlap. The method is based on constructing a binary matrix representing product availability across stores and computing pairwise Jaccard distances between store assortments. Using the resulting distance matrix, hierarchical clustering is performed with dendrogram construction, allowing the determination of an optimal number of clusters according to predefined parameters. Additionally, the clustering results are visualized using the t-distributed Stochastic Neighbor Embedding (t-SNE) method, which provides an intuitive interpretation of the data structure in a reduced-dimensional space. The proposed approach enables not only the formation of store groups with a common core assortment but also the identification of product items that distinguish individual stores within the retail network. The obtained results can be applied to optimize assortment matrices, design effective store formats, and reduce assortment management costs in large retail chains. Tabl.: 3. Figs.: 4. Refs.: 8 titles.
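A compact sketch of the core computation (binary availability matrix, pairwise Jaccard distances, hierarchical clustering) follows; the linkage method, cluster count, and toy data are illustrative choices rather than the paper's settings.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_stores(availability: np.ndarray, n_clusters: int) -> np.ndarray:
        # availability: binary matrix (stores x products), 1 = item carried.
        # Returns a cluster label for each store.
        dist = pdist(availability.astype(bool), metric="jaccard")   # pairwise Jaccard distances
        tree = linkage(dist, method="average")                      # agglomerative clustering
        return fcluster(tree, t=n_clusters, criterion="maxclust")

    # Toy example: four stores, five products; the first two stores share a core assortment.
    matrix = np.array([[1, 1, 1, 0, 0],
                       [1, 1, 0, 0, 0],
                       [0, 0, 1, 1, 1],
                       [0, 0, 0, 1, 1]])
    print(cluster_stores(matrix, n_clusters=2))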


UDC 519.718

Cespedes Garcia N.V. Reliability assessment of cluster computing systems. Mathematical machines and systems. 2026. N 1. P. 100–108.

The article is devoted to the study of international approaches to assessing the reliability of cluster computing systems and their combination with modern domestic methodologies. Cluster systems are widely used in fundamental and applied research, numerical modeling of complex physical processes, climate forecasting, analysis of large volumes of data, the aerospace and automotive industries, energy, cloud computing, and other areas. Therefore, assessing the reliability of cluster computing systems is critically important for ensuring their continuous operation and resistance to failures under conditions of intensive load. A comprehensive assessment of the reliability of cluster systems consists of the following stages: analysis of system structure components, calculation of reliability metrics, modeling of failure scenarios, analysis of combined failure probabilities, and assessment of redundancy. It has been determined that, in general, cluster systems have a so-called «k-out-of-n» structure; therefore, reliability metrics are calculated according to the corresponding domestic methodologies. In the general case, failures of cluster computing systems are non-monotonic in nature. Consequently, to calculate the probability of failure-free operation, modern domestic methodologies based on the DN-distribution of failures can be used. Assessment of the reliability of cluster computing systems is a multi-level approach that combines analytical, experimental, and model-based research methods. Such a comprehensive approach makes it possible to identify potential bottlenecks in the system architecture, identify critical components, and assess the risks of failures. This enables timely implementation of effective mechanisms for fault tolerance, redundancy, and automatic recovery. Refs.: 10 titles.
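For reference, under the simplifying assumption of identical and statistically independent nodes, the probability of failure-free operation of a «k-out-of-n» structure has the textbook form below; the DN-distribution-based domestic methodologies mentioned in the abstract refine how the node-level probability itself is obtained and are not reproduced here.

    R_{k/n}(t) = \sum_{i=k}^{n} \binom{n}{i} \, \bigl[ p(t) \bigr]^{i} \, \bigl[ 1 - p(t) \bigr]^{\,n-i},

where p(t) is the probability of failure-free operation of a single node up to time t.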


       
                                 DISCUSSION MESSAGES

UDC 004.9

Malyshev O.V. Capability inventory: a positioned review of acquired experience (using the military sphere as an example). Mathematical machines and systems. 2026. N 1. P. 109–126.

Entities engaged in the development of their own capabilities cannot bypass the need to conduct their inventory (accounting). Large-scale and systematic fulfillment of this need is observed in the military sphere, for example, in the processes of building up the defense forces of NATO member states and, more recently, of Ukraine. Activities related to capability inventory mainly take the form of creating lists or catalogs of capabilities, and their relatively short history indicates the presence of a research component. Improving the inventory process requires reviews of the acquired experience. Such a review should be positioned, that is, rely on an explicit basis for forming an attitude towards individual aspects of the phenomenon being reviewed. The formation of the initial position for the review is based on a systematic consideration of the subject area and its components: entities (capability, capability carrier, capability application process, capability level, capability level requirement, capability level assessment, etc.) and relations between entities (between individual capabilities or between capability carriers, as well as between capabilities and their carriers, etc.). Applying the formed position to the review made it possible to identify some deviations in the studied material, which can be classified as cases of using dubious or redundant concepts, mixing concepts, and also substitution of concepts. The focus on presenting inventory results in the form of text documents also appears problematic. The review results are intended for use in the processes of developing information technologies to support not only the inventory of capabilities but also the management of capabilities in general. Tabl.: 3. Refs.: 19 titles.


                                           INFORMATION 

Sapaty P.S. Managing multidimensional international world with spatial grasp model. Mathematical machines and systems. 2026. N 1. P. 127–132.

The term «multidimensional international world» refers to understanding the world through multiple dimensions beyond traditional economic or political measures, fostering cross-cultural collaboration, and creating systems that balance global integration with local needs. This also includes the management of global business operations across diverse cultures in a multipolar international landscape. The aim of this book is to review and explain how the distributed international world is organized and to investigate the potential applicability of the high-level Spatial Grasp Model and Technology (SGT), already developed and tested in numerous applications, which can manage complex systems with a holistic spatial approach, effectively covering physical and virtual dimensions, their interrelations, and integration as a whole. The book briefly reviews different multidimensional areas with examples of practical solutions in them and their combinations in the high-level Spatial Grasp Language (SGL), the key element of SGT. This can allow for the creation and distributed management of very large spatial networks expressing different dimensions, which can be self-analyzing, self-optimizing, and self-recovering in complex terrestrial and celestial environments. The book also describes the organization of dynamic multi-networking solutions effectively supporting global evolution, security, prosperity, and integrity.


 


            
