The article is devoted to the problem of building automated management of the protection of the resilient functioning of critical infrastructure (CI), which can be carried out in crisis (situation) centers (CSCs). A study was conducted, a scheme was built, and the main directions of such protection were defined and considered. The types of threats to CI objects (CIOs) were determined. CIOs were characterized as complex interdependent structures, and the main types of interdependencies between them were analyzed and identified. The Leontief model of economic interdependence between CIs was presented. The main phases of the risk management process were analyzed. It was proposed to use a three-level situational awareness (SA) model in the CSC together with innovative technologies. Current SA and risk assessment make it possible to create various scenarios of how an emergency may unfold, on the basis of which the modeling of the protection of the resilient functioning of CI is built. Two approaches to creating scenarios were considered: a retrospective approach and an approach based on a plausibility chain. A step-by-step procedure for building such a plausibility chain was presented. Based on the analysis of modeling approaches, it was determined that each of them is intended for particular types or conditions of CI functioning. It is therefore expedient to combine approaches, including bottom-up and top-down modeling as well as a systemic approach. The British concept and a general model of crisis management of CI resilience were also considered. The role and methodology of four perspectives on the organizational resilience of CIOs were determined and analyzed. Based on the research results, the main requirements for automated management of the protection of the resilient functioning of CIOs, applicable in CSCs, were determined. Figs.: 4. Refs.: 18 titles.
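For reference, the classic Leontief input-output relation on which such economic interdependence models are usually based (the article's specific adaptation to CI interdependencies may differ) can be written as

\[ x = A x + d \quad\Longrightarrow\quad x = (I - A)^{-1} d, \]

where x is the vector of outputs of the interdependent sectors (infrastructures), A is the matrix of interdependence coefficients a_{ij} giving the amount of output of infrastructure i required per unit of output of infrastructure j, and d is the vector of external (final) demand.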
The rapid growth of connected objects in IoT systems has led to a growing demand for intelligent systems for automated and simplified management of everyday life. These systems aim to simplify the management of functions such as lighting, heating, security, and appliance operation, contributing to comfort and energy efficiency. From this perspective, situational context, which encompasses information about the physical environment, user preferences, current activities, and other relevant variables, can significantly enhance the ability of IoT systems to forecast and respond to the needs of residents. However, integrating situational context into IoT systems presents significant technical and conceptual challenges. One of the key challenges is the development of accurate and dynamic situational models that can take diverse and changing parameters into account. In addition, the need to process large amounts of data from multiple IoT devices in real time requires a reliable and scalable architecture. In the field of the Internet of Things, where new technologies transform our living environment into dynamic and adaptive ecosystems, the capability for situational modeling and inference becomes an important foundation for ensuring intelligent and personalized interaction between users and their environment. The agent-oriented approach is a natural way to build various natural and artificial systems that differ in their complexity, dynamism, situatedness, and autonomy. In particular, a strong conceptual connection exists between agents and intelligent IoT components, as well as between multi-agent systems and the IoT. Thus, taking into account the entire set of requirements and problems associated with the development of Internet of Things systems, the agent-oriented approach can be used to model, program, and simulate Internet of Things applications and systems, and to systematically promote and accelerate their development. The aspects of situational modeling and inference considered in the article form a technological basis for intelligent support of the processes of functioning of Internet of Things systems. Refs.: 16 titles.
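As a purely illustrative sketch, not taken from the article and with all names hypothetical, a minimal situation-aware agent in Python might map a situational context to an action with simple rules:

    from dataclasses import dataclass

    @dataclass
    class Context:
        """A toy situational context: occupant presence and ambient light level."""
        occupants_home: bool
        lux: float  # ambient light level

    class LightingAgent:
        """A minimal rule-based agent that infers a lighting action from the context."""
        def decide(self, ctx: Context) -> str:
            if not ctx.occupants_home:
                return "lights_off"   # nobody home: save energy
            if ctx.lux < 100:
                return "lights_on"    # occupants present and it is dark
            return "no_action"

    agent = LightingAgent()
    print(agent.decide(Context(occupants_home=True, lux=40.0)))  # -> lights_on

A real IoT agent would, of course, derive the context from sensor streams and use richer inference than fixed rules; the sketch only illustrates the agent/context separation discussed above.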
Penetration testing is one of the primary methods for assessing the cybersecurity level of information systems and a means of improving it. The security evaluation of a system or network is conducted by partially simulating the actions of external attackers attempting to infiltrate it, as well as internal malicious actors. This process is based on an active analysis of the system to identify any potential vulnerabilities that may arise due to improper system configuration, known and unknown hardware and software defects, or delays in procedural or technical countermeasures. Such analysis is conducted from the perspective of a potential attacker and must include the active exploitation of system vulnerabilities, at least during the portion of testing performed by an expert. Typically, a penetration test reveals a certain set of vulnerabilities within the tested system. The gathered information is compiled into a standardized report presented to the system owner. A crucial part of this report is the analysis, which combines this information with a detailed assessment of the potential impact on the organization and outlines the scope of technical and procedural countermeasures to mitigate risks. The primary advantage of penetration testing is the early identification of risks, since it helps to detect weaknesses before they can be exploited by malicious actors. Additionally, regular testing enables organizations to enhance their reputation by demonstrating responsibility in cybersecurity and cyber defense. Overall, penetration testing is becoming increasingly mandatory across various regions and industries, as it helps organizations comply with standards such as ISO 27001, SOC 2, PCI DSS, and others. Today, the development of penetration testing aligns with trends toward further automation, the use of cloud-based tools, artificial intelligence in vulnerability analysis, and the growing role of tests incorporating social engineering techniques. Penetration testing will continue to expand and develop both methodologically and in terms of tooling. Refs.: 11 titles.
When implementing modern technologies of agro-industrial production, there is a need to work with large amounts of data for making operational decisions. Constant monitoring of the agrobiological state of agricultural lands is a key component of modern agricultural production. It is carried out using various ground-based, airborne, and satellite information and technical means applied at different stages of crop development. As a result of such activities, significant amounts of information are accumulated that require prompt processing, analysis, visualization, and presentation in a format convenient for making management decisions. In this regard, there is a need to systematize the data obtained from various sources of agrobiological state monitoring, which makes it possible to manage technological processes effectively and optimize costs. This approach contributes to the prompt determination of application rates for technological materials (fertilizers, seeds, plant protection products), which, in turn, reduces the costs of growing agricultural crops by optimizing their use. For prompt decision-making, it is advisable to create software that displays zones of agrobiological heterogeneity on a map with smooth transitions between different values. This helps to accurately delineate areas with a similar biological state for more rational use of technological materials. The developed software product makes it possible to implement differentiated application of resources based on data from information and technical monitoring systems. Preliminary calculations show that such information allows saving up to 10–25% of technological materials and also helps to increase yields by 10–20 quintals per hectare. Refs.: 12 titles.
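As an illustration of one common way to obtain smooth transitions between measured values on such a map (the article's actual method is not specified; this is a hedged sketch with hypothetical names), inverse-distance weighting can interpolate point measurements onto map cells:

    import numpy as np

    def idw_interpolate(xy_samples, values, xy_grid, power=2.0, eps=1e-12):
        """Inverse-distance-weighted interpolation of point measurements onto a grid.

        xy_samples : (n, 2) coordinates of field measurements
        values     : (n,)  measured agrobiological indicator (e.g. NDVI)
        xy_grid    : (m, 2) coordinates of map cells to fill
        """
        d = np.linalg.norm(xy_grid[:, None, :] - xy_samples[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)         # closer samples get larger weights
        return (w @ values) / w.sum(axis=1)  # weighted average per grid cell

    # Example: three sample points interpolated onto two map cells
    samples = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    ndvi = np.array([0.42, 0.55, 0.61])
    grid = np.array([[50.0, 50.0], [10.0, 10.0]])
    print(idw_interpolate(samples, ndvi, grid))

Interpolated values of this kind can then be classified into heterogeneity zones and converted into differentiated application rates for the technological materials mentioned above.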
The achieved level of scientific and technical development has led to significant changes in the professional structure of society. Decision-making that involves collecting and processing information under time constraints and with incomplete knowledge of the operating constraints has become critically important. The functioning of large industrial complexes and other complex systems often depends on the capabilities of an individual process operator, on their interaction with other people and mechanisms, and on their psychophysiological state. Increasingly, human errors lead to accidents with severe consequences. The reliability of human work as part of an automated control system (ACS) is determined by the probability of successful work or task completion during system operation within a given time. To ensure the safety and readiness of the system as a whole, the technical system and human actions must be coordinated. As studies show, taking human reliability into account in the interaction with the system creates significant potential for automation. To realize it, information and models are needed that provide data on both human abilities and possible errors. The human factor characterizes the interaction of a person with a controlled object with a focus on the quality and reliability of the technological process and its results. The article considers the reliability of human work in the ACS and methods for diagnosing human reliability as a link in the human-machine system (HMS), analyzes the factors that determine the reliability of the human operator's work, and considers methods and technical tools for ongoing monitoring of the operator's psychophysiological state. The active role of a person at all stages of the HMS life cycle is shown, starting with the development of the concept, followed by design, implementation, operation, and further improvement of the system. Fig.: 1. Refs.: 8 titles.
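As an illustrative formulation only (not taken from the article), the probability of error-free operator work over a given interval is often written, for a roughly constant error rate \lambda, as

\[ P(t) = \exp\!\left(-\int_0^{t}\lambda(\tau)\,d\tau\right) \approx e^{-\lambda t}, \]

where \lambda is the operator's error (failure) rate; in a series human-machine model, the overall system reliability is then the product of the human and technical components' reliabilities.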
The article is devoted to a comparative analysis of the effectiveness of traditional econometric models and artificial intelligence models for forecasting the bond yield curve, assessing forecasting accuracy and resilience to changes in market conditions. The study uses quarterly data from the Federal Reserve Economic Database (FRED) for the period 2011–2024 for ten maturities of US government bonds. The study is relevant in the context of growing volatility in financial markets and the need for accurate forecasts for investment decisions and monetary policy-making. Three traditional models (DNS, Svensson, VAR) and three artificial intelligence models (DNN, CNN-LSTM, XGBoost) were compared. The traditional models were parameterized using the least squares method, while the artificial intelligence models were parameterized using a sliding-window approach and cross-validation. To ensure the statistical significance of the results, the artificial intelligence models were trained and tested in 20 independent experiments. The models were evaluated by RMSE, MAE, and the coefficient of determination R². The artificial intelligence models demonstrated significantly higher forecasting accuracy: DNN showed the best results, with an RMSE of 0.960 and an MAE of 0.832, roughly three times better than the traditional models. At the same time, the traditional models showed greater resilience to changes in market conditions, with resilience coefficients close to 1, while the AI models showed higher sensitivity to volatility. Statistical analysis confirmed the significance of the differences in forecasting accuracy between the models (p < 0.05). The lowest accuracy among all the models was demonstrated by the VAR model, which is explained by its limited ability to capture nonlinear dependencies. The considered models are characterized by different trade-offs between accuracy and stability, which allows choosing the optimal approach depending on the specific practical task. Tabl.: 2. Figs.: 4. Refs.: 7 titles.
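For reference, the DNS model mentioned above builds on the Nelson-Siegel factor form of the yield curve (the article's exact parameterization is not reproduced here):

\[ y_t(\tau) = \beta_{1t} + \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau} + \beta_{3t}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right), \]

where y_t(\tau) is the yield at time t for maturity \tau, the factors \beta_{1t}, \beta_{2t}, \beta_{3t} are interpreted as level, slope, and curvature, and \lambda governs the decay of the factor loadings; the Svensson model adds a second curvature term with its own decay parameter.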
The present article is devoted to the use of an attribute model for assessing the dependability of computer systems. The model was developed at the Institute of Mathematical Machines and Systems Problems of the National Academy of Sciences of Ukraine and implemented in new buildings in Kyiv within dispatching systems for elevators and other engineering equipment in the municipal sector. The distinguishing characteristic of the developed tools is their focus on the interaction between individual elements of the system and the central automated workplace of the dispatcher through existing local and global networks. The created hardware and software facilitate the incorporation of existing buildings into the dispatching system and help reduce dispatching costs during the design of new residential complexes. The paper studies the fundamental requirements and principles underlying the construction of a dispatching system. It examines the challenges encountered and proposes strategies for enhancing the reliability of the dispatching system for utility engineering systems. Such systems are distinguished by a continuous cycle of operation and are classified as critical systems. The attribute model of computer system dependability is a pivotal component of dependability assessment, enabling the determination of which specific characteristics (attributes) need to be employed. The trustworthiness of the obtained dependability assessment depends on the completeness and adequacy of the system of characteristics used. The following dependability attributes were evaluated: availability, maintainability, and compatibility, as well as their impact on overall performance. The specifics of the widespread use of single-chip microcomputers in the construction of the system were considered. The positive and negative effects of running software on single-chip microcomputers without an operating system were identified in relation to the mentioned attributes and their metrics. Tabl.: 1. Figs.: 2. Refs.: 3 titles.
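As one widely used metric for the availability attribute (given here as a general illustration rather than the article's specific metric), steady-state availability is expressed through the mean time between failures and the mean time to repair:

\[ A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}, \]

so that, for example, MTBF = 5000 h and MTTR = 2 h give A \approx 0.9996; maintainability enters this estimate through the MTTR term.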
QUALITY, RELIABILITY, AND CERTIFICATION OF COMPUTER EQUIPMENT AND SOFTWARE
An analysis of existing solutions was conducted to support the unification of a newly developed industrial equipment reliability management system. Based on a review of the advantages and disadvantages of current predictive analytics platforms, conclusions regarding the future development prospects of such systems were drawn. The article presents a general concept of a comprehensive reliability management system for industrial equipment. The proposed approach is grounded in the application of predictive analytics, machine learning, probabilistic-physics degradation models, and the integration of IoT components into a digital control infrastructure. The system enables failure prediction, remaining useful life (RUL) estimation, and automated decision-making for maintenance scheduling. The paper details the scientific and technical implementation aspects, including the deployment of mathematical models, principles of anomaly detection, and risk evaluation. For local data storage, a data architecture is proposed that is optimized for stream processing and combines storage for both structured and semi-structured data. The core idea lies in a hybrid methodology that combines neural-network-based modeling with physical studies of material degradation and component failure statistics to accurately assess reliability, forecast remaining useful life, and determine regulated operational lifetimes. A probabilistic-physics approach is proposed, employing advanced failure models with physically interpretable parameters such as the mean degradation rate and the coefficient of variation of the generalized degradation process. To implement this, the development of a unified predictive system for reliability management is proposed. The identification of threshold patterns (parameter values or system states that define the resource limits) is assigned to ML/AI models trained on operational data. The study also highlights the practical relevance of the proposed technology for critical infrastructure. The system supports localized data processing, reducing the need for cloud infrastructure and minimizing deployment costs. Tabl.: 1. Fig.: 1. Refs.: 13 titles.
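As a deliberately simplified sketch of threshold-based RUL estimation (not the article's probabilistic-physics model; it assumes a linear degradation trend, and the variable names are hypothetical):

    import numpy as np

    def estimate_rul(t, x, threshold):
        """Rough remaining-useful-life estimate under a linear degradation assumption.

        t         : time stamps of degradation measurements
        x         : measured degradation indicator (e.g. wear, vibration level)
        threshold : indicator value at which the component is considered failed
        """
        rate, level = np.polyfit(t, x, 1)    # mean degradation rate and intercept
        if rate <= 0:
            return np.inf                    # no measurable degradation trend
        t_fail = (threshold - level) / rate  # time at which the trend crosses the limit
        return max(t_fail - t[-1], 0.0)      # time left from the last observation

    # Example: slowly drifting indicator, failure threshold at 1.0
    t = np.array([0, 10, 20, 30, 40], dtype=float)
    x = np.array([0.10, 0.18, 0.26, 0.35, 0.42])
    print(estimate_rul(t, x, threshold=1.0))  # about 71 time units remaining

A probabilistic-physics model would additionally treat the rate as random, characterized by its mean and coefficient of variation, yielding a failure-time distribution rather than a point estimate.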
The use of an agent-based approach to the implementation of information technology for intelligent monitoring allows for obtaining special forms of multi-agent systems (MAS) to support decision-making in various subject areas. The structure of a MAS for intelligent monitoring is formed by a set of software agents for various purposes. This paper presents the results of research on one of the MAS elements, a monitoring software agent. In this system, it implements typical intelligent tasks of classification, identification, forecasting, and others. In the context of crisis monitoring, to obtain a useful agent model, methods of increasing the diversity of an agent-based model synthesizer (AMS), in particular multilayer modelling, are used. Models with a heterogeneous structure and the same modelled indicator are combined into layers. Recirculation is one of the popular methods of multilayer modelling: the signal from the model output is added to the input data set (IDS) as an additional feature, and the IDS is then fed back to the input of the same AMS. Which AMS will be used is determined adaptively, according to the properties of each subsequent input data set. Before a model is synthesized, each of the existing AMSs is tested, and the best one is selected according to the specified criteria. If a useful model cannot be built, an AMS with a more complex structure is engaged. Improving the processes of adaptive construction of multilayer model ensembles using the recirculation method expands the capabilities of agent-based model synthesizers. This paper presents the results of applying machine learning methods to build algorithms for the synthesis of multilayer models using the recirculation method, as well as strategies for the adaptive selection of the best AMS. An increase in the accuracy of the modelling results has been experimentally confirmed. The obtained results allow rules for the behaviour of a software agent to be built and requirements for the MAS software to be formulated. Tabl.: 6. Figs.: 6. Refs.: 33 titles.
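As a minimal illustration of the recirculation idea (a hedged sketch in which a ridge regressor stands in for the model synthesizer; the names and the number of layers are hypothetical):

    import numpy as np
    from sklearn.linear_model import Ridge

    def recirculate(X, y, make_model, n_layers=3):
        """Multilayer modelling by recirculation: each layer's output is appended to
        the input data set as an extra feature before the synthesizer is applied again."""
        models, X_cur = [], X.copy()
        for _ in range(n_layers):
            model = make_model().fit(X_cur, y)
            models.append(model)
            pred = model.predict(X_cur).reshape(-1, 1)
            X_cur = np.hstack([X_cur, pred])  # recirculation: output becomes a new feature
        return models

    def predict_recirculated(models, X):
        """Apply the trained chain of layers, recirculating predictions the same way."""
        X_cur = X.copy()
        for model in models:
            pred = model.predict(X_cur)
            X_cur = np.hstack([X_cur, pred.reshape(-1, 1)])
        return pred  # output of the last layer

    # Toy data: linear target with noise
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
    models = recirculate(X, y, lambda: Ridge(alpha=1.0))
    print(predict_recirculated(models, X[:3]))

In the adaptive scheme described above, the fixed ridge regressor would be replaced by whichever AMS passes the selection criteria for the current input data set.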
In recent years, in the public and private sectors of the economy, as well as in the administrative and defense spheres, measures to improve the performance of organizations have increasingly been accompanied by greater attention to their capabilities. The capabilities of an organization can change due to changes in its various components: personnel, use of knowledge and information, infrastructure, etc., which have long been recognized and treated as objects of management, and such management is supported by the best practices and standards of various levels, including international ones. So is it not time to add capabilities to the list of typical management objects and to consider the specifics of capability management from a scientific perspective? Such a consideration should be made at the highest possible level, and its result should be what can be called a conceptual model of capability management as a specific type of management activity. Such a conceptual model should include at least the following components: 1) a conceptual and terminological apparatus; 2) a high-level (meta-) model of the activity. This paper is devoted to the formation of the first of them. A fundamental point is that the work does not start from scratch: the dictionary of terms contained in the fundamental quality management standard DSTU ISO 9000 is used as a starting point. Although the scope of this standard does not coincide with the subject of the current study, it is quite close to it and well developed. In particular, the dictionary contains a definition of the term «capability». However, this does not mean that the set of terms contained in the dictionary fully covers the needs of this research, nor that the definitions of the terms borrowed from the dictionary should be accepted as absolute truth. Accordingly, one of the results of the work is a reasoned critique of some decisions made by the developers of the original standard. Refs.: 29 titles.