

Parallel and distributed programming, grid technologies, programming on GPUs

УДК 004.454

DOI: 10.25559/SITITO.14.201801.052-060

REFACTORING OF THE OPENVZ DRIVER FOR THE OPENNEBULA CLOUD PLATFORM WITHIN THE JINR CLOUD INFRASTRUCTURE PROJECT

Vladimir V. Korenkov, Andrey O. Kondratyev

Joint Institute for Nuclear Research, Dubna, Russia

Abstract

This article explores the possibilities of using OpenVZ virtualization technology in the cloud infrastructure of the Joint Institute for Nuclear Research (JINR), built on the OpenNebula platform.

OpenNebula is an open and extensible cloud platform that allows easy automation of data center operations. The main advantage of the OpenNebula platform is the simplicity of deploying and configuring virtual machines without the help of specialists.

Other key advantages of this cloud platform include the ability to dynamically resize the physical infrastructure by adding and removing nodes in real time and to split the cluster into virtual partitions, which makes it possible to allocate only the amount of resources needed for a particular service. OpenNebula provides a centralized interface for managing all elements of the infrastructure, both virtual and physical, and also achieves a high degree of utilization of the available resources.

Out of the box, OpenNebula does not support the OpenVZ virtualization technology; however, its modular architecture allows third-party drivers to be used, in particular for OpenVZ. To deploy this virtualization technology within OpenNebula, a driver was developed by JINR and the Bogolyubov Institute for Theoretical Physics.

About the authors:

Vladimir V. Korenkov, Doctor of Technical Sciences, Professor, Director of the Laboratory of Information Technologies, Joint Institute for Nuclear Research (6 Joliot-Curie St., Dubna 141980, Moscow region, Russia); ORCID: http://orcid.org/0000-0002-2342-7862, korenkov@cv.jinr.ru

Andrey O. Kondratyev, software engineer, Laboratory of Information Technologies, Joint Institute for Nuclear Research (6 Joliot-Curie St., Dubna 141980, Moscow region, Russia); ORCID: http://orcid.org/0000-0001-6203-9160, kondratyev@jinr.ru

© Korenkov V.V., Kondratyev A.O., 2018


The main advantage of OpenVZ virtualization technology is the ability to run multiple isolated copies of the operating system on a single physical server.

The authors of this article were tasked with refactoring the code of the existing OpenVZ driver for the OpenNebula cloud platform. The work was done in the Ruby programming language. The results obtained are currently used in the JINR cloud infrastructure.

Refactoring; OpenVZ; OpenNebula; cloud technologies; cloud infrastructure.

РЕФАКТОРИНГ ДРАЙВЕРА OPENVZ ДЛЯ ОБЛАЧНОЙ ПЛАТФОРМЫ OPENNEBULA В РАМКАХ ОБЛАЧНОЙ ИНФРАСТРУКТУРЫ ОИЯИ

В.В. Кореньков, А.О. Кондратьев

Объединенный институт ядерных исследований, г. Дубна, Россия

Аннотация

В данной статье рассматриваются возможности применения технологии виртуализации OpenVZ в облачной инфраструктуре Объединенного института ядерных исследований (ОИЯИ), построенной на платформе OpenNebula.

OpenNebula - облачная платформа, представляющая собой открытый и расширяемый инструмент для автоматизации работы центров обработки данных. Эта платформа предоставляет возможность самостоятельного управления вычислительными ресурсами с использованием облачной инфраструктуры. Простота развертывания и настройки виртуальных машин без помощи специалистов является главным достоинством данной платформы.

К основным достоинствам платформы OpenNebula можно отнести возможность динамически изменять размер физической инфраструктуры через добавление и удаление узлов в реальном времени и разбиение кластера на виртуальные разделы, что позволяет выделить только необходимый объем ресурсов для работы определенного сервиса. OpenNebula предоставляет централизованный интерфейс для управления всеми элементами инфраструктуры как виртуальными, так и физическими, а также имеет высокую степень задействования доступных ресурсов.

Изначально, облачная платформа не имеет поддержки технологии виртуализации OpenVZ, однако её модульная архитектура позволяет использовать драйверы сторонних разработчиков, в частности для OpenVZ. Для развертывания технологии виртуализации в рамках OpenNebula использовался драйвер, разработанный ОИЯИ и Институтом теоретической физики имени Н. Н. Боголюбова.

Основным преимуществом технологии виртуализации OpenVZ является возможность запуска множества изолированных копий операционной системы на одном физическом сервере.

Перед авторами данной статьи была поставлена следующая задача: выполнить рефакторинг кода существующего драйвера OpenVZ для облачной платформы OpenNebula. Работа была выполнена в программной среде Ruby. Полученные результаты используются в облачной инфраструктуре ОИЯИ.


Рефакторинг; OpenVZ; OpenNebula; облачные технологии; облачная инфраструктура.

Introduction

At present, the analysis of data obtained in high-energy physics experiments requires solving a wide range of problems in the field of modern information technology (IT). The most common solutions in this area are grid technologies [1, 2, 3, 4, 5, 6], cloud computing technologies (cloud technologies) and supercomputers. Each of these solutions has its own benefits and areas of application.

For example, the main application area of supercomputers is the modeling of physical processes and phenomena, such as problems of plasma physics, stellar physics, and so on. These tasks involve such areas of information technology as 3D computer graphics (required for the analysis of calculation results), program optimization, network technologies, and the creation of parallel algorithms and distributed systems.

Grid technologies constitute a geographically distributed infrastructure that provides remote access to various resources. The grid concept assumes a collective shared mode of access to resources within virtual organizations, consisting of collaborations and individual specialists sharing common resources [7, 8, 9, 10]. Modern grid-based infrastructures provide integration of resources of various organizations into a single computing environment, which allows solving tasks for processing extremely large amounts of data.

Cloud technologies are a model for providing network access to information and computing resources such as services, applications, storage devices, servers and data networks. They can reduce the cost of the IT infrastructure, as well as meet changing resource needs.

The most common open-source solutions in the field of cloud technologies are OpenStack [11, 12, 13] and OpenNebula [14, 15, 16, 17, 18, 19].

OpenStack consists of several modules interacting with each other through a service catalog. To date, the platform includes a hypervisor management tool, object storage, a web management interface, a virtual machine image store, a network infrastructure management tool, a user and service catalog, and so on. Unfortunately, the basic capabilities of the OpenStack cloud infrastructure are quite limited: there is no smooth adjustment of resource consumption and no migration of a container to another host system. Out of the box, OpenStack also provides no container backup and does not support the OpenVZ virtualization technology [20]. That is why the Joint Institute for Nuclear Research decided to manage its cloud infrastructure with OpenNebula, an open, extensible cloud platform for automating data center operations [21, 22, 23, 24, 25, 26, 27, 28]. It allows the cloud infrastructure to be used for independent management of processing, storage, networks and other computing resources, and supports hybrid operation by combining the resources of the local data center with external cloud providers. OpenNebula includes tools for deploying virtual environments, monitoring, access control, security, and storage management.

This platform allows users to create and configure virtual machines on their own, which is its main advantage. By default, OpenNebula allows deploying only KVM [29, 30] virtual machines.

KVM is a hardware virtualization technology, which makes it possible to install any guest operating system, including Windows. However, it is more demanding on resources and requires a number of additional settings, for example configuring the network infrastructure for each virtual machine.

The modular architecture of OpenNebula makes it possible to use a driver for deploying OpenVZ containers, which consists of a set of unrelated scripts, each invoked when a particular container management function is used.

Unlike KVM, OpenVZ is an implementation of the virtualization technology at the operating system level, which makes it possible to run many isolated copies of the operating system (so-called containers) on one physical server.

Virtualization at the system level in OpenVZ offers many advantages:

• ease of administration,

• high density of virtual containers on the host system,


• better performance in comparison with full virtualization technologies.

Improved performance is achieved because all containers run on a single kernel, the kernel of the host system.
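As a rough, hedged illustration of what the driver ultimately automates (not taken from the driver code), the basic OpenVZ container lifecycle can be driven from Ruby with the standard vzctl utility; the container ID 101 and the OS template name below are arbitrary example values:

#!/usr/bin/env ruby
# Illustration only: the basic OpenVZ container lifecycle that the driver
# automates. The container ID (101) and OS template name are arbitrary
# example values, not taken from the JINR configuration.

CTID     = 101
TEMPLATE = 'centos-7-x86_64'

def run(cmd)
  puts "+ #{cmd}"
  system(cmd) or raise "command failed: #{cmd}"
end

run "vzctl create #{CTID} --ostemplate #{TEMPLATE}"  # create the container
run "vzctl start #{CTID}"                            # start it
run "vzctl exec #{CTID} uname -r"                    # prints the shared host kernel version
run "vzctl stop #{CTID}"                             # stop it
run "vzctl destroy #{CTID}"                          # remove it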

Purpose of the study

The purpose of this work is to bring the existing driver for managing OpenVZ containers in line with the standards of object-oriented programming, as well as to optimize it for subsequent updates of OpenNebula and OpenVZ.

Main part

As stated above, the OpenNebula cloud platform does not support the OpenVZ virtualization technology. However, its modular architecture allows the use of third-party drivers.

Therefore, together with the Bogolyubov Institute for Theoretical Physics, JINR developed a driver for using the OpenVZ virtualization technology in the OpenNebula cloud platform. The core of the OpenNebula cloud platform is implemented in the high-level C++ programming language; however, to maintain consistency, third-party developers are encouraged to use the Ruby language [31] to develop drivers.

Since the management interfaces are implemented in Ruby, the driver was also developed in this programming language.

Ruby is a well-balanced and flexible programming language that allows users to freely change its parts: the main parts of Ruby can be removed or redefined, and existing ones can be modified.

When defining any method in this language, a closure (block) can be attached to it; the language also offers other useful features, such as constructs for exception handling, the ability to dynamically load third-party libraries, and multithreading that is independent of the operating system.
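For readers unfamiliar with Ruby, a minimal self-contained sketch of the features just mentioned (a method taking a block, exception handling, and a thread) is given below; the method name and values are purely illustrative:

# Illustrative only: a method that accepts a block (closure), an
# exception-handling construct, and an OS-independent thread.
def with_retries(attempts)
  attempts.times do |i|
    begin
      return yield(i)            # invoke the block attached to the method call
    rescue StandardError => e    # exception-handling construct
      raise e if i == attempts - 1
    end
  end
end

t = Thread.new { with_retries(3) { |i| "attempt #{i} succeeded" } }
puts t.value                     # waits for the thread and prints its result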

The developed driver is a set of unrelated scripts for executing container management commands in OpenVZ, such as:

• deploy - creates and deploys a container;

• cancel - removes the container;

• migrate - transfers an existing container to another host system;

• reboot - reboots a deployed and currently running container;

• restore - restores and starts a previously stopped container;

• save - saves the container (used to stop the operation of the container correctly);

• shutdown - shuts down the container;

• snapshotcreate - creates a backup copy of an existing container;

• snapshotdelete - deletes a backup copy of the container;

• snapshotrevert - restores the container from a backup copy.

These scripts rely on the file "comm_common", which stores the auxiliary methods and functions used to implement container management.

To perform operations on a container, the OpenNebula front-end calls the corresponding driver script, which in turn interacts with OpenVZ.

The scheme of the developed driver is presented in Fig. 1.
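The original scripts are not reproduced here; as a hedged sketch of the procedural style described above, a script such as deploy could look roughly as follows, where the argument handling and the parse_ctid/parse_template helpers from "comm_common" are assumptions, not the actual driver code:

#!/usr/bin/env ruby
# Hypothetical sketch of a pre-refactoring, procedural driver script ("deploy").
# Argument handling and helper names are assumptions, not the actual driver code.
require_relative 'comm_common'              # shared auxiliary methods and functions

deployment_file, _host = ARGV               # deployment description passed by OpenNebula
ctid     = parse_ctid(deployment_file)      # hypothetical helper from comm_common
template = parse_template(deployment_file)  # hypothetical helper from comm_common

system("vzctl create #{ctid} --ostemplate #{template}")
system("vzctl start #{ctid}")
exit($?.exitstatus)                         # report success or failure back to OpenNebula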

With the release of updates to the OpenNebula cloud platform, the driver was also updated and refined to ensure correct operation with the OpenVZ virtualization technology, while the driver architecture remained unchanged. However, due to the need to adapt the driver for the future OpenVZ 7 release, as well as the growing number of developers, it became necessary to understand the internal design of the driver in depth. Therefore, the authors of this article were faced with the following task: to refactor the existing OpenVZ driver for the OpenNebula cloud platform, i.e. to carry out a controlled process of code improvement without adding new functionality.


Fig. 1. Diagram of the developed driver

Fig. 2. Driver scheme after refactoring

For this purpose, a testbed was deployed and configured in the JINR cloud infrastructure. The testbed consists of two virtual machines: the front-end system is installed on one of them and the host on the other.

Correct operation of the testbed requires synchronization of the driver data between the front-end and the host.

In the course of the work, a single OneDriver library was created, which includes all the container management methods necessary for the driver to work correctly, as well as the auxiliary methods and functions involved in operating a container. The OpenNebula rules for third-party driver developers require that each container management function be called from a script of the same name; for example, the command to create and deploy a container must be called from the file deploy.


Therefore, the corresponding wrapper scripts were created, in each of which a OneDriver library object is created and the appropriate container management method is then called.

Fig. 2 shows the driver scheme after the refactoring.
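The refactored structure can be sketched as follows; the class interface and method names below are assumptions chosen to match the description above rather than the actual OneDriver code:

# Hypothetical sketch of the refactored design: a single OneDriver class
# with all container management methods, plus thin wrapper scripts.
class OneDriver
  def initialize(ctid)
    @ctid = ctid
  end

  def deploy(template)
    vzctl "create #{@ctid} --ostemplate #{template}"
    vzctl "start #{@ctid}"
  end

  def shutdown
    vzctl "stop #{@ctid}"
  end

  private

  # Single point of interaction with OpenVZ shared by all management methods.
  def vzctl(args)
    system("vzctl #{args}") or raise "vzctl #{args} failed"
  end
end

A wrapper script named deploy would then only create a OneDriver object and call the corresponding method, for example OneDriver.new(ARGV[0]).deploy(ARGV[1]).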

As a result of the work done, the existing driver for managing the OpenVZ virtualization technology within the OpenNebula cloud infrastructure was refactored. After the refactoring, the driver conforms to the standards of object-oriented programming, in contrast to the procedural style used previously. The response time of the control and container management commands was more than halved; the response time was estimated from the data in the OpenNebula log files. The work performed also made it possible to optimize the existing driver for future updates of OpenNebula and OpenVZ.

Conclusion

This article explored the possibilities of using the OpenVZ virtualization technology in the cloud infrastructure of the Joint Institute for Nuclear Research, built on the OpenNebula platform. The code for the existing OpenVZ driver for the OpenNebula cloud platform was refactored. The results obtained are used in the JINR cloud infrastructure.

REFERENCES

[1] Foster I., Kesselman C. (Eds.) The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998. Available at: https://dl.acm.org/citation.cfm?id=289914 (accessed 16.01.2018).

[2] Korenkov V.V., Nechaevskiy A.V., Ososkov G.A., Pryakhina D.I., Trofimov V.V., Uzhinskiy A.V. GRID and cloud services simulation as an important step of their development. Systems and Means of Informatics. 2015; 25(1):4-19. (In Russian) DOI: https://doi.org/10.14357/08696527150101

[3] Dolbilov A., Korenkov V., Mitsyn V., Palichik V., Shmatov S., Strizh T., Tikhonenko E., Trofimov V., Voytishin N. Grid technologies for large-scale projects. Proceedings of 2015 IEEE Conference Grid, Cloud High Performance Computing in Science (ROLCG). Cluj-Napoca, Romania, 2015. p. 1-3. DOI: https://doi.org/10.1109/ROLCG.2015.7367422

[4] Korenkov V.V., Nechaevskiy A.V., Ososkov G.A., Pryahina D.I., Trofimov V.V., Uzhinskiy A.V. Synthesis of the simulation and monitoring processes for the development of big data storage and processing facilities in physical experiments. Computer Research and Modeling. 2015; 7(3):691-698. Available at: http://crm.ics.org.ru/uploads/crmissues/crm_2015_3/15742.pdf (accessed 16.01.2018). (In Russian)

[5] Belov S.D., Dmitrienko P.V., Galaktionov V.V., Gromova N.I., Kadochnikov I.S., Korenkov V.V., Kutovskiy N.A., Mitsyn S.V., Mitsyn V.V., Oleynik D.A., Petrosyan A.S., Shabratova G.S., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Uzhinskiy A.V., Valova L., Zhemchugov A.S., Zhiltsov V.E. JINR Participation in the WLCG Project. LIT Scientific Report 2012 - 2013. Dubna: LIT JINR. p. 21-25. Available at: http://lit.jinr.ru/Reports/SC_report_12-13/p21.pdf (accessed 16.01.2018).

[6] Astakhov N.S., Belov S.D., Dmitrienko P.V., Dolbilov A.G., Gorbunov I.N., Korenkov V.V., Mitsyn V.V., Shmatov S.V., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Zhiltsov V.E. JINR Tier-1 Center. LIT Scientific Report 2012-2013. Dubna: LIT JINR. p. 16-20. Available at: http://lit.jinr.ru/Reports/SC_report_12-13/p16.pdf (accessed 16.01.2018).

[7] Astakhov N.S., Baginyan A.S., Belov S.D., Dolbilov A.G., Golunov A.O., Gorbunov I.N., Gromova N.I., Kadochnikov I.S., Kashunin I.A., Korenkov V.V., Mitsyn V.V., Pelevanyuk I.S., Shmatov S.V., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Voitishin N.N., Zhiltsov V.E. JINR Tier-1 centre for the CMS experiment at LHC. Physics of Particles and Nuclei Letters. 2016; 13(5):714-717. DOI: https://doi.org/10.1134/S1547477116050046

[8] Berezhnaya A., Dolbilov A., Ilyin V., Korenkov V., Lazin Y., Lyalin I., Mitsyn V., Ryabinkin E., Shmatov S., Strizh T., Tikhonenko E., Tkachenko I., Trofimov V., Velikhov V., Zhiltsov V. LHC Grid Computing in Russia: present and future. Journal of Physics: Conference Series. 2014. Vol. 513, Track 6, id. 062041. DOI: https://doi.org/10.1088/1742-6596/513/6/062041

[9] Astakhov N.S., Belov S.D., Gorbunov I.N., Dmitrienko P.V., Dolbilov A.G., Zhiltsov V.E., Korenkov V.V., Mitsyn V.V., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Shmatov S.V. The Tier-1-level computing system of data processing for the CMS experiment at the large hadron collider. Journal of Information Technologies and Computing Systems. 2013; 4:27-36. Available at: http://www.jitcs.ru/images/documents/2013-04/27_36.pdf (accessed 16.01.2018). (In Russian)

[10] Filozova I.A., Bashashin M.V., Korenkov V.V., Kuniaev S.V., Musulmanbekov G., Semenov R.N., Shestakova G.V., Strizh T.A., Ustenko P.V., Zaikina T.N. Concept of JINR Corporate Information System. Physics of Particles and Nuclei Letters. 2016; 5(13):625-628. DOI: https://doi.org/10.1134/S15474

[11] OpenStack. Available at: https://openstack.org (accessed 16.01.2018).

[12] Markelov A. Cloud operating system OpenStack. Part 1. Introduction. System Administrator. 2015; 4(149):16-19. (In Russian)

[13] Tsvetkov V.Y., Deshko I.P. Cloud service. Educational resources and technologies. 2016; 3(15):88-95. (In Russian) DOI: https://doi.org/2312-5500-2016-3-88-95

[14] Software - OpenNebula. Available at: https://opennebula.org (accessed 16.01.2018).

[15] Aiftimiei D.C., Fattibene E., Gargana R., Panella M., Salomoni D. Abstracting application deployment on Cloud infrastructures. IOP Conf. Series: Journal of Physics: Conference Series. 2017. Vol. 898, Track 6: Infrastructures, id. 082053. DOI: https://doi.org/10.1088/1742-6596/898/8/082053


[16] Taylor R.P., Berghaus F., Brasolin F., Cordiero C.J.D., Desmarais R., Field L., Gable I., Giordano D., Girolamo A., Hover J., LeBlanc M., Love P., Paterson M., Sobie R., Zaytsev A. The Evolution of Cloud Computing in ATLAS. Journal of Physics: Conference Series. 2015. Vol. 664, Clouds and Virtualization, id. 022038. DOI: https://doi.org/10.1088/1742-6596/664/2/022038

[17] Bagnasco S., Bernazo D., Lusso S., Masera M., Vallero S. Managing competing elastic Grid and scientific computing applications using OpenNebula. Journal of Physics: Conference Series. 2015. Vol. 664, Clouds and Virtualization, id. 022004. DOI: https://doi.org/10.1088/1742-6596/664/2/022004

[18] Bagnasco S., Vallero S., Zaccolo V. A FairShare Scheduling Service for OpenNebula. Journal of Physics: Conference series. 2017. Vol. 898, Track 7: Middleware, Monitoring and Accounting, id. 092037. DOI: https://doi.org/10.1088/1742-6596/898/9/092037

[19] Kutovskiy N.A., Nechaevskiy A.V., Ososkov G.A., Pryahina D.I., Trofimov V.V. Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure. Computer Research and Modeling. 2017; 9(6):955-963. (In Russian) DOI: https://doi.org/10.20537/2076-7633-2017-9-6-955-963

[20] OpenVZ. Available at: https://openvz.org (accessed 16.01.2018).

[21] Baranov A.V., Balashov N.A., Kutovskiy N.A., Semenov R.N. JINR cloud infrastructure evolution. Physics of Particles and Nuclei Letters. 2016; 13(5):672 - 675. DOI: https://doi.org/10.1134/S1547477116050071

[22] Baranov A.V., Korenkov V.V., Yurchenko V.V., Balashov N.A., Kutovskiy N.A., Semenov R.N., Svistunov S.Y. Approaches to cloud infrastructures integration. Computer Research and Modeling. 2016; 8(3):583-590. Available at: https://elibrary.ru/item.asp?id=26323286 (accessed 16.01.2018). (In Russian)

[23] Korenkov V.V., Kutovskiy N.A., Balashov N.A., Baranov A.V., Semenov R.N. JINR cloud infrastructure. Procedia Computer Science. 2015; 66:574-583. DOI: https://doi.org/10.1016/j.procs.2015.11.065

[24] Baranov A.V., Balashov N.A., Kutovskiy N.A., Semenov R.N. Cloud Infrastructure at JINR. Computer Research and Modeling. 2015; 7(3):463-467. Available at: http://crm.ics.org.ru/uploads/crmissues/crm_2015_3/157zam.pdf (accessed 16.01.2018).

[25] Balashov N.A., Baranov A.V., Kutovsky N.A., Semenov R.N. Use of cloud technologies in LIT JINR. Proceedings of the All-Russian Conference with International Participation «Information and Telecommunication Technologies and Mathematical Modeling of Hightech Systems 2014», April 22 - 25, 2014, Moscow: RUDN, 2014. p. 168 - 170. (In Russian)

[26] Balashov N., Baranov A., Kutovsky N., Semenov R. Cloud Infrastructure. Proceedings of the XVIII Scientific Conference of Young Scientists and Specialists of JINR (OMUS-2014), 24-28 February, 2014, Dubna: JINR, 2014. p. 190 - 193. (In Russian)

[27] Balashov N., Baranov A., Kutovskiy N., Semenov R. Cloud Technologies Application at JINR. Proceedings of the 8th international conference «Information Systems GRID Technologies», 30 - 31 May, 2014, Sofia, Bulgaria, 2014. p. 32 - 37.

[28] Balashov N.A., Baranov A.V., Kadochnikov I.S., Korenkov V.V., Kutovsky N.A., Nechaevsky A.V., Pelevanyuk I.S. Software complex for intelligent scheduling and adaptive self-organization of virtual computing resources based in LIT JINR Cloud Center. Izvestiya SFU. Engineering Sciences. 2016; 12(185):92-103. (In Russian) DOI: https://doi.org/10.18522/2311-3103-2016-12-92103

[29] KVM. Available at: https://linux-kvm.org (accessed 16.01.2018).

[30] Song-Woo Sok, Young-Woo Jung, Cheol-Hun Lee. Optimized System Call Latency of ARM Virtual Machines. IOP Conf. Series: Journal of Physics: Conf. Series. 2017. Vol. 787, conference 1, id. 012032. DOI: https://doi.org/10.1088/1742-6596/787/1/012032

[31] Ruby. Available at: https://ruby-lang.org (accessed 16.01.2018).

Submitted 16.01.2018; Revised 10.03.2018; Published 30.03.2018.

СПИСОК ИСПОЛЬЗОВАННЫХ ИСТОЧНИКОВ

[1] Foster I., Kesselman C. (Eds.) The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998. URL: https://dl.acm.org/citation.cfm?id=289914 (дата обращения: 16.01.2018).

[2] Кореньков В.В., Нечаевский А.В., Ососков Г.А., Пряхина Д.И., Трофимов В.В., Ужинский А.В. Моделирование Грид и облачных сервисов как важный этап их разработки // Системы и средства информатики. 2015. Т. 25, № 1. С. 4-19. DOI: https://doi.org/10.14357/08696527150101

[3] Grid technologies for large-scale projects / А. Dolbilov [et al.] // Proceedings of 2015 IEEE Conference Grid, Cloud High Performance Computing in Science (ROLCG). Cluj-Napoca, Romania, 2015. Pp. 1-3. DOI: https://doi.org/10.1109/ROLCG.2015.7367422

[4] Кореньков В.В., Нечаевский А.В., Ососков Г.А., Пряхина Д.И., Трофимов В.В., Ужинский А.В. Синтез процессов моделирования и мониторинга для развития систем хранения и обработки больших массивов данных в физических экспериментах // Компьютерные исследования и моделирование. 2015. Т. 7, № 3. С. 691-698. URL: http://crm.ics.org.ru/uploads/crmissues/crm_2015_3/15742.pdf (дата обращения: 16.01.2018).

[5] Belov S.D., Dmitrienko P.V., Galaktionov V.V., Gromova N.I., Kadochnikov I.S., Korenkov V.V., Kutovskiy N.A., Mitsyn S.V., Mitsyn V.V., Oleynik D.A., Petrosyan A.S., Shabratova G.S., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Uzhinskiy A.V., Valova L., Zhemchugov A.S., Zhiltsov V.E. JINR Participation in the WLCG Project // LIT Scientific Report 2012 - 2013. Dubna: LIT JINR. Pp. 21-25. URL: http://lit.jinr.ru/Reports/SC_report_12-13/p21.pdf (дата обращения: 16.01.2018).

[6] Astakhov N.S., Belov S.D., Dmitrienko P.V., Dolbilov A.G., Gorbunov I.N., Korenkov V.V., Mitsyn V.V., Shmatov S.V., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Zhiltsov V.E. JINR Tier-1 Center // LIT Scientific Report 2012-2013. Dubna: LIT JINR. Pp. 16-20. URL: http://lit.jinr.ru/Reports/SC_report_12-13/p16.pdf (дата обращения: 16.01.2018).

[7] Astakhov N.S., Baginyan A.S., Belov S.D., Dolbilov A.G., Golunov A.O., Gorbunov I.N., Gromova N.I., Kadochnikov I.S., Kashunin I.A., Korenkov V.V., Mitsyn V.V., Pelevanyuk I.S., Shmatov S.V., Strizh T.A., Tikhonenko E.A., Trofimov V.V., Voitishin N.N., Zhiltsov V.E. JINR Tier-1 centre for the CMS experiment at LHC // Physics of Particles and Nuclei Letters. 2016. Vol. 13, issue 5. Pp. 714-717. DOI: https://doi.org/10.1134/S1547477116050046


[8] Berezhnaya A., Dolbilov A., Ilyin V., Korenkov V., Lazin Y., Lyalin I., Mitsyn V., Ryabinkin E., Shmatov S., Strizh T., Tikhonenko E., Tkachenko I., Trofimov V., Velikhov V., Zhiltsov V. LHC Grid Computing in Russia: present and future // Journal of Physics: Conference Series. 2014. Vol. 513, Track 6, id. 062041. DOI: https://doi.org/10.1088/1742-6596/513/6/062041

[9] Астахов Н.С., Белов С.Д., Горбунов И.Н., Дмитриенко П.В., Долбилов А.Г., Жильцов В.Е., Кореньков В.В., Мицын В.В., Стриж Т.А., Тихоненко Е.А., Трофимов В.В., Шматов С.В. Автоматизированная система уровня Tier-1 обработки данных эксперимента CMS // Информационные технологии и вычислительные системы. 2013. № 4. С. 27-36. URL: http://www.jitcs.ru/images/documents/2013-04/27_36.pdf (дата обращения: 16.01.2018).

[10] Filozova I.A., Bashashin M.V., Korenkov V.V., Kuniaev S.V., Musulmanbekov G., Semenov R.N., Shestakova G.V., Strizh T.A., Ustenko P.V., Zaikina T.N. Concept of JINR Corporate Information System // Physics of Particles and Nuclei Letters. 2016. Vol. 5, issue 13. Pp. 625-628. DOI: https://doi.org/10.1134/S15474

[11] OpenStack. URL: https://openstack.org (дата обращения: 16.01.2018).

[12] Маркелов А. Облачная операционная система OpenStack. Часть 1. Введение // Системный администратор. 2015. № 4(149). С. 16-19.

[13] Цветков В.Я., Дешко И.П. Облачный сервис // Образовательные ресурсы и технологии. 2016. № 3(15). С. 88-95. DOI: https://doi.org/2312-5500-2016-3-88-95

[14] Software - OpenNebula. URL: https://opennebula.org (дата обращения: 16.01.2018).

[15] Aiftimiei D.C., Fattibene E., Gargana R., Panella M., Salomoni D. Abstracting application deployment on Cloud infrastructures // IOP Conf. Series: Journal of Physics: Conference Series. 2017. Vol. 898, Track 6: Infrastructures, id. 082053. DOI: https://doi.org/10.1088/1742-6596/898/8/082053

[16] Taylor R.P., Berghaus F., Brasolin F., Cordiero C.J.D., Desmarais R., Field L., Gable I., Giordano D., Girolamo A., Hover J., LeBlanc M., Love P., Paterson M., Sobie R., Zaytsev A. The Evolution of Cloud Computing in ATLAS // Journal of Physics: Conference Series. 2015. Vol. 664, Clouds and Virtualization, id. 022038. DOI: https://doi.org/10.1088/1742-6596/664/2/022038

[17] Bagnasco S., Bernazo D., Lusso S., Masera M., Vallero S. Managing competing elastic Grid and scientific computing applications using OpenNebula // Journal of Physics: Conference Series. 2015. Vol. 664, Clouds and Virtualization, id. 022004. DOI: https://doi.org/10.1088/1742-6596/664/2/022004

[18] Bagnasco S., Vallero S., Zaccolo V. A FairShare Scheduling Service for OpenNebula // Journal of Physics: Conference series. 2017. Vol. 898, Track 7: Middleware, Monitoring and Accounting, id. 092037. DOI: https://doi.org/10.1088/1742-6596/898/9/092037

[19] Кутовский Н.А., Нечаевский А.В., Ососков Г.А., Пряхина Д.И., Трофимов В.В. Моделирование межпроцессорного взаимодействия при выполнении MPI-приложений в облаке // Компьютерные исследования и моделирование. 2017. Т. 9, № 6. С. 955-963. DOI: https://doi.org/10.20537/2076-7633-2017-9-6-955-963

[20] OpenVZ. URL: https://openvz.org (дата обращения: 16.01.2018).

[21] Baranov A.V., Balashov N.A., Kutovskiy N.A., Semenov R.N. JINR cloud infrastructure evolution // Physics of Particles and Nuclei Letters. 2016. Vol. 13, issue 5. Pp. 672 - 675. DOI: https://doi.org/10.1134/S1547477116050071

[22] Баранов А.В., Кореньков В.В., Юрченко В.В., Балашов Н.А., Кутовский Н.А., Семёнов Р.Н., Свистунов С.Я. Подходы к интеграции облачных инфраструктур // Компьютерные исследования и моделирование. 2016. Т. 8, № 3. С. 583-590. URL: https://elibrary.ru/item.asp?id=26323286 (дата обращения: 16.01.2018).

[23] Korenkov V.V., Kutovskiy N.A., Balashov N.A., Baranov A.V., Semenov R.N. JINR cloud infrastructure // Procedia Computer Science. 2015. Vol. 66. Pp. 574-583. DOI: https://doi.org/10.1016/j.procs.2015.11.065

[24] Baranov A.V., Balashov N.A., Kutovskiy N.A., Semenov R.N. Cloud Infrastructure at JINR // Computer Research and Modeling. 2015. Vol. 7, №. 3. Pp. 463-467. URL: http://crm.ics.org.ru/uploads/crmissues/crm_2015_3/157zam.pdf (дата обращения: 16.01.2018).

[25] Использование облачных технологий в ЛИТ ОИЯИ / Н.А. Балашов, А.В. Баранов, Н.А. Кутовский, Р.Н. Семенов // Материалы Всероссийской конференции с международным участием «Информационно-телекоммуникационные технологии и математическое моделирование высокотехнологичных систем», 22-25 апреля 2014 г., М.: РУДН, 2014. С. 168-170.

[26] Облачная инфраструктура ЛИТ ОИЯИ / А.В. Баранов, Н.А. Балашов, Н.А. Кутовский, Р.Н. Семенов // Труды XVIII Международной научной конференции молодых ученых и специалистов к 105-летию Н.Н. Боголюбова (ОМУС-2014), 24 -28 февраля 2014 г., Дубна: ОИЯИ, 2014. С. 190-193.

[27] Cloud Technologies Application at JINR / N. Balashov [et al.] // Proceedings of the 8th international conference «Information Systems GRID Technologies», 30 - 31 May, 2014, Sofia, Bulgaria, 2014. Pp. 32 - 37.

[28] Балашов Н.А., Баранов А.В., Кадочников И.С., Кореньков В.В., Кутовский Н.А., Нечаевский А.В., Пелеванюк И.С. Программный комплекс интеллектуального диспетчирования и адаптивной самоорганизации виртуальных вычислительных ресурсов на базе облачного центра ЛИТ ОИЯИ // Известия ЮФУ. Технические науки. 2016. № 12(185). С. 92-103. DOI: https://doi.org/10.18522/2311-3103-2016-12-92103

[29] KVM. URL: https://linux-kvm.org (дата обращения: 16.01.2018).

[30] Song-Woo Sok, Young-Woo Jung, Cheol-Hun Lee. Optimized System Call Latency of ARM Virtual Machines // IOP Conf. Series: Journal of Physics: Conf. Series. 2017. Vol. 787, conference 1, id. 012032. DOI: https://doi.org/10.1088/1742-6596/787/1/012032

[31] Ruby. URL: https://ruby-lang.org (дата обращения: 16.01.2018).

Поступила 16.01.2018; принята к публикации 10.03.2018; опубликована онлайн 30.03.2018.

Об авторах:

Кореньков Владимир Васильевич, доктор технических наук, профессор, директор Лаборатории информационных технологий, Объединенный институт ядерных исследований (141980, Россия, Московская область, г. Дубна, ул. Жолио-Кюри, д. 6); ORCID: http://orcid.org/0000-0002-2342-7862, korenkov@cv.jinr.ru

Кондратьев Андрей Олегович, инженер-программист, Лаборатория информационных технологий, Объединенный институт ядерных исследований (141980, Россия, Московская область, г. Дубна, ул. Жолио-Кюри, д. 6); ORCID: http://orcid.org/0000-0001-6203-9160, kondratyev@jinr.ru

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
