WO2017196040A1 - Method and device for automatically managing network - Google Patents


Info

Publication number
WO2017196040A1
WO2017196040A1 (application PCT/KR2017/004760)
Authority
WO
WIPO (PCT)
Prior art keywords
management server
network management
load balancing
base station
load
Prior art date
Application number
PCT/KR2017/004760
Other languages
English (en)
Korean (ko)
Inventor
라마사미부파티
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020170056833A (KR102309718B1)
Application filed by Samsung Electronics Co., Ltd.
Priority to US16/097,056 (published as US10965740B2)
Publication of WO2017196040A1

Definitions

  • the present invention relates to a method and apparatus for managing a network in a communication system.
  • a network management server may be added to the set of network elements.
  • user intervention is required to assign a network management server to all network elements.
  • a network management server may need to be reconfigured whenever the cell capacity of a network element increases. The user must manually perform the new network management server configuration, in particular moving data from one network management server to another, which requires data migration and effort.
  • suppose the total number of managed cells is 30,000. In this case, if the number of cells triples to 90,000, the capacity increases by 300%, requiring two additional management servers. Current technology requires user intervention to scale up as capacity increases.
  • the present invention discloses a method in which a network system automatically performs setting of a network management server without user intervention.
  • a method of managing a network management server by a load balancer is disclosed, comprising: receiving information that a new base station has been added to a network; transmitting information on the new base station to at least one network management server; receiving processing time information for the new base station from the at least one network management server; and determining a network management server to which the new base station is allocated based on the processing time information. The method may further comprise: receiving load balancing related information from the at least one network management server; and determining whether to load balance the at least one network management server based on the load balancing related information.
  • a load balancer for managing a network management server is disclosed, comprising: a transceiver for transmitting and receiving signals with at least one network management server; a storage unit for storing data; and a controller configured to control the transceiver to receive information that a new base station has been added to the network, transmit information about the new base station to the at least one network management server, and receive processing time information for the new base station from the at least one network management server, and to determine a network management server to which the new base station is allocated based on the processing time information. The controller may further control the transceiver to receive load balancing related information from the at least one network management server, and further determine whether to load balance the at least one network management server based on the load balancing related information.
  • according to the network management server setting method, it is possible to automatically and efficiently manage the network management server without user intervention.
  • FIG. 1 is a diagram illustrating a network structure including a current network management server (hereinafter referred to as an MS).
  • FIG. 2 is a diagram illustrating a network structure including an MS according to the present invention.
  • FIG. 3 is a diagram illustrating a network for selecting an optimal network management server according to the present invention.
  • FIG. 4 is a flowchart illustrating a method for selecting an optimal network management server according to the present invention.
  • FIG. 5 is a flowchart illustrating a method of calculating operational efficiency.
  • FIG. 6 is a diagram illustrating a case where the load is adjusted when the load of the system CPU and / or memory exceeds a preset threshold.
  • FIG. 7 is a diagram illustrating an example of load balancing.
  • FIGS. 8A and 8B show specific examples of load balancing.
  • FIG. 9 is a block diagram showing the configuration of a base station.
  • FIG. 10 is a block diagram showing the configuration of a network management server (or server node) or a load balancer.
  • each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, may be implemented by computer program instructions. Since these computer program instructions may be loaded onto a processor of a general purpose computer, special purpose computer, or other programmable data processing equipment, the instructions executed through the processor of the computer or other programmable data processing equipment create means for performing the functions described in the flowchart block(s). These computer program instructions may also be stored in a computer-usable or computer-readable memory that can direct a computer or other programmable data processing equipment to function in a particular manner, such that the instructions stored in that memory produce an article of manufacture containing instruction means that perform the functions described in the flowchart block(s).
  • the computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are performed on the computer or other programmable equipment to produce a computer-implemented process; the instructions that execute on the computer or other programmable equipment thus provide steps for performing the functions described in the flowchart block(s).
  • each block may represent a module, segment, or portion of code that includes one or more executable instructions for executing the specified logical function(s).
  • the functions noted in the blocks may occur out of order.
  • the two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the corresponding function.
  • the term 'unit' used in the present embodiment refers to software or a hardware component such as an FPGA or an ASIC, and a 'unit' performs certain roles.
  • a 'unit' is not limited to software or hardware.
  • a 'unit' may be configured to reside in an addressable storage medium or configured to run on one or more processors.
  • thus, by way of example, a 'unit' includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided within the components and 'units' may be combined into a smaller number of components and 'units' or further separated into additional components and 'units'.
  • in addition, the components and 'units' may be implemented to run one or more CPUs in a device or a secure multimedia card.
  • a new virtual management server can be automatically added or removed in the background based on the operational efficiency key performance indicator (KPI) of the primary managed server.
  • the virtual management server performs network monitoring without moving network elements to other servers or shrinking or expanding base stations, and can replace the primary management server with which users currently interact.
  • the cell capacity of the network can be increased without shrinking, expanding, or moving base stations to other management servers. No user intervention is required to manage the virtual management server, and the system automatically manages the virtual management server based on self-learning KPI-based operational efficiency.
  • FIG. 1 is a diagram illustrating a network structure including a current network management server (hereinafter referred to as an MS).
  • an element management system may include a client 160, a master controller (MC) 150, and an MS 120, 130, and 140.
  • the client provides various interface functions (eg, a graphical user interface (GUI)) for network management and operation
  • the MC is a server for integrated management of the MS
  • the MS is a server for providing a base station interface and managing a base station.
  • when the new base station 110 is added to the network (s100), the user must directly select an MS and add the necessary settings (s120). Likewise, when the load of an MS becomes high, the user must manually move a specific base station from the MS currently managing it to another MS (s120).
  • FIG. 2 is a diagram illustrating a network structure including an MS according to the present invention.
  • the EMS may include a client, an MC, and an MS.
  • the MS may not exist as a server managing a plurality of base stations as in the case of FIG. 1, but may exist as a virtual server 220.
  • the virtual server may be a server or computer capable of performing a service such as the MS of FIG. 1, a personal computer, another system capable of supporting the functions of the MS, or a cloud.
  • the system (which may be a separate load balancer or a network management server) automatically determines the network management server that will handle a newly added base station, or the task arising from increased cell capacity, according to the operational efficiency KPI, i.e., so as to maximize operational efficiency.
  • the user does not need to select a network management server directly to add a base station.
  • the network management server may be a virtual server.
  • the remaining work may be redistributed among the network management servers in a manner of maximizing operational efficiency.
  • the system can add any number of virtual servers at any time without the user changing the base station's connection to a network management server, so the work can be flexibly scaled across management servers or virtual servers that can support the service, and managed according to the operational efficiency KPI. The user does not have to change settings directly to expand or reduce capacity.
  • the load balancer can monitor the network performance statistics data of the server nodes, checking the communication time between the base station and each server node, the locations of the base station and server nodes, and the current server load, and can thereby select the best-placed server node and determine the optimal server node for the base station.
  • the load balancer may store the order of server nodes for the added base station. In one example, if the first server node is already sufficiently loaded, a second server node may be allocated for the added base station. The load balancer maintains this information in order to achieve high operational efficiency.
  • the network comprises a load balancer 300 and a network A 320, a network B 322, and a network C 324, each of which may include a plurality of base stations and network elements.
  • the network also comprises a plurality of manageable server nodes (a first server node 310, a second server node 312, and a third server node 314), each of which may be an element management system (EMS) or a network management system (NMS).
  • All server nodes check the processing time and respond to the load balancer (S330). Thereafter, the load balancer selects an optimal server node based on the response, and maintains a preference list based on the response (S340).
  • FIG. 4 is a flowchart illustrating a method for selecting an optimal network management server according to the present invention.
  • an example in which the method of FIG. 4 is performed in the network of FIG. 3 will be described.
  • when a new base station 330 is added to the network, the load balancer 300 is notified that the new base station has been added (400). The load balancer 300 transmits the information on the new base station to all server nodes in order to check each server node's management task processing time for the base station (that is, the communication time between the base station and the server node) (410). Each server node receives the information, checks the processing time, and transmits it to the load balancer (420).
  • the processing time for each network at each server node may be as follows.
  • for the new base station, the processing time is T for the first server node, 2T for the second server node, and 3T for the third server node.
  • the load balancer selects an optimal server node based on the processing time information, allocates an optimal server node to manage a new base station, and stores the processing time information (430).
  • the server node capable of processing the new base station of the network A fastest is the first server node, and thus assigns the new base station to the first server node.
  • the load balancer stores and maintains the order of the server nodes for each base station and assigns the base station to the next optimal server node based on the processing time value. Specifically, in the case of FIG. 3, a new base station is first assigned to the first server node, and if the load of the first server node is already sufficient, it is assigned to the second server node, which is the next server node. This processing time information will also be used for load balancing.
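  • the selection and preference-list bookkeeping described above can be sketched as follows (a minimal illustration; the class and the names `LoadBalancer`, `assign`, and `next_best` are assumptions for this sketch, not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class LoadBalancer:
    # per-base-station preference list: server-node ids ordered by
    # reported processing time (fastest first)
    preference: dict = field(default_factory=dict)

    def assign(self, base_station: str, processing_times: dict) -> str:
        # rank server nodes by the processing time each one reported (steps 410-420)
        ranked = sorted(processing_times, key=processing_times.get)
        # keep the full order so the next-best node can be used later (step 430)
        self.preference[base_station] = ranked
        return ranked[0]  # optimal (fastest) server node

    def next_best(self, base_station: str, loaded_node: str) -> str:
        # fall back to the next node in the stored order when the
        # currently chosen node is already sufficiently loaded
        ranked = self.preference[base_station]
        return ranked[ranked.index(loaded_node) + 1]

lb = LoadBalancer()
# FIG. 3 example: processing times T, 2T, 3T (here T = 1 time unit)
best = lb.assign("new_bs", {"node1": 1, "node2": 2, "node3": 3})
fallback = lb.next_best("new_bs", "node1")  # used if node1 is fully loaded
```

The stored ranking doubles as the fallback order, so no new processing-time probe is needed when the optimal node fills up.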
  • the total processing time S for the performance statistics data in one interval period may be obtained by adding, over all base stations, the time required for communication with the base station (CT) and the parsing time of the base station data (P). That is, it may be as shown in Equation 1 below:

S = N × (P + CT)

  • with N = 10,000 base stations, S is 10,000 × (P + CT).
  • accordingly, with the communication time to network A equal to T, the total processing time S for network A of the first server node is 10,000P + 10,000T.
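  • reading Equation 1 as S = N × (P + CT), a quick sketch (the function name and the numeric values chosen for P and T are illustrative assumptions):

```python
def total_processing_time(n: int, parse_time: float, comm_time: float) -> float:
    # Equation 1: S = N * (P + CT), summed over N identical base stations
    return n * (parse_time + comm_time)

# network A on the first server node: N = 10,000 base stations, CT = T
P, T = 0.5, 1.0                           # arbitrary time units
S = total_processing_time(10_000, P, T)   # = 10,000*P + 10,000*T
```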
  • initially, the server nodes manage the base stations of each network irrespective of the processing time (for example, each server node manages one third of the base stations of networks A, B, and C).
  • after a base station is added to the management targets of a server node, the server node can calculate its operational efficiency and send the detailed information to the load balancer. If necessary, the server node or the load balancer may decide on load balancing, and the load balancer may perform it.
  • FIG. 5 is a flowchart illustrating a method of calculating operational efficiency.
  • the server node determines a fixed interval (FI) over which the operational efficiency will be evaluated (500).
  • the fixed interval may be determined by the load balancer or the network, or may be based on a predetermined value.
  • the server node obtains the total number N_initial of base stations managed by the server node at the start of the fixed period (505).
  • N_initial is the sum of the number of base stations managed by the server node and N3, the number of base stations carried over from the previous fixed interval because their processing was not completed there.
  • the server node calculates the elapsed time (E) and the remaining time (R) at the current point within the fixed interval (510).
  • here, E + R = FI.
  • the server node calculates the number N1 of base stations already processed in the current fixed interval and the number N2 of base stations not yet processed (515). At this time, N_initial = N1 + N2.
  • while the server node performs base station processing, if the load of the system CPU and/or memory of the server node exceeds a preset threshold, the number of base stations processed simultaneously is reduced by 5% after checking the memory or CPU load. 5% is an exemplary value; the server node can adjust how much the number of simultaneously processed base stations is reduced. This process is repeated until the system is normal, i.e., until the load on the system CPU or memory falls below the preset threshold, after which processing proceeds to the next step. This process can affect operational efficiency.
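  • the reduction loop just described can be sketched as follows (a hedged illustration: `load_readings` stands in for repeatedly sampling the CPU/memory load, which a real server node would obtain from the operating system):

```python
def throttle(concurrent: int, load_readings) -> int:
    """Reduce the number of simultaneously processed base stations by 5%
    whenever a load reading exceeds the threshold, until the system is
    back to normal."""
    THRESHOLD = 0.90   # preset threshold (90% is the example in the text)
    STEP = 0.05        # 5% per reduction (an exemplary, adjustable value)
    for load in load_readings:
        if load <= THRESHOLD:
            break                                   # system is normal again
        concurrent = max(1, int(concurrent * (1 - STEP)))
    return concurrent

# two overloaded readings, then a normal one: 1000 -> 950 -> 902
n = throttle(1000, [0.95, 0.93, 0.80])
```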
  • the server node calculates the operational efficiency (O) to be achieved during the remaining time according to Equation 2 below (520):

O = N / R

  • R may be in units of minutes.
  • N means the number of unprocessed base stations remaining to be monitored by the server node at the current time.
  • the server node then calculates the operational efficiency (G) already achieved during the elapsed time, according to Equation 3 below:

G = N1 / E

  • E may be in minutes. This is the number of base stations already processed divided by the elapsed time, which can be regarded as the efficiency during the elapsed time.
  • load balancing related information including FI, E, R, N, N1, N2, N3, O, and G is calculated at the server node and passed to the load balancer, or only some of the information is passed and the rest is calculated at the load balancer.
  • the server node or load balancer determines whether G is greater than O (530). If the achieved operational efficiency (G) is greater than the operational efficiency (O) to be achieved, the server node or load balancer does not need to perform load balancing. If not, load balancing should be performed as follows.
  • the server node or the load balancer calculates the number of base stations N4 necessary for load balancing as shown in Equation 4 below (535):

N4 = N_initial − (N1 + G × R)

  • N_initial is the total number of base stations to be processed during this fixed interval.
  • N1 + G × R is the number of base stations already processed plus the number of base stations that can be processed during the remaining time.
  • N4 thus means the number of base stations that must be processed by another server node for load balancing.
  • the server node or load balancer then calculates the operational degradation rate (D) according to Equation 5 below:

D = (N4 / N_initial) × 100

  • the operational degradation rate D is calculated in %. Based on the D value, the server node or load balancer determines whether to load balance (544). In addition, the server node or the load balancer may determine whether to load balance by considering the D values of a plurality of fixed intervals. This has the effect of preventing unwanted load balancing: when the D values of several intervals are considered, load balancing will not be performed if only one interval is loaded while the other intervals are normal. If the load appears continuously within the fixed intervals, the load balancer performs load balancing (550).
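  • Equations 2 through 5 and the multi-interval check can be combined into one sketch (parameter names are illustrative; the persistence test over `d_history` is one possible reading of "considering the D values of a plurality of fixed intervals"):

```python
def load_balancing_decision(fi, e, n_initial, n1, n_remaining,
                            d_threshold, d_history):
    r = fi - e                        # remaining time, since E + R = FI
    o = n_remaining / r               # Eq. 2: efficiency still required
    g = n1 / e                        # Eq. 3: efficiency achieved so far
    if g >= o:
        return 0.0, False             # on track: no load balancing needed
    n4 = n_initial - (n1 + g * r)     # Eq. 4: base stations to offload
    d = n4 / n_initial * 100          # Eq. 5: operational degradation (%)
    # only balance when degradation also persisted in earlier intervals,
    # preventing a reaction to a single transient spike
    persistent = all(prev > d_threshold for prev in d_history)
    return n4, d > d_threshold and persistent

# 15-minute interval, 10 min elapsed, 5,000 of 10,000 stations processed
n4, balance = load_balancing_decision(15, 10, 10_000, 5_000, 5_000, 15, [20, 18])
# n4 = 10,000 - (5,000 + 500*5) = 2,500; D = 25% > 15%, so balance
```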
  • when D is brought down by performing load balancing, the server node observes the next fixed interval (FI) and updates the number of base stations not processed in the previous fixed interval to N2.
  • the server node transmits to the load balancer the information of N4 base stations (the number of base stations to be distributed), taken from the base stations in its list that are not being monitored.
  • after load balancing is performed, the value of N1 + N2 is limited to N_initial minus N4 (in particular, this has the effect that the N4 base stations are removed from the server node's list).
  • the server node then resets the load balancing list of size N4 to 0.
  • the above steps are repeated for each section of the operation.
  • the set section may be a section existing within the fixed section or may be any set value from 1 to S.
  • FIG. 6 is a diagram illustrating a case where the load is adjusted when the load of the system CPU and / or memory exceeds a preset threshold.
  • the preset threshold is 90% (this is only an example); when the system CPU and/or memory load is greater than 90%, the system adjusts the load to maintain the balance and stability of the system.
  • the server node may request the load balancer to load balance.
  • FIG. 7 is a diagram illustrating an example of load balancing.
  • the fixed interval FI is t1−t0 (700), and all data of N eNBs must be processed in that interval. If, in fact, the N eNBs were not all processed (i.e., operational degradation was identified), load balancing is performed (710); at this point, load balancing is performed only if the operational degradation is higher than the set threshold. Since the system load is adjusted in the t2−t3 interval and the t3−t4 interval, the data of the N eNBs can be processed within the fixed interval.
  • the fixed period FI is 15 minutes
  • the number N of base stations to be processed per fixed period is 10,000.
  • the N_initial of fixed interval 1 (800) becomes 10,000.
  • the number N1 of the processed base stations increases, the number N2 of the unprocessed base stations decreases, and N1 + N2 becomes 10,000.
  • the fixed period ends; at this time, the number N2 (812) of base stations whose processing is not completed is 556.
  • N3 of the next fixed section (the number of base stations of the previous section, which has not been processed in the previous fixed section and has been delayed to the current fixed section) is 556.
  • N_initial becomes 10,000 + 556 = 10,556. Thereafter, every minute, the number N1 of base stations whose data has been processed increases, the number N2 of base stations whose processing is not completed decreases, and N1 + N2 remains 10,556.
  • the server node or load balancer then calculates, according to FIG. 5, the operational efficiency O to be achieved, the operational efficiency G already achieved, the number of base stations N4 required for load balancing, and finally the operational degradation rate D. Assuming the threshold of D is set to 15%, then at 10:25 (860), when the operational degradation rate D becomes greater than 15% (862), load balancing is performed.
  • the load is distributed for 1,597 base stations (864), and the N1 value (872) then becomes the limit for the number of base stations to which the load was distributed. The operational degradation rate D then falls below 15% (874). That is, the system load is normal.
  • the system is overloaded after 10:01, but by checking the operational degradation rate, avoiding unwanted load balancing, and performing load balancing based on whether the degradation rate exceeds the threshold, the system determines the optimal time to balance.
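  • as a sanity check on the figures in FIGS. 8A and 8B, Equation 5 with N_initial = 10,556 and N4 = 1,597 indeed just crosses the 15% threshold (a sketch; the variable names are illustrative):

```python
n_initial = 10_556        # 10,000 base stations plus the 556 carried over (N3)
n4 = 1_597                # base stations selected for redistribution at 10:25
d = n4 / n_initial * 100  # Eq. 5: operational degradation rate in %
# d is about 15.13%, just above the 15% threshold, so load balancing triggers
```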
  • FIG. 9 is a block diagram showing the configuration of a base station.
  • the base station 900 may include a transceiver 910 and a controller 920.
  • the transceiver 910 transmits and receives data and control information between the base station and the network management server (or server node) according to an embodiment of the present invention, and the controller 920 controls the operation.
  • FIG. 10 is a block diagram showing the configuration of a network management server (or server node) or a load balancer.
  • the network management server (or server node) or the load balancer 1000 may include a transceiver 1010, a controller 1020, and a storage 1030.
  • the transceiver 1010 of the network management server transmits and receives data and control information with the base station, and the controller 1020 controls the operation.
  • the storage unit 1030 may store data for the base station.
  • the controller 1020 may calculate the processing time for each base station (or network) and perform calculations related to load balancing. Specifically, it may perform the procedure disclosed in FIG. 5.
  • the transceiver 1010 may transmit the processing time information to the load balancer, and may notify the load balancer of the calculated values related to load balancing.
  • the storage unit 1030 may store the data to be transmitted and received.
  • the controller 1020 of the load balancer may control the transceiver 1010 to transmit information about the new base station to the network management server when a new base station is added to the network.
  • the optimal network management server for the base station can be determined based on the communication time between the base station and the network management server, the location of the network management server, the load of the current network management apparatus, and the like.
  • it is possible to determine whether to perform load balancing by receiving values related to operational efficiency from the network management server.
  • the controller may directly perform the calculation according to the embodiment of FIG. 5, or may receive parameters from the network management server to determine whether to load balance.
  • the transceiver 1010 may receive the information from the network management server, and notify the network management server of load balancing and the number of base stations to be distributed.
  • the storage unit 1030 may store the information and parameters to be transmitted and received.
  • no user intervention is required when a base station is added or the capacity of a network element is increased; the system handles this automatically, preventing data loss.
  • no user intervention is required, which reduces operating costs and allows self-resilience to make efficient use of resources.

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to a method by which a network system automatically performs configuration of a network management server without user intervention, and to a method by which a load balancing device manages the network management server, comprising: receiving information indicating that a new base station has been added to a network; transmitting the information on the new base station to at least one network management server; receiving processing time information on the new base station from the at least one network management server; and determining, based on the processing time information, a network management server to which the new base station is to be allocated.
PCT/KR2017/004760 2016-05-09 2017-05-08 Method and device for automatically managing network WO2017196040A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/097,056 US10965740B2 (en) 2016-05-09 2017-05-08 Method and device for automatically managing network

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662333419P 2016-05-09 2016-05-09
US62/333,419 2016-05-09
KR10-2017-0056833 2017-05-04
KR1020170056833A KR102309718B1 (ko) 2016-05-09 2017-05-04 네트워크를 자동적으로 관리하는 방법 및 장치

Publications (1)

Publication Number Publication Date
WO2017196040A1 true WO2017196040A1 (fr) 2017-11-16

Family

ID=60267713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/004760 WO2017196040A1 (fr) 2016-05-09 2017-05-08 Procédé et dispositif de gestion automatique de réseau

Country Status (1)

Country Link
WO (1) WO2017196040A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110311813A (zh) * 2019-06-25 2019-10-08 贵阳海信网络科技有限公司 Method and device for integrated rail transit network management

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060189317A1 (en) * 2005-02-18 2006-08-24 Oki Electric Industry Co., Ltd. Mobile communication system and wireless base station device
US20090323590A1 (en) * 2008-06-25 2009-12-31 Kyocera Corporation Wireless communication system, base station, management server, and wireless communication method
US20130150056A1 (en) * 2011-12-12 2013-06-13 Samsung Electronics Co., Ltd. Mobile communication system and base station identifier management method thereof
US20140185588A1 (en) * 2011-09-06 2014-07-03 Huawei Technologies Co., Ltd. Method for configuring a neighboring base station and micro base station
US20140301347A1 (en) * 2011-12-27 2014-10-09 Panasonic Corporation Server device, base station device, and identification number establishment method


Similar Documents

Publication Publication Date Title
US10965740B2 (en) Method and device for automatically managing network
WO2013073876A1 (fr) Method and device for distributing idle user equipment in a mobile communication system based on multiple carriers
EP3016316A1 (fr) Network control method and apparatus
CN103117947B (zh) Load sharing method and device
JP2015149578A (ja) Operation management device
WO2014133357A1 (fr) Method and apparatus for monitoring the status of an internet connection in a wireless communication system
KR20140106235A (ko) OpenFlow switch and packet processing method thereof
US9325169B2 (en) Telecommunications equipment, power supply system, and power supply implementation method
WO2020231078A1 (fr) Improvements in and relating to data analytics in a telecommunication network
US20180139655A1 (en) Management device and management method thereof for cloud radio access network and user equipment
CN105554099A (zh) Method and device for load balancing of collection servers
CN112671813B (zh) Server determination method, apparatus, device, and storage medium
WO2015064850A1 (fr) Method and apparatus for buffer management for universal serial bus communication in a wireless environment
WO2017196040A1 (fr) Method and device for automatically managing network
WO2013147441A1 (fr) Scheduling apparatus and method for load balancing when executing a plurality of transcoding operations
CN103595736A (zh) Access request processing method and device in a video surveillance system
WO2014017703A1 (fr) System and method for wireless resource allocation
CN105450697A (zh) Multi-device same-screen sharing method, device, and server
JP2012235400A (ja) Switching device and aging method for a switching device
WO2019066101A1 (fr) Node distribution method and management server performing the same
WO2013085089A1 (fr) Method for using communication network resources in an M2M cloud environment and corresponding system
JP2020022053A (ja) Communication control device
CN111294406B (zh) Hybrid mapping method for software-defined network controllers
WO2021086121A1 (fr) Apparatus and method for controlling a terminal
WO2019035499A1 (fr) System and method for processing a hybrid SaaS service on a user-demand-based hybrid SaaS service platform using auto-scaling

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17796340

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17796340

Country of ref document: EP

Kind code of ref document: A1