CN116737391A - Edge computing cooperation method based on mixing strategy in federal mode - Google Patents

Edge computing cooperation method based on mixing strategy in federal mode

Info

Publication number
CN116737391A
CN116737391A
Authority
CN
China
Prior art keywords
network
agent
edge
federal
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310827271.3A
Other languages
Chinese (zh)
Inventor
叶昌奥
宋晓勤
雷磊
张莉涓
蔡圣所
朱晓浪
牛凯华
李慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202310827271.3A priority Critical patent/CN116737391A/en
Publication of CN116737391A publication Critical patent/CN116737391A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

This research focuses on joint collaboration among unmanned aerial vehicles (UAVs) in a multi-UAV-assisted three-layer MEC (mobile edge computing) system. It addresses two problems faced by current time-sensitive computing tasks during offloading: insufficient freshness of information, and the difficulty of sharing experience among UAVs caused by data heterogeneity and device heterogeneity. An edge computing collaboration method based on a hybrid strategy in federated mode is proposed. Aiming to minimize the overall average age of information (AoI) of the system, the method integrates state information such as the local observations of data sources, edge buffer states, and edge offloading conditions under a multi-agent DRL framework, and learns trajectory planning, data scheduling, and resource allocation from heterogeneous inputs with a hybrid strategy. The system average AoI is optimized by steering each UAV toward areas with more data packets and larger AoI and by taking the corresponding data processing and offloading actions. Meanwhile, a novel federated learning algorithm is introduced into multi-UAV cooperation: non-uniform device selection and parameter aggregation are performed at a fixed period, so that devices with higher contribution participate more in parameter aggregation while the number of communication rounds is reduced. Simulation results show that the method excels at improving information freshness during edge computing task offloading and at overcoming the effects of system heterogeneity.

Description

Edge computing cooperation method based on mixing strategy in federal mode
Technical Field
The invention belongs to the field of edge computing, and in particular relates to a method for reducing the age of information of time-sensitive data when tasks are offloaded in a multi-UAV-assisted edge computing system.
Background
Thanks to the development of mobile communication and the Internet of Things, user equipment has begun to support increasingly diverse applications and services, such as augmented reality, face recognition, online gaming, and autonomous driving. Such computation-intensive and time-sensitive services place high demands on the computing power of user equipment, and handling them is a significant challenge for resource-constrained devices. The advent of mobile cloud computing provided a solution: with its strong usability, scalability, and adaptability, it allows user equipment to alleviate its own resource limitations by offloading tasks to cloud servers with abundant computing resources, thereby enhancing its effective computing capability. However, mobile cloud computing also has problems. First, the response time of a centralized cloud architecture is long, which degrades the quality of experience of end users; second, the relatively long communication distance between users and cloud servers introduces high data latency and cannot meet the requirements of latency-sensitive real-time applications. To solve these problems, computing resources originally located in the cloud must be moved further toward edge nodes closer to the user equipment, so that devices requiring low latency and mobility support receive better service; thus mobile edge computing (MEC) emerged. As a new computing paradigm, MEC features low latency, high bandwidth, location awareness, security, and privacy protection, giving it an advantage across a wide range of applications and services.
Especially when serving highly time-sensitive services, edge nodes composed of base stations and servers can allocate computing, communication, and data storage resources to nearby users, reducing the time and cost of data transmission and improving system performance.
Currently, there has been extensive research on task offloading and resource allocation in MEC systems. However, most work considers only systems with fixed-location MEC servers, whereas flexible MEC designs can perform better in highly dynamic environments. In recent years, unmanned aerial vehicles (UAVs) have made tremendous progress in assisting wireless communications. Because of their flexibility, UAVs can serve as aerial platforms hosting MEC servers, improving system performance and suiting many practical scenarios. Therefore, computation offloading, resource allocation, and trajectory planning for UAV-assisted MEC systems have important research value and application prospects. At the same time, UAV-assisted MEC systems face the challenges of complex scene dynamics and difficult long-term optimization, and several studies address these problems with deep reinforcement learning (DRL). Wang et al. studied a cluster with multiple wireless users, multiple UAVs, and MEC servers, and derived an optimal offloading strategy based on a model-free DRL algorithm, with the goal of minimizing the total system cost. Zhu et al. studied minimizing the average task response time in a multi-UAV-assisted edge computing scenario, comprehensively considering UAV task allocation, bandwidth allocation, and energy constraints, and optimized the offloading strategy with a multi-agent reinforcement learning (MARL) algorithm.
Jing et al consider a single unmanned aerial vehicle assisted MEC system, regarding the age of information (AoI, age of information) as a quality of service (QoS, quality of service) indicator when the unmanned aerial vehicle serves a user, and optimizing the offloading scheme and the flight trajectory of the unmanned aerial vehicle based on a dual-depth Q network, improving QoS and task freshness.
Although DRL algorithms offer high efficiency and strong robustness, most of them are centralized, which raises privacy and security concerns over user data and, in large-scale deployments, further increases the communication load of the whole system. In recent years, federated learning (FL), a new technology drawing on artificial intelligence and blockchain ideas, has allowed multiple clients to coordinate training in a decentralized manner without sharing local data, sharing experience and further improving the performance of artificial intelligence models. Nie et al. proposed a centralized MARL algorithm for the power minimization problem of a multi-UAV MEC system and introduced FL, enabling user equipment to offload computation-sensitive tasks to the optimal UAV in an energy-efficient manner by jointly optimizing resource allocation, user association, and power control. Zhu et al. proposed a policy-based MARL framework for age-sensitive multi-UAV-assisted MEC systems, introducing an edge FL pattern into multi-agent collaboration and developing corresponding online algorithms.
The invention considers the influence of data heterogeneity and device heterogeneity in an AoI-sensitive MEC system assisted by multiple UAVs. While satisfying the demand of user equipment for highly dynamic tasks, it can effectively reduce the average and peak information age of the system and improve the utilization of data packets in the system.
Disclosure of Invention
The invention aims to: for scenarios in a multi-UAV-assisted MEC system where the information freshness of time-sensitive computing tasks or data is insufficient during offloading, provide an edge computing collaboration method based on a hybrid strategy in federated mode, which can effectively cope with system heterogeneity and reduce the average information age of the whole system. To achieve this objective, the invention adopts the following steps:
step 1: constructing an edge federal multi-agent actor-critic framework;
step 2: deep learning neural network for acquiring movement actions
Step 3: introducing a novel federal learning algorithm in multi-agent deep reinforcement learning to enable the multi-agent to share experience, cooperatively process and unload tasks;
step 4: training is performed in the framework until the mean information age of the system is kept at a low level and the network model reaches convergence.
The specific method for constructing the edge federated multi-agent actor-critic framework is as follows:
A double actor-critic network is constructed: the target actor-critic network has the same structure as the original network and predicts future actions based on the state of the next time slot, and the parameters of the target network are updated from the original network at a fixed period. The agents interact with the environment to obtain experience data, which is cached; each agent replays the cached experience to update its own network parameters. Meanwhile, to improve system performance and stability during training and to overcome the influence of system heterogeneity, a novel federated learning algorithm is adopted to aggregate the agents' parameters: each agent periodically uploads information such as network gradients and weights to the cloud center, which computes a probability distribution over the devices from this information, selects the parameters of the more important devices, aggregates them into a global model, and distributes the global model to all agents to continue training.
The specific method for constructing the deep learning neural network for the actor-critic network is as follows:
For the edge actor network, a multiple-input multiple-output neural network is established for each device: a convolutional neural network with average pooling acquires the movement action by integrating the local observation of the data source, the edge buffer state, and the offloading channel state, while a multi-layer perceptron (MLP) performs edge and offloading scheduling. The critic network on each edge agent approximates the action-value function, taking the current integrated state and actions as inputs. In particular, the local observation of a data source is formatted as an observation map within the observation radius; the observation map can be regarded as a 2-channel image whose third dimension holds the aggregate size and the information age of the data packets observed by the edge device. A convolutional neural network is employed to extract regions with larger data packets and higher AoI. The observation map is then projected by mean pooling onto a movement map to determine the trajectory action. The other inputs and the buffer state are formatted as lists and scalars representing the movement radius of the agent, and the data scheduling vector is output by the MLP network.
A novel federated learning algorithm is introduced into multi-agent deep reinforcement learning so that the agents can share experience and cooperatively process and offload tasks; the specific method is as follows:
Parameters are aggregated at a fixed period. During the local update of a round that requires aggregation, each agent does not directly minimize its loss function F_k(w_t); instead, a redundancy term is added to the loss function, and the loss function with the redundancy term is minimized. After all agents complete their local updates, they upload their network gradients and network weights to the cloud center. The cloud center computes an importance value for each agent from its uploaded gradient, the global gradient, and a hyperparameter ψ; each parameter aggregation tends to select devices with high importance values. The cloud center therefore computes the optimal selection probability distribution over all devices and samples from it K times to obtain a set C_t of K devices, where K is the number of devices participating in parameter aggregation. The selected parameters are aggregated to obtain the global model w_{t+1}, which the cloud center then sends to the devices in C_t for network update.
The edge computing collaboration method based on a hybrid strategy in federated mode was implemented in PyCharm, and the simulation results verify the superiority of the scheme. First, the scheme effectively counters the influence of data heterogeneity and device heterogeneity, meeting the requirements of real scenarios; second, it reduces the average and peak information age of the system and improves the system's efficiency in processing data packets. Comparison with other schemes shows that the scheme performs excellently on all metrics.
Drawings
Fig. 1 is a schematic view of the multi-UAV-assisted three-layer MEC scenario of the present invention;
FIG. 2 is the edge federated multi-agent actor-critic framework diagram of the edge computing collaboration method based on a hybrid strategy in federated mode of the present invention;
FIG. 3 is a schematic diagram of the system average information age of the edge computing collaboration method based on a hybrid strategy in federated mode of the present invention;
FIG. 4 is a schematic diagram of the system peak information age of the edge computing collaboration method based on a hybrid strategy in federated mode of the present invention;
FIG. 5 is a schematic diagram of the data received by the system under the edge computing collaboration method based on a hybrid strategy in federated mode of the present invention;
Fig. 6 is a schematic view of the UAV flight trajectories under the edge computing collaboration method based on a hybrid strategy in federated mode of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples.
The specific method for constructing the edge federated multi-agent actor-critic framework shown in Fig. 2 is as follows:
A double actor-critic network is constructed: the target actor-critic network has the same structure as the original network and predicts future actions based on the state of the next time slot, and the parameters of the target network are updated from the original network at a fixed period. The agents interact with the environment to obtain experience data, which is cached; each agent replays the cached experience to update its own network parameters. Meanwhile, to improve system performance and stability during training and to overcome the influence of system heterogeneity, a novel federated learning algorithm is adopted to aggregate the agents' parameters: each agent periodically uploads information such as network gradients and weights to the cloud center, which computes a probability distribution over the devices from this information, selects the parameters of the more important devices, aggregates them into a global model, and distributes the global model to all agents to continue training.
The specific method for constructing the deep learning neural network for the actor-critic network is as follows:
For the edge actor network, a multiple-input multiple-output neural network is established for each device: a convolutional neural network with average pooling acquires the movement action by integrating the local observation of the data source, the edge buffer state, and the offloading channel state, while a multi-layer perceptron (MLP) performs edge and offloading scheduling. The critic network on each edge agent approximates the action-value function, taking the current integrated state and actions as inputs. In particular, the local observation of a data source is formatted as an observation map within the observation radius; the observation map can be regarded as a 2-channel image whose third dimension holds the aggregate size and the information age of the data packets observed by the edge device. A convolutional neural network is employed to extract regions with larger data packets and higher AoI. The observation map is then projected by mean pooling onto a movement map to determine the trajectory action. The other inputs and the buffer state are formatted as lists and scalars representing the movement radius of the agent, and the data scheduling vector is output by the MLP network.
A novel federated learning algorithm is introduced into multi-agent deep reinforcement learning so that the agents can share experience and cooperatively process and offload tasks; the specific method is as follows:
Parameters are aggregated at a fixed period. During the local update of a round that requires aggregation, each agent does not directly minimize its loss function F_k(w_t); instead, a redundancy term is added to the loss function, and the loss function with the redundancy term is minimized. After all agents complete their local updates, they upload their network gradients and network weights to the cloud center. The cloud center computes an importance value for each agent from its uploaded gradient, the global gradient, and a hyperparameter ψ; each parameter aggregation tends to select devices with high importance values. The cloud center therefore computes the optimal selection probability distribution over all devices and samples from it K times to obtain a set C_t of K devices, where K is the number of devices participating in parameter aggregation. The selected parameters are aggregated according to the aggregation rule to obtain the global model w_{t+1}, which the cloud center then sends to the devices in C_t for network update.
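The selection-and-aggregation step can be sketched as below. Since the patent's importance-value and aggregation formulas appear only in its drawings, the gradient-alignment score scaled by ψ and the uniform averaging used here are illustrative assumptions:

```python
import numpy as np

def select_and_aggregate(weights, grads, global_grad, psi=1.0, k=2, rng=None):
    """One aggregation round: score each agent, build a selection probability
    distribution, sample K devices, and average their parameters into w_{t+1}.

    The score below (psi-scaled alignment of each local gradient with the
    global gradient) is an assumed stand-in for the patent's importance value."""
    rng = np.random.default_rng() if rng is None else rng
    values = np.array([psi * float(np.dot(g, global_grad)) for g in grads])
    # selection probability distribution: normalised non-negative scores
    shifted = values - values.min() + 1e-8
    probs = shifted / shifted.sum()
    # K independent draws according to the distribution give the set C_t
    chosen = rng.choice(len(weights), size=k, p=probs)
    # aggregate the selected parameters into the global model w_{t+1}
    w_next = np.mean([weights[i] for i in chosen], axis=0)
    return w_next, sorted(set(int(c) for c in chosen))
```

The returned global model would then be sent back to the devices in C_t for their network update, as described above.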
What is not described in detail in the present application belongs to the prior art known to those skilled in the art.

Claims (4)

1. An edge computing cooperation method based on a hybrid strategy in federated mode, comprising the following steps:
step 1: constructing an edge federated multi-agent actor-critic framework;
step 2: constructing a deep learning neural network for acquiring movement actions;
step 3: introducing a novel federated learning algorithm into multi-agent deep reinforcement learning to enable the agents to share experience and cooperatively process and offload tasks;
step 4: training within the framework until the average information age of the system is kept at a low level and the network model converges.
2. The method of claim 1, wherein the edge federated multi-agent actor-critic framework is constructed as follows:
A double actor-critic network is constructed: the target actor-critic network has the same structure as the original network and predicts future actions based on the state of the next time slot, and the parameters of the target network are updated from the original network at a fixed period. The agents interact with the environment to obtain experience data, which is cached; each agent replays the cached experience to update its own network parameters. Meanwhile, to improve system performance and stability during training and to overcome the influence of system heterogeneity, a novel federated learning algorithm is adopted to aggregate the agents' parameters: each agent periodically uploads information such as network gradients and weights to the cloud center, which computes a probability distribution over the devices from this information, selects the parameters of the more important devices, aggregates them into a global model, and distributes the global model to all agents to continue training.
3. The method of claim 1, wherein the deep learning neural network for the actor-critic network is constructed as follows:
For the edge actor network, a multiple-input multiple-output neural network is established for each device: a convolutional neural network with average pooling acquires the movement action by integrating the local observation of the data source, the edge buffer state, and the offloading channel state, while a multi-layer perceptron (MLP) performs edge and offloading scheduling. The critic network on each edge agent approximates the action-value function, taking the current integrated state and actions as inputs. In particular, the local observation of a data source is formatted as an observation map within the observation radius; the observation map can be regarded as a 2-channel image whose third dimension holds the aggregate size and the information age of the data packets observed by the edge device. A convolutional neural network is employed to extract regions with larger data packets and higher AoI. The observation map is then projected by mean pooling onto a movement map to determine the trajectory action. The other inputs and the buffer state are formatted as lists and scalars representing the movement radius of the agent, and the data scheduling vector is output by the MLP network.
4. The method of claim 1, wherein the novel federated learning algorithm is introduced into multi-agent deep reinforcement learning, and the agents share experience and cooperatively process and offload tasks as follows:
Parameters are aggregated at a fixed period. During the local update of a round that requires aggregation, each agent does not directly minimize its loss function F_k(w_t); instead, a redundancy term is added to the loss function, and the loss function with the redundancy term is minimized. After all agents complete their local updates, they upload their network gradients and network weights to the cloud center. The cloud center computes an importance value for each agent from its uploaded gradient, the global gradient, and a hyperparameter ψ; each parameter aggregation tends to select devices with high importance values. The cloud center therefore computes the optimal selection probability distribution over all devices and samples from it K times to obtain a set C_t of K devices, where K is the number of devices participating in parameter aggregation. The selected parameters are aggregated according to the aggregation rule to obtain the global model w_{t+1}, which the cloud center then sends to the devices in C_t for network update.
CN202310827271.3A 2023-07-06 2023-07-06 Edge computing cooperation method based on mixing strategy in federal mode Pending CN116737391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310827271.3A CN116737391A (en) 2023-07-06 2023-07-06 Edge computing cooperation method based on mixing strategy in federal mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310827271.3A CN116737391A (en) 2023-07-06 2023-07-06 Edge computing cooperation method based on mixing strategy in federal mode

Publications (1)

Publication Number Publication Date
CN116737391A true CN116737391A (en) 2023-09-12

Family

ID=87916899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310827271.3A Pending CN116737391A (en) 2023-07-06 2023-07-06 Edge computing cooperation method based on mixing strategy in federal mode

Country Status (1)

Country Link
CN (1) CN116737391A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117610644A (en) * 2024-01-19 2024-02-27 南京邮电大学 Federal learning optimization method based on block chain
CN117610644B (en) * 2024-01-19 2024-04-16 南京邮电大学 Federal learning optimization method based on block chain

Similar Documents

Publication Publication Date Title
CN112351503B (en) Task prediction-based multi-unmanned aerial vehicle auxiliary edge computing resource allocation method
Cui et al. Latency and energy optimization for MEC enhanced SAT-IoT networks
CN112118287B (en) Network resource optimization scheduling decision method based on alternative direction multiplier algorithm and mobile edge calculation
CN111757361B (en) Task unloading method based on unmanned aerial vehicle assistance in fog network
CN114051254B (en) Green cloud edge collaborative computing unloading method based on star-ground fusion network
CN113254188B (en) Scheduling optimization method and device, electronic equipment and storage medium
CN113114738B (en) SDN-based optimization method for internet of vehicles task unloading
CN114205353B (en) Calculation unloading method based on hybrid action space reinforcement learning algorithm
CN116737391A (en) Edge computing cooperation method based on mixing strategy in federal mode
Du et al. Maddpg-based joint service placement and task offloading in MEC empowered air-ground integrated networks
Zheng et al. Digital twin empowered heterogeneous network selection in vehicular networks with knowledge transfer
CN115209426A (en) Dynamic deployment method of digital twin servers in edge internet of vehicles
Parvaresh et al. A continuous actor–critic deep Q-learning-enabled deployment of UAV base stations: Toward 6G small cells in the skies of smart cities
CN114980126A (en) Method for realizing unmanned aerial vehicle relay communication system based on depth certainty strategy gradient algorithm
Huda et al. Deep reinforcement learning-based computation offloading in uav swarm-enabled edge computing for surveillance applications
CN114626298A (en) State updating method for efficient caching and task unloading in unmanned aerial vehicle-assisted Internet of vehicles
CN112911618B (en) Unmanned aerial vehicle server task unloading scheduling method based on resource exit scene
Yu et al. UAV-assisted cooperative offloading energy efficiency system for mobile edge computing
CN116546559B (en) Distributed multi-target space-ground combined track planning and unloading scheduling method and system
Wang et al. Joint optimization of resource allocation and computation offloading based on game coalition in C-V2X
An et al. Air-ground integrated mobile edge computing in vehicular visual sensor networks
CN116723548A (en) Unmanned aerial vehicle auxiliary calculation unloading method based on deep reinforcement learning
CN114980205A (en) QoE (quality of experience) maximization method and device for multi-antenna unmanned aerial vehicle video transmission system
Zhang et al. On-Device Intelligence for 5G RAN: Knowledge Transfer and Federated Learning Enabled UE-Centric Traffic Steering
CN116017472B (en) Unmanned aerial vehicle track planning and resource allocation method for emergency network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination