CN115086187A - Power communication channel planning method and device based on reinforcement learning and storage medium - Google Patents


Info

Publication number
CN115086187A
CN115086187A
Authority
CN
China
Prior art keywords
communication channel
reinforcement learning
maximum
planning
power communication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210918026.9A
Other languages
Chinese (zh)
Other versions
CN115086187B (en)
Inventor
李溢杰
梁文娟
张正峰
卢建刚
梁宇图
邓晓智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Electric Power Dispatch Control Center of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Electric Power Dispatch Control Center of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Electric Power Dispatch Control Center of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202210918026.9A priority Critical patent/CN115086187B/en
Publication of CN115086187A publication Critical patent/CN115086187A/en
Application granted granted Critical
Publication of CN115086187B publication Critical patent/CN115086187B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design
    • H04L41/142Network analysis or design using statistical or mathematical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047Optimisation of routes or paths, e.g. travelling salesman problem
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06Electricity, gas or water supply
    • G06Q50/40
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00Routing or path finding of packets in data switching networks
    • H04L45/02Topology update or discovery
    • H04L45/08Learning-based routing, e.g. using neural networks or artificial intelligence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a power communication channel planning method and device based on reinforcement learning, and a storage medium. The method comprises the following steps: acquiring parameters of a starting station, an ending station and a communication channel; and inputting the starting station, the ending station and the communication channel parameters into a communication channel prediction model based on deep reinforcement learning, which outputs an optimal communication channel. The invention improves the efficiency of planning power communication channels that carry stability control services.

Description

Power communication channel planning method and device based on reinforcement learning and storage medium
Technical Field
The invention relates to the technical field of power communication channel planning, in particular to a power communication channel planning method and device based on reinforcement learning and a storage medium.
Background
At present, power communication dispatchers plan the power communication network routing circuits that carry stability control services manually. When the network is large, manual route planning is time-consuming because of the network's complexity; moreover, once the number of routing nodes exceeds a few dozen, it becomes impossible to enumerate all candidate paths by hand, so the optimal path cannot be selected.
Disclosure of Invention
The invention provides a reinforcement-learning-based power communication channel planning method, device and storage medium, which improve the efficiency of planning power communication channels that carry stability control services.
An embodiment of the invention provides a power communication channel planning method based on reinforcement learning, which comprises the following steps:
acquiring parameters of a starting station, an ending station and a communication channel;
and inputting the parameters of the starting station, the ending station and the communication channel into a communication channel prediction model based on deep reinforcement learning, and outputting an optimal communication channel.
Further, the communication channel parameters include the maximum number of channels, port type, bandwidth, network type, maximum number of circuits per transmission segment, maximum number of network elements, maximum length in kilometers, routing mode, whether SNCP is configured, number of reserved fiber cores, and maximum attenuation.
Further, the communication channel prediction model based on deep reinforcement learning is established according to the following models:
Q(s,c) = Q(s,c) + α[Re + γ·max_{c'} Q(s',c') - Q(s,c)]
Q denotes the action-value function of the reinforcement learning model, s denotes the current state, c denotes the action (input data) selected in the current state, s' denotes the next state, c' denotes an action available in the next state, Re denotes the reward value, α denotes the learning rate, and γ denotes the discount factor.
Further, the communication channel prediction model based on deep reinforcement learning is trained according to the following steps:
step 1: initializing a Q value table, a learning rate, a discount factor and an exploration rate;
step 2: randomly selecting a group of training data from a training set as an initial state s to be input into the deep reinforcement learning-based communication channel prediction model;
step 3: judging whether the current step number exceeds the total step number; if not, acquiring a random number num between 0 and 1; if yes, going to step 7;
step 4: judging whether the random number num is greater than the exploration rate α; if so, selecting the action corresponding to the maximum Q value of the current state; if not, randomly selecting an action;
step 5: executing the action selected in step 4 to obtain the next state s' and the reward, and updating the Q value table;
step 6: setting s' as the current state; judging whether s' is a final state; if so, entering the next step; if not, going to step 3;
step 7: updating the exploration rate α;
step 8: judging whether the current number of learning iterations exceeds the total number of learning iterations; if yes, ending the training; if not, going to step 2.
Another embodiment of the invention provides a power communication channel planning device based on reinforcement learning, which comprises a planning data acquisition module and a communication channel planning module;
the planning data acquisition module is used for acquiring parameters of a starting station, an ending station and a communication channel;
and the communication channel planning module is used for inputting the parameters of the starting station, the ending station and the communication channel into a communication channel prediction model based on deep reinforcement learning and outputting an optimal communication channel.
Another embodiment of the present invention provides a readable storage medium. The readable storage medium includes a stored computer program; when the computer program runs, it controls the device on which the readable storage medium is located to execute the reinforcement-learning-based power communication channel planning method according to any one of the method embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
the invention provides a reinforcement-learning-based power communication channel planning method, device and storage medium, which realize automatic planning of the power communication channels that carry stability control services and improve the efficiency of power communication channel planning.
Drawings
Fig. 1 is a schematic flowchart of a power communication channel planning method based on reinforcement learning according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electric power communication channel planning apparatus based on reinforcement learning according to an embodiment of the present invention.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the invention provides a method for planning a power communication channel by reinforcement learning, which includes the following steps:
step S101: and acquiring parameters of a starting station, an ending station and a communication channel.
As an embodiment, the communication channel parameters include the maximum number of channels, port type, bandwidth, network type, maximum number of circuits per transmission segment, maximum number of network elements, maximum length in kilometers, routing mode, whether SNCP is configured, number of reserved fiber cores, and maximum attenuation. The routing mode is either direct or indirect.
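As an illustration only, the channel parameters listed above could be encoded as a flat numeric feature vector before being fed to the prediction model. The field names, codings, and values below are hypothetical; the patent does not specify an encoding.

```python
# Hypothetical flat encoding of the channel parameters listed above.
# Field names, codings, and values are illustrative assumptions; the patent
# does not specify how the parameters are represented.
channel_params = {
    "max_channels": 64,
    "port_type": 1,               # assumed coding, e.g. 1 = optical
    "bandwidth_mbps": 155,
    "network_type": 0,            # assumed coding, e.g. 0 = SDH
    "max_circuits_per_segment": 32,
    "max_network_elements": 20,
    "max_length_km": 300.0,
    "routing_mode": 0,            # 0 = direct, 1 = indirect (per the text)
    "sncp_configured": 1,         # whether SNCP protection is configured
    "reserved_fiber_cores": 4,
    "max_attenuation_db": 28.0,
}

def to_feature_vector(params):
    """Flatten the parameter dict into the numeric vector fed to the model."""
    return [float(v) for v in params.values()]

features = to_feature_vector(channel_params)   # one feature per parameter
```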
Step S102: and inputting the parameters of the starting station, the ending station and the communication channel into a communication channel prediction model based on deep reinforcement learning, and outputting an optimal communication channel.
As an embodiment, the communication channel prediction model based on deep reinforcement learning is built according to the following model:
Q(s,c) = Q(s,c) + α[Re + γ·max_{c'} Q(s',c') - Q(s,c)]
Q denotes the action-value function of the reinforcement learning model, s denotes the current state, c denotes the action (input data) selected in the current state, s' denotes the next state, c' denotes an action available in the next state, Re denotes the reward value, α denotes the learning rate, and γ denotes the discount factor.
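The update rule above is a standard tabular Q-learning update. A minimal sketch, assuming a dictionary-backed Q table keyed by (state, action) pairs with unseen entries defaulting to 0:

```python
# Minimal tabular form of the update rule above; the dict-backed Q table and
# the explicit next_actions argument are implementation assumptions.
def q_update(q, s, c, reward, s_next, next_actions, alpha=0.5, gamma=0.9):
    """Q(s,c) <- Q(s,c) + alpha * (reward + gamma * max_c' Q(s',c') - Q(s,c))."""
    old = q.get((s, c), 0.0)
    best_next = max((q.get((s_next, a), 0.0) for a in next_actions), default=0.0)
    q[(s, c)] = old + alpha * (reward + gamma * best_next - old)
    return q[(s, c)]
```

With alpha = 0.5 and an initially empty table, a single update toward reward 1.0 moves Q(s,c) from 0 to 0.5; repeating the same update moves it to 0.75, converging toward the target value.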
As an embodiment, the deep reinforcement learning-based communication channel prediction model is trained according to the following steps:
step S1021: initializing a Q value table, a learning rate, a discount factor and an exploration rate;
step S1022: randomly selecting a group of training data from a training set as an initial state s to be input into the deep reinforcement learning-based communication channel prediction model; and the training set is constructed according to historical data.
Step S1023: judging whether the current step number is larger than the total step number; if not, acquiring a random number num between 0 and 1; if yes, go to step S1027;
step S1024: judging whether the random number num is greater than the exploration rate α; if so, selecting the action a corresponding to the maximum Q value of the current state; if not, randomly selecting an action a;
step S1025: executing the action a to obtain the next state s' and the reward of the model, and updating the Q value table;
step S1026: setting s 'as a current state, judging whether s' is in a final state, and if so, entering the next step; if not, go to step S1023;
step S1027: updating the exploration rate;
step S1028: judging whether the current learning times are larger than the total learning times or not; if yes, ending the training; if not, go to step S1022.
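The training procedure of steps S1021 to S1028 can be sketched as an ε-greedy tabular Q-learning loop. The toy routing graph, rewards, and hyperparameters below are illustrative assumptions, not values from the patent:

```python
import random

# Toy routing graph: nodes are stations, edges are usable links. Negative
# rewards model link costs, so the highest-value route is the cheapest one.
# Graph, rewards, and hyperparameters are illustrative assumptions.
GRAPH = {
    "A": ["B", "C"],   # start station
    "B": ["C", "D"],
    "C": ["D"],
    "D": [],           # end station (final state)
}
REWARD = {("A", "B"): -2.0, ("A", "C"): -1.0, ("B", "C"): -1.0,
          ("B", "D"): -3.0, ("C", "D"): -1.0}

def train(episodes=500, max_steps=10, alpha=0.5, gamma=0.9,
          eps=1.0, eps_decay=0.99, seed=0):
    rng = random.Random(seed)
    q = {}                                          # S1021: initialise Q table
    for _ in range(episodes):                       # S1028: learning-count loop
        s = "A"                                     # S1022: initial state
        for _ in range(max_steps):                  # S1023: step budget
            actions = GRAPH[s]
            if not actions:                         # S1026: final state reached
                break
            if rng.random() > eps:                  # S1024: exploit ...
                c = max(actions, key=lambda a: q.get((s, a), 0.0))
            else:                                   # ... or explore
                c = rng.choice(actions)
            s_next, r = c, REWARD[(s, c)]           # S1025: execute the action
            best_next = max((q.get((s_next, a), 0.0) for a in GRAPH[s_next]),
                            default=0.0)
            old = q.get((s, c), 0.0)
            q[(s, c)] = old + alpha * (r + gamma * best_next - old)
            s = s_next
        eps *= eps_decay                            # S1027: update exploration rate
    return q

q_table = train()
# After training, the greedy policy at A should prefer the cheaper route A-C-D.
```

Note that this sketch decays the exploration rate once per episode; the patent checks the step budget first (step S1023) and only then updates the rate (step S1027), which amounts to the same per-episode schedule.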
Compared with the manual approach of the prior art, the invention has the following advantages:
1. The method uses specific quantitative indexes to measure the quality of candidate channels, which is more accurate than the traditional manual approach, and it realizes automatic planning of the power communication channels that carry stability control services.
2. Because the quantitative indexes are explicit fields, they can be added or deleted as services and the network change; even if network complexity grows and more descriptive fields are needed in the future, the model remains applicable, simply by adding fields (variables) describing the state to the model state s.
On the basis of the above embodiment of the invention, the present invention correspondingly provides an embodiment of the apparatus, as shown in fig. 2;
the invention provides a power communication channel planning device based on reinforcement learning, which comprises a planning data acquisition module and a communication channel planning module;
the planning data acquisition module is used for acquiring parameters of a starting station, an ending station and a communication channel;
and the communication channel planning module is used for inputting the parameters of the starting station, the ending station and the communication channel into a communication channel prediction model based on deep reinforcement learning and outputting an optimal communication channel.
For convenience and brevity of description, the apparatus embodiment of the present invention corresponds to the reinforcement-learning-based power communication channel planning method embodiments described above, and details are not repeated here.
On the basis of the above embodiments, the present invention correspondingly provides a readable storage medium embodiment. The readable storage medium includes a stored computer program; when the computer program runs, it controls the device on which the readable storage medium is located to execute the reinforcement-learning-based power communication channel planning method according to any one of the method embodiments of the present invention.
Illustratively, the computer program may be partitioned into one or more modules that are stored in the memory and executed by the processor to implement the invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or any conventional processor; it is the control center of the terminal device and connects the various parts of the whole terminal device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the terminal device by running the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a storage program area and a storage data area: the storage program area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the storage data area may store data created according to use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card, at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
The module/unit integrated with the terminal device may be stored in a computer-readable storage medium (i.e., the above-mentioned readable storage medium) if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments described above may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that all or part of the processes of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Claims (7)

1. A power communication channel planning method based on reinforcement learning is characterized by comprising the following steps:
acquiring parameters of a starting station, an ending station and a communication channel;
and inputting the parameters of the starting station, the ending station and the communication channel into a communication channel prediction model based on deep reinforcement learning, and outputting an optimal communication channel.
2. The method of claim 1, wherein the communication channel parameters comprise a maximum number of channels, a port type, a bandwidth, a network type, a maximum number of circuits for transmission segments, a maximum number of network elements, a maximum kilometer length, a routing mode, whether SNCP is configured, a reserved core number, and a maximum attenuation.
3. The reinforcement learning-based power communication channel planning method according to claim 2, wherein the deep reinforcement learning-based communication channel prediction model is established according to the following model:
Q(s,c) = Q(s,c) + α[Re + γ·max_{c'} Q(s',c') - Q(s,c)]
Q denotes the action-value function of the reinforcement learning model, s denotes the current state, c denotes the action (input data) selected in the current state, s' denotes the next state, c' denotes an action available in the next state, Re denotes the reward value, α denotes the learning rate, and γ denotes the discount factor.
4. The reinforcement learning-based power communication channel planning method according to any one of claims 1 to 3, wherein the deep reinforcement learning-based communication channel prediction model is trained according to the following steps:
step 1: initializing a Q value table, a learning rate, a discount factor and an exploration rate;
step 2: randomly selecting a group of training data from a training set as an initial state s to be input into the deep reinforcement learning-based communication channel prediction model;
step 3: judging whether the current step number exceeds the total step number; if not, acquiring a random number num between 0 and 1; if yes, going to step 7;
step 4: judging whether the random number num is greater than the exploration rate α; if so, selecting the action corresponding to the maximum Q value of the current state; if not, randomly selecting an action;
step 5: executing the action selected in step 4 to obtain the next state s' and the reward, and updating the Q value table;
step 6: setting s' as the current state; judging whether s' is a final state; if so, entering the next step; if not, going to step 3;
step 7: updating the exploration rate α;
step 8: judging whether the current number of learning iterations exceeds the total number of learning iterations; if yes, ending the training; if not, going to step 2.
5. A power communication channel planning device based on reinforcement learning is characterized by comprising a planning data acquisition module and a communication channel planning module;
the planning data acquisition module is used for acquiring parameters of a starting station, an ending station and a communication channel;
and the communication channel planning module is used for inputting the parameters of the starting station, the ending station and the communication channel into a communication channel prediction model based on deep reinforcement learning and outputting an optimal communication channel.
6. The reinforcement learning-based power communication channel planning device according to claim 5, wherein the communication channel parameters in the planning data obtaining module include a maximum channel number, a port type, a bandwidth, a network type, a maximum circuit number of transmission segments, a maximum network element number, a maximum kilometer length, a routing mode, whether to configure SNCP, a reserved fiber core number, and a maximum attenuation.
7. A readable storage medium comprising a stored computer program which, when executed, controls an apparatus on which the readable storage medium is located to perform the reinforcement learning-based power communication channel planning method according to any one of claims 1 to 4.
CN202210918026.9A 2022-08-01 2022-08-01 Electric power communication channel planning method, device and storage medium based on reinforcement learning Active CN115086187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210918026.9A CN115086187B (en) 2022-08-01 2022-08-01 Electric power communication channel planning method, device and storage medium based on reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210918026.9A CN115086187B (en) 2022-08-01 2022-08-01 Electric power communication channel planning method, device and storage medium based on reinforcement learning

Publications (2)

Publication Number Publication Date
CN115086187A true CN115086187A (en) 2022-09-20
CN115086187B CN115086187B (en) 2023-09-05

Family

ID=83242837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210918026.9A Active CN115086187B (en) 2022-08-01 2022-08-01 Electric power communication channel planning method, device and storage medium based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN115086187B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106230716A (A) * 2016-07-22 2016-12-14 江苏省电力公司信息通信分公司 An ant colony algorithm based intelligent allocation method for communication services in a power telecommunication network
CN110601973A (en) * 2019-08-26 2019-12-20 中移(杭州)信息技术有限公司 Route planning method, system, server and storage medium
CN111010294A (en) * 2019-11-28 2020-04-14 国网甘肃省电力公司电力科学研究院 Electric power communication network routing method based on deep reinforcement learning
CN111191918A (en) * 2019-12-27 2020-05-22 国网江苏省电力有限公司信息通信分公司 Service route planning method and device for smart power grid communication network
WO2021135449A1 (en) * 2020-06-30 2021-07-08 平安科技(深圳)有限公司 Deep reinforcement learning-based data classification method, apparatus, device, and medium
CN113095578A (en) * 2021-04-16 2021-07-09 广东电网有限责任公司电力调度控制中心 Design method, device, terminal and medium for optimal communication path of transformer substation
CN114025264A (en) * 2021-11-15 2022-02-08 国网天津市电力公司信息通信公司 Routing planning method for power communication SDH optical transmission network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
皮特潘, "DQN备忘" (DQN Notes), page 1 *

Also Published As

Publication number Publication date
CN115086187B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN108536650B (en) Method and device for generating gradient lifting tree model
US20190220316A1 (en) Method, device and computer program product for determining resource amount for dedicated processing resources
CN111209931A (en) Data processing method, platform, terminal device and storage medium
CN109697083B (en) Fixed-point acceleration method and device for data, electronic equipment and storage medium
CN108037976A (en) Intelligent Matching method, medium and the equipment of a kind of portal template
CN117061365B (en) Node selection method, device, equipment and readable storage medium
CN107885716A (en) Text recognition method and device
CN110991088B (en) Cable model construction method, system, terminal equipment and storage medium
CN115086187B (en) Electric power communication channel planning method, device and storage medium based on reinforcement learning
CN113327576A (en) Speech synthesis method, apparatus, device and storage medium
CN112652281A (en) Music chord identification method and device, electronic equipment and storage medium
CN108834161B (en) Voice optimization method and device for micro base station, computer storage medium and equipment
US20200279152A1 (en) Lexicographic deep reinforcement learning using state constraints and conditional policies
CN115633083A (en) Power communication network service arrangement method, device and storage medium
CN109522326B (en) Data distribution method, device, equipment and storage medium
CN111259213B (en) Data visualization processing method and device
TWI734151B (en) Parameter synchronization method, device, and storage medium
CN111027196A (en) Simulation analysis task processing method and device for power equipment and storage medium
CN113850390A (en) Method, device, equipment and medium for sharing data in federal learning system
CN110968397B (en) Analysis method and device for virtual machine capacity management
CN113220501A (en) Method, apparatus and computer program product for data backup
CN113115337B (en) MIMO-based 5G multimode terminal transmission control method and device
US11061653B2 (en) Dynamic compiling for conditional statements during execution
CN114217617B (en) Robot control method and device
CN112711545B (en) Data access method based on array linked list type queue structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant