CN110928188B - Air storage control method of air compressor - Google Patents


Info

Publication number
CN110928188B
Authority
CN
China
Prior art keywords
air
storage tank
motor
neural network
air pressure
Prior art date
Legal status
Active
Application number
CN201911257000.9A
Other languages
Chinese (zh)
Other versions
CN110928188A
Inventor
张奇
Current Assignee
Jiangxi Riley Equipment Energy Co., Ltd.
Original Assignee
Jiangxi Laili Electric Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Jiangxi Laili Electric Co., Ltd.
Priority to CN201911257000.9A
Publication of CN110928188A
Application granted
Publication of CN110928188B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric
    • G05B13/04 Adaptive control systems, electric, involving the use of models or simulators
    • G05B13/042 Adaptive control systems, electric, involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F04 POSITIVE-DISPLACEMENT MACHINES FOR LIQUIDS; PUMPS FOR LIQUIDS OR ELASTIC FLUIDS
    • F04B POSITIVE-DISPLACEMENT MACHINES FOR LIQUIDS; PUMPS
    • F04B49/00 Control, e.g. of pump delivery, or pump pressure of, or safety measures for, machines, pumps, or pumping installations, not otherwise provided for, or of interest apart from, groups F04B1/00 - F04B47/00
    • F04B49/06 Control using electricity

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Positive-Displacement Pumps (AREA)

Abstract

The invention discloses an air compressor control method based on a curiosity algorithm from deep reinforcement learning. The compressor is controlled by a curiosity algorithm constructed on the basis of DQN. On the one hand, exploration and goal pursuit need not be separated into distinct phases, which saves time. On the other hand, controlling the compressor with deep reinforcement learning keeps the air pressure in the storage tank at the set value using the operating strategy with the lowest energy consumption, and after the compressor has run for a period of time, it avoids sudden power cuts while the compressor motor is at high power because the tank pressure rose too high; this benefits equipment maintenance and prolongs the service life of the equipment.

Description

Air storage control method of air compressor
Technical Field
The invention relates to the technical field of air compression, in particular to an air storage control method of an air compressor.
Background
An air compressor is a common gas-compression device. After the air is compressed, it is usually delivered to an air storage tank for storage and then drawn from the tank when needed. In the prior art, however, the degree of intelligence of the compressed-air storage process is low and the storage cannot be controlled precisely. On the one hand, manual adjustment is required to keep the tank sufficiently full without exceeding the warning pressure; on the other hand, the compression process incurs excessive energy consumption. Furthermore, when the compression reaches a certain threshold, an alarm is raised and the power is simply cut off; at that moment the compressor motor is still running at high power, and such a direct power cut is unfavorable to equipment maintenance.
Disclosure of Invention
The technical problem the invention aims to solve is to provide an air storage control method for an air compressor that has a high degree of intelligence and low energy consumption, facilitates equipment maintenance, and prolongs the service life of the equipment.
The technical solution adopted by the invention is an air storage control method of an air compressor, comprising the following steps:
S1, collecting environmental parameters at each moment: the total energy consumption W_t accumulated up to the current time t, the air compression motor power P_t, the air pressure N_t in the air storage tank, and the intake flow D_t of the air storage tank are collected together as the environmental parameter S_t acquired at the current time t (see the sketch following these steps);
S2, taking five actions as the executable actions: stopping the motor, starting the motor, keeping the current motor power, increasing the motor power, and decreasing the motor power;
S3, constructing a deep reinforcement learning ICM model and determining a reward function and a loss function;
S4, pre-training in a simulation environment and then deploying in the actual working scene.
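The patent defines the state and action spaces only in prose; as an illustration, a minimal Python sketch of steps S1 and S2 could look like the following (all names, orderings, and data layouts are assumptions, not part of the patent):

```python
import numpy as np

# The five executable actions of step S2 (the ordering is an assumption).
ACTIONS = ["stop_motor", "start_motor", "hold_power", "raise_power", "lower_power"]

def build_state(W_t, P_t, pressures, inflows):
    """Assemble the environmental parameter S_t of step S1.

    W_t:       total energy consumption accumulated up to time t
    P_t:       current power of the air compression motor
    pressures: n readings N_t from the pressure sensors inside the tank
    inflows:   n readings D_t from the flow sensors at the tank inlet
    """
    return np.concatenate([[W_t, P_t], pressures, inflows]).astype(np.float32)
```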
The invention has the beneficial effects that:
(1) Because the ICM is used, exploration and task execution proceed simultaneously: while selecting the optimal strategy, the device also completes its exploration of the whole unknown space, so no separate random-exploration phase is needed, saving time and labor.
(2) Deep reinforcement learning is used: the fitting capability of a neural network models the relationship among the total energy consumption, the air compression motor power, the air pressure in the storage tank, and the inlet air flow of the tank, with no need for hand-built analytical models. The method is therefore convenient and practical, the optimal strategy is collected and explored automatically during operation, and the data cost is low.
Preferably, the air pressure N_t in the air storage tank comprises n groups of air-pressure data obtained from n air-pressure sensors evenly distributed inside the air storage tank;
the intake flow D_t of the air storage tank comprises n groups of instantaneous flow data obtained from n flow sensors evenly distributed at the air inlet of the air storage tank.
Collecting the same quantity at many acquisition points allows the overall state of the environment at the current moment to be captured completely and comprehensively.
The deep reinforcement learning ICM in step S3 is constructed on the basis of DQN and comprises an environmental feature extraction neural network, an action prediction neural network, an action value neural network, and an environmental feature prediction neural network. Because the ICM is built on DQN, the model can be optimized with the DQN experience pool, so the control program can be optimized periodically; within the current optimization cycle its working state can be observed, and a decision made on whether to perform the next optimization.
Preferably, the input parameter of the environmental feature extraction neural network is S_t, and the output is a feature representation φ(S_t) of the environmental parameter.
The input parameter of the action value neural network is φ(S_t), and the output is the expected maximum cumulative reward Q_t corresponding to the five executable actions in step S2.
The input parameters of the action prediction neural network are φ(S_t) and φ(S_{t+1}), and the output is Q'_t, a prediction of Q_t.
The input parameters of the environmental feature prediction neural network are Q_t and S_t, and the output is a prediction φ'(S_{t+1}) of φ(S_{t+1}).
Because Q'_t is a prediction of Q_t, and Q'_t is derived from φ(S_t) and φ(S_{t+1}), it follows that as Q'_t predicts Q_t more and more accurately, φ(S_t) and φ(S_{t+1}) become more strongly correlated with the action, so environmental noise that the action cannot influence is filtered out when the environmental features are extracted.
Preferably, the reward function is

r_t = αμ/(1 + W_t) - β(N̄_t - N)²

where N is the air pressure in the air storage tank desired by the user, α and β are influence coefficients greater than 0 determined according to the user's needs, μ is a positive integer serving as an amplification coefficient, and N̄_t is the average of the n groups of air-pressure data at time t.
The term αμ/(1 + W_t) on the one hand ensures that the control program actively explores, at every moment, the strategy that minimizes energy consumption so that total energy consumption rises slowly; on the other hand, the denominator 1 + W_t ensures that the control program does not initially choose to switch off the motor in order to drive the reward toward infinity.
The term -β(N̄_t - N)² ensures that when the average air pressure in the storage tank deviates from the set value, the resulting large negative reward prompts the control program to adjust its strategy and bring the pressure back to the set value.
Preferably, the loss function in step S3 is

L = (r_t + γ·maxQ_{t+1} - Q_{t,a})² + ||φ'(S_{t+1}) - φ(S_{t+1})||² + (Q'_t - Q_t)²

where Q_{t,a} is the expected maximum cumulative reward of the action performed at time t, maxQ_{t+1} is the largest expected maximum cumulative reward among the actions available at time t+1, and γ is an influence coefficient greater than 0 and less than 1.
Optimizing the loss function with the DQN experience pool (S_t, Q_t, r_t, S_{t+1}) lets the control program gradually strengthen its grasp of the environment on the one hand, and filter out environmental noise unrelated to its own actions on the other.
Preferably, the motor is given a maximum power to which its power may be adjusted upward, and the air storage tank is given an alarm-stop air pressure greater than N, so that the intelligent control program cannot create danger while exploring the unknown. After repeated training, the program masters this situation, finds that its external reward there is lower than the reward for holding the pressure near the set value, and stops exploring it. Capping the upward power adjustment prevents the motor from reaching dangerously high power while exploring the environment.
Detailed Description
The invention discloses an air storage control method of an air compressor, comprising the following steps:
S1, collecting environmental parameters at each moment: the total energy consumption W_t accumulated up to the current time t, the electric power P_t of the air compression motor, the air pressure N_t in the air storage tank, and the intake flow D_t of the air storage tank are collected together as the environmental parameter S_t acquired at the current time t;
S2, taking five actions as the executable actions: stopping the motor, starting the motor, keeping the current motor power, increasing the motor power, and decreasing the motor power;
S3, constructing a deep reinforcement learning ICM model and determining a reward function and a loss function;
S4, pre-training in a simulation environment and then deploying in the actual working scene.
Adding the tank inlet flow D_t to the input parameters allows the flow entering the tank at each moment to be identified more accurately, so that the control program can better anticipate in advance how the airflow produced at different motor powers will affect the tank pressure, and thus control the pressure changes in the tank more precisely.
The air pressure N_t in the air storage tank comprises n groups of air-pressure data obtained from n air-pressure sensors evenly distributed inside the air storage tank;
the intake flow D_t of the air storage tank comprises n groups of instantaneous flow data obtained from n flow sensors evenly distributed at the air inlet of the air storage tank.
Collecting the same quantity at many acquisition points allows the overall state of the environment at the current moment to be captured completely and comprehensively.
The deep reinforcement learning ICM in step S3 is constructed on the basis of DQN and comprises an environmental feature extraction neural network, an action prediction neural network, an action value neural network, and an environmental feature prediction neural network. Because the ICM is built on DQN, the model can be optimized with the DQN experience pool, so the control program can be optimized periodically; within the current optimization cycle its working state can be observed, and a decision made on whether to perform the next optimization.
The input parameter of the environmental feature extraction neural network is S_t, and the output is a feature representation φ(S_t) of the environmental parameter.
The input parameter of the action value neural network is φ(S_t), and the output is the expected maximum cumulative reward Q_t corresponding to the five executable actions in step S2.
The input parameters of the action prediction neural network are φ(S_t) and φ(S_{t+1}), and the output is Q'_t, a prediction of Q_t.
The input parameters of the environmental feature prediction neural network are Q_t and S_t, and the output is a prediction φ'(S_{t+1}) of φ(S_{t+1}).
Because Q'_t is a prediction of Q_t, and Q'_t is derived from φ(S_t) and φ(S_{t+1}), it follows that as Q'_t predicts Q_t more and more accurately, φ(S_t) and φ(S_{t+1}) become more strongly correlated with the action, so environmental noise that the action cannot influence is filtered out when the environmental features are extracted.
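The patent specifies these four networks only through their inputs and outputs. A minimal PyTorch sketch of that wiring, under assumed MLP bodies, layer sizes, and feature dimension (none of which are given by the patent), might be:

```python
import torch
import torch.nn as nn

N_ACTIONS = 5  # stop, start, hold power, raise power, lower power (step S2)

def mlp(n_in, n_out):
    # Small two-layer perceptron; the sizes are illustrative assumptions.
    return nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out))

class ICM(nn.Module):
    """The four networks of step S3, wired as the description states."""
    def __init__(self, state_dim, feat_dim=32):
        super().__init__()
        self.feature = mlp(state_dim, feat_dim)                # S_t -> phi(S_t)
        self.q_net = mlp(feat_dim, N_ACTIONS)                  # phi(S_t) -> Q_t
        self.action_pred = mlp(2 * feat_dim, N_ACTIONS)        # phi(S_t), phi(S_t+1) -> Q'_t
        self.feat_pred = mlp(N_ACTIONS + state_dim, feat_dim)  # Q_t, S_t -> phi'(S_t+1)

    def forward(self, s_t, s_t1):
        phi_t, phi_t1 = self.feature(s_t), self.feature(s_t1)
        q_t = self.q_net(phi_t)                                        # action values
        q_pred = self.action_pred(torch.cat([phi_t, phi_t1], dim=-1))  # prediction of Q_t
        phi_pred = self.feat_pred(torch.cat([q_t, s_t], dim=-1))       # prediction of phi(S_t+1)
        return q_t, q_pred, phi_t1, phi_pred
```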
The reward function is

r_t = αμ/(1 + W_t) - β(N̄_t - N)²

where N is the air pressure in the air storage tank desired by the user, α and β are influence coefficients greater than 0 determined according to the user's needs, μ is a positive integer serving as an amplification coefficient, and N̄_t is the average of the n groups of air-pressure data at time t.
The term αμ/(1 + W_t) on the one hand ensures that the control program actively explores, at every moment, the strategy that minimizes energy consumption so that total energy consumption rises slowly; on the other hand, the denominator 1 + W_t ensures that the control program does not initially choose to switch off the motor in order to drive the reward toward infinity.
The term -β(N̄_t - N)² ensures that when the average air pressure in the storage tank deviates from the set value, the resulting large negative reward prompts the control program to adjust its strategy and bring the pressure back to the set value.
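With the reward written as above, a direct translation into code might be the following (the placement of μ on the energy term and the squared pressure penalty follow the reconstruction above and are assumptions):

```python
def reward(W_t, pressures, N_set, alpha=1.0, beta=1.0, mu=100):
    """Reward of step S3: a bounded energy term minus a pressure penalty.

    The 1 + W_t denominator keeps the reward finite at start-up, so the
    program cannot earn an unbounded reward by simply never running the
    motor; the penalty grows as the average tank pressure drifts from
    the user's set value N_set."""
    n_bar = sum(pressures) / len(pressures)  # average of the n readings at time t
    return alpha * mu / (1.0 + W_t) - beta * (n_bar - N_set) ** 2
```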
The loss function in step S3 is

L = (r_t + γ·maxQ_{t+1} - Q_{t,a})² + ||φ'(S_{t+1}) - φ(S_{t+1})||² + (Q'_t - Q_t)²

where Q_{t,a} is the expected maximum cumulative reward of the action performed at time t, maxQ_{t+1} is the largest expected maximum cumulative reward among the actions available at time t+1, and γ is an influence coefficient greater than 0 and less than 1.
(r_t + γ·maxQ_{t+1} - Q_{t,a})² is the general formula used in the DQN algorithm to obtain an accurate expected maximum cumulative reward. ||φ'(S_{t+1}) - φ(S_{t+1})||² represents how unknown the environment still is to the deep reinforcement model; as optimization drives the loss down, this unknown degree gradually decreases. (Q'_t - Q_t)² represents the action prediction; as optimization drives the loss down, the features φ(S_t) and φ(S_{t+1}) from which Q'_t is extracted come to relate to the environmental characteristics the action can influence, namely the relationship between the motor power and the pressure in the storage tank.
The loss function is optimized with the DQN experience pool (S_t, Q_t, r_t, S_{t+1}); in this way the control program gradually strengthens its grasp of the environment on the one hand, and filters out phenomena unrelated to its own actions on the other. The experience pool is the record of environmental parameters, output action values, and rewards brought by the actions, kept while the control program runs the equipment.
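A sketch of one optimization step over the experience pool, following the three loss terms above, might look like this (the buffer here stores the index of the executed action rather than the full Q_t vector, and the batch layout and optimizer handling are assumptions):

```python
import random
import torch
import torch.nn.functional as F

def train_step(icm, optimizer, replay_buffer, batch_size=32, gamma=0.9):
    """One optimization step over transitions (S_t, a_t, r_t, S_t+1).

    Each stored transition is assumed to be a 4-tuple of tensors, with
    a_t an int64 action index and r_t the scalar reward."""
    batch = random.sample(replay_buffer, batch_size)
    s, a, r, s1 = map(torch.stack, zip(*batch))

    q_t, q_pred, phi_t1, phi_pred = icm(s, s1)
    with torch.no_grad():
        q_next = icm(s1, s1)[0].max(dim=-1).values        # max Q_{t+1}

    q_ta = q_t.gather(-1, a.unsqueeze(-1)).squeeze(-1)    # Q_{t,a}
    td_loss = ((r + gamma * q_next - q_ta) ** 2).mean()   # DQN term
    fwd_loss = F.mse_loss(phi_pred, phi_t1.detach())      # how unknown the environment is
    inv_loss = F.mse_loss(q_pred, q_t.detach())           # action prediction term

    loss = td_loss + fwd_loss + inv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```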
The motor is given a maximum power to which its power may be adjusted upward, and the air storage tank is given an alarm-stop air pressure greater than N, so that the intelligent control program cannot create danger while exploring the unknown. After repeated training, the program masters this situation, finds that its external reward there is lower than the reward for holding the pressure near the set value, and stops exploring it. Capping the upward power adjustment prevents the motor from reaching dangerously high power while exploring the environment.
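These limits act as hard overrides outside the learned policy; illustratively (the threshold values and names are hypothetical):

```python
def apply_safety_limits(action, motor_power, tank_pressure,
                        max_power=75.0, alarm_pressure=10.0):
    """Override the chosen action when a hard limit is reached: stop the
    motor at the alarm-stop pressure, and refuse further upward power
    adjustment once the configured maximum power is reached."""
    if tank_pressure >= alarm_pressure:
        return "stop_motor"
    if action == "raise_power" and motor_power >= max_power:
        return "hold_power"
    return action
```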
The invention has the following beneficial effects:
(1) Because the ICM is used, exploration and task execution proceed simultaneously: while selecting the optimal strategy, the device also completes its exploration of the whole unknown space, so no separate random-exploration phase is needed, saving time and labor.
(2) Deep reinforcement learning is used: the fitting capability of a neural network models the relationship among the total energy consumption, the air compression motor power, the air pressure in the storage tank, and the inlet air flow of the tank, with no need for hand-built analytical models. The method is therefore convenient and practical, the optimal strategy is collected and explored automatically during operation, and the data cost is low.
(3) The action prediction in the ICM algorithm enables the device to extract, from a large amount of noise, the environmental features its actions actually affect, reducing the influence of the noise to some extent.
(4) When the pressure in the air storage tank deviates greatly from the set value, the negative reward from the pressure term dominates, covering the rewards for exploring the unknown environment and for reducing energy consumption, so the control program concentrates on increasing the amount of compressed air stored in the tank and driving the pressure toward the set value. When the pressure nears the set value, the pressure penalty approaches 0; energy consumption and the reward for exploring the unknown environment become the main considerations, and the program concentrates on exploring the environment and lowering the motor power at each moment. Once the unknown environment is fully mastered, the exploration reward also approaches 0, and to obtain the maximum cumulative reward the program actively adjusts its strategy to guarantee the lowest energy consumption. The motor thus regulates its operation automatically according to the pressure changes of the storage tank, keeping the stored air sufficient while minimizing total energy consumption.
(5) When the pressure in the air storage tank is about to rise to the set value, the control program, to hold down energy consumption, reduces the motor power to the minimum just sufficient to let the pressure climb slowly. After some training, the motor is switched off automatically once the pressure reaches the set value, which maximizes the cumulative reward. As a result the motor is never suddenly cut off at high power because the tank pressure reached the warning value, and the service life of the equipment is prolonged.

Claims (5)

1. An air storage control method of an air compressor, characterized by comprising the following steps:
S1, collecting environmental parameters at each moment: the total energy consumption W_t accumulated up to the current time t, the electric power P_t of the air compression motor, the air pressure N_t in the air storage tank, and the intake flow D_t of the air storage tank are collected together as the environmental parameter S_t acquired at the current time t;
S2, taking five actions as the executable actions: stopping the motor, starting the motor, keeping the current motor power, increasing the motor power, and decreasing the motor power;
s3, constructing a deep reinforcement learning ICM model, and determining a reward function and a loss function, wherein the deep reinforcement learning ICM is constructed on the basis of DQN and comprises an environmental characteristic extraction neural network, an action prediction neural network, an action value neural network and an environmental characteristic prediction neural network;
the input parameter of the environment characteristic extraction neural network is S t The output being a characteristic representation of the environmental parameter
Figure FDA0003602208470000011
The input parameter of the action value neural network is
Figure FDA0003602208470000012
The output is the expected maximum cumulative reward Q corresponding to the five executable actions in the step S2 t
The input parameters of the action prediction neural network are
Figure FDA0003602208470000013
And
Figure FDA0003602208470000014
the output is pair Q t Predicted Q' t
The input parameter of the environment characteristic prediction neural network is Q t And S t Output as a pair
Figure FDA0003602208470000015
Making predictions
Figure FDA0003602208470000016
And S4, pre-training in a simulation environment, and then putting into an actual working scene.
2. The air storage control method of an air compressor as claimed in claim 1, wherein the air pressure N_t in the air storage tank comprises n groups of air-pressure data obtained from n air-pressure sensors evenly distributed inside the air storage tank;
the intake flow D_t of the air storage tank comprises n groups of gas-flow data obtained from n flow sensors evenly distributed at the air inlet of the air storage tank.
3. The air storage control method of claim 1, wherein the reward function is

r_t = αμ/(1 + W_t) - β(N̄_t - N)²

wherein N is the air pressure in the air storage tank desired by the user, α and β are influence coefficients greater than 0 determined according to the user's needs, μ is a positive integer serving as an amplification coefficient, and N̄_t is the average of the n groups of air-pressure data at time t.
4. The method as claimed in claim 3, wherein the loss function in step S3 is

L = (r_t + γ·maxQ_{t+1} - Q_{t,a})² + ||φ'(S_{t+1}) - φ(S_{t+1})||² + (Q'_t - Q_t)²

wherein Q_{t,a} is the expected maximum cumulative reward of the action performed at time t, maxQ_{t+1} is the largest expected maximum cumulative reward among the actions available at time t+1, and γ is an influence coefficient greater than 0 and less than 1.
5. The method as claimed in claim 1, wherein the motor is configured with a maximum power to which it may be adjusted, and the air storage tank is configured with an alarm-stop air pressure.
CN201911257000.9A 2019-12-10 2019-12-10 Air storage control method of air compressor Active CN110928188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911257000.9A CN110928188B (en) 2019-12-10 2019-12-10 Air storage control method of air compressor


Publications (2)

Publication Number Publication Date
CN110928188A 2020-03-27
CN110928188B 2022-10-28

Family

ID=69859400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911257000.9A Active CN110928188B (en) 2019-12-10 2019-12-10 Air storage control method of air compressor

Country Status (1)

Country Link
CN (1) CN110928188B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111664079B * 2020-05-07 2022-04-08 China United Network Communications Group Co., Ltd. Control method and device of air compressor
CN112817240B * 2020-12-30 2022-03-22 Xi'an Jiaotong University Centrifugal compressor regulating and controlling method based on deep reinforcement learning algorithm


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104635684A * 2014-12-25 2015-05-20 Automation Research and Design Institute of Metallurgical Industry Cluster optimization control system for air compressor
KR101816432B1 * 2016-08-26 2018-01-08 Hyundai Motor Company Method for controlling air-conditioner compressor
CN108960487A * 2018-06-13 2018-12-07 Beijing Tianze Zhiyun Technology Co., Ltd. Air compressor group system energy consumption optimization method and device based on big data analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guo Meina. Design of the control system for a large ethylene compressor. China Masters' Theses Full-text Database, Engineering Science and Technology I, 2016, No. 3, pp. B016-63. *

Also Published As

Publication number Publication date
CN110928188A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110928188B (en) Air storage control method of air compressor
CN108488987B (en) Control method of air conditioning apparatus, storage medium, and apparatus
CN106765959A (en) Heat-air conditioner energy-saving control method based on genetic algorithm and depth B P neural network algorithms
CN108767866B (en) Energy management method, device and system
CN110007613B (en) Warming prediction method and system for heat storage type electric heater and storage medium
CN109269036B (en) Cloud control method of multi-split air conditioner and multi-split air conditioner system
CN110198042B (en) Dynamic optimization method for power grid energy storage and storage medium
CN105278353A (en) Method and system for acquiring data intelligently and data processing device
CN112413831A (en) Energy-saving control system and method for central air conditioner
CN109429415B (en) Illumination control method, device and system
KR102174884B1 (en) Automatic aquarium control apparatus and method based on growth environment
CN112149905A (en) Photovoltaic power station short-term power prediction method based on wavelet transformation and wavelet neural network
CN113887797A (en) Building electric heating load prediction model establishing method, device and equipment
WO2023093388A1 (en) Air purifier adjusting method based on reinforcement learning model, and air purifier
CN110985346B (en) After-cooling control method for air compressor
CN116360331A (en) Universal automatic control system and control method
CN107979900A (en) A kind of road lamp intelligence energy saving method based on random forest regression forecasting algorithm
CN106227127A (en) Generating equipment intelligent monitoring and controlling device and monitoring method
CN111179108A (en) Method and device for predicting power consumption
CN108233357B (en) Wind power day-ahead absorption optimization method based on nonparametric probability prediction and risk expectation
JP2001022437A (en) Plant controller and computer readable recording medium storing plant control program
CN104938309B (en) Solar battery inductor
CN111998505A (en) Energy consumption optimization method and system for air conditioning system in general park based on RSM-Kriging-GA algorithm
CN110991519A (en) Intelligent switch state analysis and adjustment method and system
KR102481577B1 (en) Control method and system for efficient energy consumption and fish growth

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221109

Address after: Area B, Shengfang Industrial Park, Lianhua County, Pingxiang City, Jiangxi Province, 337199

Patentee after: Jiangxi Riley Equipment Energy Co., Ltd.

Address before: 337100 B District, Lianhua County Industrial Park, Pingxiang, Jiangxi

Patentee before: Jiangxi Laili Electric Co., Ltd.