CN113552799A - Control valve viscosity parameter estimation method based on deep Q learning - Google Patents

Control valve viscosity parameter estimation method based on deep Q learning

Info

Publication number
CN113552799A
Authority
CN
China
Prior art keywords
control valve
learning
value
stiction
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110721418.1A
Other languages
Chinese (zh)
Inventor
张辉 (Zhang Hui)
张思龙 (Zhang Silong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202110721418.1A priority Critical patent/CN113552799A/en
Publication of CN113552799A publication Critical patent/CN113552799A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G05B13/042 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent

Abstract

The invention provides a control valve viscosity parameter estimation method based on deep Q-learning. Based on a nonlinear model of the control valve, the valve controller continuously sends control commands to the valve while the corresponding valve output is collected. The input signal of the current control valve is taken as the state, a change in the estimated value of the viscous characteristic parameters of the nonlinear control valve model is taken as the action, and the valve stem position of the control valve is taken as the reward value. A deep Q-learning algorithm continuously adjusts the action according to the fluctuation of the state and the reward, and finally outputs the estimated values of the viscous characteristic parameters of the nonlinear control valve model.

Description

Control valve viscosity parameter estimation method based on deep Q learning
Technical Field
The invention relates to the technical field of valve control, in particular to a viscous characteristic parameter estimation method of a nonlinear model of a control valve.
Background
Control valves are an important component of automatic control systems and play an increasingly important role in industrial production. One common control valve abnormality/failure is a complex nonlinear phenomenon: because the control valve often operates in a high-temperature environment or passes a viscous medium all year round, the valve tends to jam and stick, producing a dynamic nonlinear (stiction) characteristic. This characteristic easily causes oscillation of the controlled variable of the loop and thus significantly degrades the control performance of the loop. Statistically, the stiction characteristic of pneumatically actuated valves is one of the causes of control loop oscillation. Research on the stiction behavior of pneumatically actuated valves has therefore become a focus in the field of process monitoring.
Identification of the stiction parameters S and J of the nonlinear model of a pneumatically actuated valve can provide system operators and equipment maintenance personnel with information on the characteristics of the valve, help them understand the operating state of the process, and enable them to take appropriate remedial actions to eliminate improper operation of the process control system. Identification of the viscous characteristic parameters is therefore an indispensable link in the control process.
The main problems of the existing algorithms for identifying the viscous parameters of the nonlinear model of a pneumatically actuated valve are a large computational burden, long computation time, and identification accuracy that is degraded by process noise. How to improve the accuracy of the identification result and shorten the computation time is therefore an urgent problem for stiction parameter identification of pneumatically actuated valves.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a control valve viscosity parameter estimation method based on deep Q-learning. Based on a nonlinear model of the control valve, the valve controller continuously sends control commands to the valve while the corresponding valve output is collected. The input signal of the current control valve is taken as the state, a change in the estimated value of the viscous characteristic parameters of the nonlinear control valve model is taken as the action, and the valve stem position of the control valve is taken as the reward value. A deep Q-learning algorithm continuously adjusts the action according to the fluctuation of the state and the reward, and finally outputs the estimated values of the viscous characteristic parameters of the nonlinear control valve model.
The technical scheme of the invention is as follows:
a control valve viscosity parameter estimation method based on deep Q learning comprises the following steps:
s1, completing the connection and communication building work of the control valve, the control valve controller, the signal collector and the computer;
s2, building a deep Q learning algorithm logic framework and determining related parameters;
s3 runs an algorithm to obtain an estimate of the viscous characteristic parameter.
Preferably, the computer in S1 is configured to run a viscous parameter estimation algorithm for the control valve and send control data to the controller; the controller is used for controlling the action of the control valve; and the signal collector collects the input signal of the control valve and the output signal of the position of the valve rod.
Preferably, the S2 specifically includes the following steps:
S2.1, establishing a nonlinear control valve model containing the viscous characteristic parameters of the control valve:

u_v(k) = F_stic(u(k), ..., u(0), u_v(k-1), ..., u_v(0), J_stic, S_stic)

where J_stic and S_stic are the viscous characteristic parameters, u(k) is the input signal of the control valve, u_v(k) is the output signal of the valve stem position, and F_stic is the stiction nonlinearity;
S2.2, setting a state space S in Q-learning, which contains the input signal u(k) of the control valve, and an action space A for describing the state transitions, which contains the incremental actions, i.e. the positive or negative adjustment of the viscous characteristic parameters J_stic and S_stic during the parameter estimation process, expressed as:

p = [p - μ, p + μ]

where p is the parameter to be estimated and μ is the action step length; there are 2 incremental actions in total, namely increasing or decreasing the current identified value of the viscous characteristic parameter by μ;
S2.3, determining a reward function r, which is the difference between the current valve stem position and the target valve stem position, recorded as:

r = u_v(k) - u_v*(k)

where u_v*(k) denotes the target valve stem position;
S2.4, building a neural network framework and calculating the value function Q(s, a) in the current parameter estimation process, where s and a are the current state variable and action.
Preferably, the neural network in S2.4 is constructed as a back propagation (BP) neural network, a recurrent neural network (RNN) or a long short-term memory network (LSTM-RNN).
Preferably, the S3 specifically includes the following steps:
S3.1, initializing the experience pool space D and the viscous characteristic parameters J_stic and S_stic, where the experience pool space D is a database storing all acquired data, which can accumulate an arbitrarily large number of data samples while the data are used for training;
S3.2, entering the Q-learning framework and starting the first iteration loop, first obtaining the observed values of u(k) and u_v(k);
S3.3, selecting the action mode, namely increasing or decreasing the current identified value of the viscous characteristic parameter by μ;
S3.4, selecting the reward mode, namely determining the reward function;
S3.5, storing the transition (s_t, a_t, r_t, s_{t+1}) in the experience pool space D, where s_t and s_{t+1} are the state values at time t and time t+1;
S3.6, randomly sampling a mini-batch from the experience pool space D;
S3.7, evaluating the value function Q(s, a) in the current parameter estimation process with the neural network of S2.4 and denoting it as Q_est(s, a);
S3.8, setting the target action-value function Q_target;
S3.9, if the reward value is less than 20, directly quitting the current round, and if the reward value is more than or equal to 20, returning to S3.3;
and S3.10, after the current iteration is finished, recording the operation result to obtain the estimated value of the viscosity characteristic parameter.
Preferably, the basic process of the Q-learning framework in S3.2 is as follows: the signal collector continuously collects the input and output of the control valve, the algorithm obtains the state variable to update the state space S and computes the current reward value through the reward function r, and the action value is then continuously adjusted based on this input so as to continuously reduce the reward value.
Preferably, the target action-value function Q_target set in S3.8 is expressed as:

Q_target = r + δ max_a' Q*(s', a' | s, a)

where δ is the discount factor, s and a are the current state variable and action, s' and a' are the state and action at the next time step, and max_a' Q*(s', a' | s, a) denotes the maximum of the estimated value function when the current state and action are s and a, respectively, and the state at the next time step is s'.
Compared with the prior art, the technical scheme of the invention overcomes the defect that the prior art can only roughly estimate the viscous characteristic of the pneumatically actuated valve in the control loop and cannot truly reflect the viscous parameters; in addition, the viscous parameter estimation based on this method is easy to implement and has strong convergence capability.
Drawings
The invention may be better understood by reference to the following drawings. The components in the figures are not to be considered as drawn to scale, emphasis instead being placed upon illustrating the principles of the invention.
FIG. 1 is a schematic flow chart of a control valve viscosity parameter estimation method based on deep Q learning according to the present invention;
fig. 2 is a simplified diagram of the deep Q learning algorithm of the present invention.
Detailed Description
To facilitate understanding and practice of the invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the illustrative embodiments and the accompanying drawings:
as shown in fig. 1, the invention provides a control valve viscosity parameter estimation method based on deep Q learning, which includes the following steps:
s1, completing the connection and communication building work of the control valve, the control valve controller, the signal collector and the computer; the computer is used for operating a viscous parameter estimation algorithm of the control valve and sending control data to the controller; the controller is used for controlling the action of the control valve; and the signal collector collects the input signal of the control valve and the output signal of the position of the valve rod.
S2, building a logic framework of a deep Q learning algorithm, and determining related parameters, wherein the method comprises the following specific steps:
S2.1, establishing a nonlinear control valve model containing the viscous characteristic parameters of the control valve:

u_v(k) = F_stic(u(k), ..., u(0), u_v(k-1), ..., u_v(0), J_stic, S_stic)

where J_stic and S_stic are the viscous characteristic parameters, u(k) is the input signal of the control valve, u_v(k) is the output signal of the valve stem position, and F_stic is the stiction nonlinearity;
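The explicit functional form of F_stic is not given in the present description. For illustration only, the following Python sketch uses a simple two-parameter friction model (in the spirit of He-type stiction models) as a stand-in for F_stic; treating S as a static friction band and J as the residual friction offset is an assumption of the sketch, not a definition taken from the description.

```python
import numpy as np

def f_stic(u, S, J, uv0=None):
    """Illustrative stand-in for F_stic: a simple two-parameter stiction model.
    The exact form of F_stic is not specified in the description, so this is an
    assumption for demonstration only.
    u   : 1-D array of control valve input samples u(0), ..., u(k)
    S   : static friction band (stem stays stuck while |u - u_v| <= S)
    J   : residual friction offset left after the stem slips (J <= S)
    Returns the simulated valve stem positions u_v(0), ..., u_v(k)."""
    uv = np.empty_like(u, dtype=float)
    uv_prev = u[0] if uv0 is None else uv0
    for k, uk in enumerate(u):
        e = uk - uv_prev                 # force built up against friction
        if abs(e) > S:                   # static friction overcome: stem slips
            uv_prev = uk - np.sign(e) * J
        # otherwise the stem stays stuck at its previous position
        uv[k] = uv_prev
    return uv

# Example: a sticky valve driven by a slow ramp
u = np.linspace(0.0, 10.0, 200)
print(f_stic(u, S=2.0, J=0.5)[:10])
```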
S2.2, setting a state space S in Q-learning, which contains the input signal u(k) of the control valve, and an action space A for describing the state transitions, which contains the incremental actions, i.e. the positive or negative adjustment of the viscous characteristic parameters J_stic and S_stic during the parameter estimation process, expressed as:

p = [p - μ, p + μ]

where p is the parameter to be estimated and μ is the action step length; there are 2 incremental actions in total, namely increasing or decreasing the current identified value of the viscous characteristic parameter by μ;
S2.3, determining a reward function r, which is the difference between the current valve stem position and the target valve stem position, recorded as:

r = u_v(k) - u_v*(k)

where u_v*(k) denotes the target valve stem position;
S2.4, building a neural network framework and calculating the value function Q(s, a) in the current parameter estimation process, where s and a are the current state variable and action; the structure of the neural network is a back propagation (BP) neural network, a recurrent neural network (RNN) or a long short-term memory network (LSTM-RNN).
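As a concrete illustration of step S2.4, the following minimal PyTorch sketch builds a small fully connected value network with one output per incremental action. The layer sizes, the one-dimensional state u(k), and the use of a separate frozen copy as the target network are assumptions of the sketch rather than requirements of the method, which equally allows an RNN or LSTM structure.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Minimal sketch of the value network Q(s, a) of step S2.4, here a small
    fully connected (BP-style) network; widths are illustrative assumptions."""
    def __init__(self, state_dim: int = 1, n_actions: int = 2, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),   # one Q value per incremental action (+μ, -μ)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)              # shape: (batch, n_actions)

q_est = QNetwork()                          # network for Q_est(s, a) of step S3.7
q_target_net = QNetwork()                   # frozen copy used for Q_target (common DQN practice)
q_target_net.load_state_dict(q_est.state_dict())
```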
S3, running an algorithm to obtain an estimated value of the viscosity characteristic parameter, wherein the method comprises the following specific steps:
S3.1, initializing the experience pool space D and the viscous characteristic parameters J_stic and S_stic, where the experience pool space D is a database storing all acquired data, which can accumulate an arbitrarily large number of data samples while the data are used for training;
S3.2, entering the Q-learning framework and starting the first iteration loop, first obtaining the observed values of u(k) and u_v(k). The basic process of the Q-learning framework is as follows: the signal collector continuously collects the input and output of the control valve, the algorithm obtains the state variable to update the state space S and computes the current reward value through the reward function r, and the action value is then continuously adjusted based on this input so as to continuously reduce the reward value.
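The data-collection pass of S3.2 can be summarised by the short sketch below; `collector.read()` is a hypothetical interface to the signal collector of S1 and is not part of the described method.

```python
def observe(collector, uv_target_k):
    """Read the current valve input u(k) and stem position u_v(k), form the state
    of the state space S, and evaluate the reward r = u_v(k) - u_v*(k) of S2.3.
    `collector.read()` is a hypothetical signal-collector interface (assumption)."""
    u_k, uv_k = collector.read()
    state = [u_k]                 # the state contains the control valve input u(k)
    reward = uv_k - uv_target_k   # reward function of step S2.3
    return state, reward
```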
S3.3, selecting the action mode, namely increasing or decreasing the current identified value of the viscous characteristic parameter by μ;
S3.4, selecting the reward mode, namely determining the reward function;
S3.5, storing the transition (s_t, a_t, r_t, s_{t+1}) in the experience pool space D, where s_t and s_{t+1} are the state values at time t and time t+1;
S3.6, randomly sampling a mini-batch from the experience pool space D;
S3.7, evaluating the value function Q(s, a) in the current parameter estimation process with the neural network of S2.4 and denoting it as Q_est(s, a);
S3.8, setting the target action-value function Q_target, expressed as:
Q_target = r + δ max_a' Q*(s', a' | s, a)

where δ is the discount factor, s and a are the current state variable and action, s' and a' are the state and action at the next time step, and max_a' Q*(s', a' | s, a) denotes the maximum of the estimated value function when the current state and action are s and a, respectively, and the state at the next time step is s'.
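Steps S3.6 to S3.8, together with the backpropagation learning method named in the classification, correspond to a standard deep Q-learning update. The sketch below is a minimal PyTorch version under the assumption of a discount factor δ = 0.9, a frozen target network, and a mean squared error loss; none of these choices is fixed by the description.

```python
import torch
import torch.nn.functional as F

def dqn_update(q_est, q_target_net, optimizer, batch, delta=0.9):
    """One training step on a mini-batch sampled from the experience pool D:
    evaluate Q_est(s, a), build Q_target = r + delta * max_a' Q*(s', a'),
    and update the network weights by backpropagation (assumed MSE loss)."""
    s, a, r, s_next = batch                     # shapes: (B, dim), (B,), (B,), (B, dim)
    q_values = q_est(s).gather(1, a.long().unsqueeze(1)).squeeze(1)        # Q_est(s, a)
    with torch.no_grad():
        q_target = r + delta * q_target_net(s_next).max(dim=1).values      # target value
    loss = F.mse_loss(q_values, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```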
S3.9, if the reward value is less than 20, directly quitting the current round, and if the reward value is more than or equal to 20, returning to S3.3;
and S3.10, after the current iteration is finished, recording the operation result to obtain the estimated value of the viscosity characteristic parameter.
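Putting steps S3.2 to S3.10 together, one iteration round of the estimation can be sketched as below. The environment wrapper `env` (which applies a ±μ change to the current parameter estimate, simulates the valve model, and returns the reward) and the ε-greedy exploration are assumptions introduced for illustration; the exit test follows the rule of step S3.9.

```python
import random
import torch

def run_round(env, q_est, experience_pool, mu=0.05, epsilon=0.1, reward_limit=20.0):
    """One iteration round: observe the state, pick a +mu / -mu action (epsilon-greedy
    over Q_est), apply it to the parameter estimate, store the transition in the
    experience pool D, and exit when the reward drops below the limit (S3.9).
    `env` is a hypothetical wrapper around the valve model and signal collector."""
    state = env.reset()                                        # initial observation of u(k)
    while True:
        if random.random() < epsilon:                          # exploration
            action = random.randrange(2)                       # 0: decrease by mu, 1: increase by mu
        else:                                                  # exploitation of Q_est (S3.3)
            with torch.no_grad():
                action = int(q_est(torch.tensor([state])).argmax())
        next_state, reward = env.step((-mu, +mu)[action])      # adjust the J_stic / S_stic estimate
        experience_pool.append((state, action, reward, next_state))   # store transition (S3.5)
        if reward < reward_limit:                              # exit rule of S3.9
            return env.current_estimate()                      # current parameter estimates
        state = next_state
```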
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but contact each other via another feature between them. Moreover, a first feature being "on", "above" or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or simply indicates that the first feature is at a higher level than the second feature. A first feature being "below", "beneath" or "under" a second feature includes the first feature being directly below or obliquely below the second feature, or simply indicates that the first feature is at a lower level than the second feature.
In the present invention, the terms "first", "second", "third" and "fourth" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" means two or more unless expressly limited otherwise.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A control valve viscosity parameter estimation method based on deep Q learning comprises the following steps:
s1, completing the connection and communication building work of the control valve, the control valve controller, the signal collector and the computer;
s2, building a deep Q learning algorithm logic framework and determining related parameters;
s3 runs an algorithm to obtain an estimate of the viscous characteristic parameter.
2. The method for estimating the viscosity parameter of the control valve based on the deep Q learning as claimed in claim 1, wherein the computer in S1 is used for operating a control valve viscosity parameter estimation algorithm and sending control data to a controller; the controller is used for controlling the action of the control valve; and the signal collector collects the input signal of the control valve and the output signal of the position of the valve rod.
3. The method for estimating the viscosity parameter of the control valve based on the deep Q learning as claimed in claim 1, wherein the step S2 specifically comprises the steps of:
S2.1, establishing a nonlinear control valve model containing the viscous characteristic parameters of the control valve:

u_v(k) = F_stic(u(k), ..., u(0), u_v(k-1), ..., u_v(0), J_stic, S_stic)

where J_stic and S_stic are the viscous characteristic parameters, u(k) is the input signal of the control valve, u_v(k) is the output signal of the valve stem position, and F_stic is the stiction nonlinearity;
S2.2, setting a state space S in Q-learning, which contains the input signal u(k) of the control valve, and an action space A for describing the state transitions, which contains the incremental actions, i.e. the positive or negative adjustment of the viscous characteristic parameters J_stic and S_stic during the parameter estimation process, expressed as:

p = [p - μ, p + μ]

where p is the parameter to be estimated and μ is the action step length; there are 2 incremental actions in total, namely increasing or decreasing the current identified value of the viscous characteristic parameter by μ;
S2.3, determining a reward function r, which is the difference between the current valve stem position and the target valve stem position, recorded as:

r = u_v(k) - u_v*(k)

where u_v*(k) denotes the target valve stem position;
S2.4, building a neural network framework and calculating the value function Q(s, a) in the current parameter estimation process, where s and a are the current state variable and action.
4. The method of claim 3, wherein the neural network in S2.4 is constructed as a back propagation (BP) neural network, a recurrent neural network (RNN) or a long short-term memory network (LSTM-RNN).
5. The method for estimating the viscosity parameter of the control valve based on the deep Q learning as claimed in claim 1, wherein the step S3 specifically comprises the steps of:
S3.1, initializing the experience pool space D and the viscous characteristic parameters J_stic and S_stic, where the experience pool space D is a database storing all acquired data, which can accumulate an arbitrarily large number of data samples while the data are used for training;
S3.2, entering the Q-learning framework and starting the first iteration loop, first obtaining the observed values of u(k) and u_v(k);
S3.3, selecting the action mode, namely increasing or decreasing the current identified value of the viscous characteristic parameter by μ;
S3.4, selecting the reward mode, namely determining the reward function;
S3.5, storing the transition (s_t, a_t, r_t, s_{t+1}) in the experience pool space D, where s_t and s_{t+1} are the state values at time t and time t+1;
S3.6, randomly sampling a mini-batch from the experience pool space D;
S3.7, evaluating the value function Q(s, a) in the current parameter estimation process with the neural network of S2.4 and denoting it as Q_est(s, a);
S3.8, setting the target action-value function Q_target;
S3.9, if the reward value is less than 20, directly quitting the current round, and if the reward value is more than or equal to 20, returning to S3.3;
and S3.10, after the current iteration is finished, recording the operation result to obtain the estimated value of the viscosity characteristic parameter.
6. The method for estimating the viscosity parameter of the control valve based on the deep Q learning of claim 5, wherein the basic process of the Q-learning framework in S3.2 is as follows: the signal collector continuously collects the input and output of the control valve, the algorithm obtains the state variable to update the state space S and computes the current reward value through the reward function r, and the action value is then continuously adjusted based on this input so as to continuously reduce the reward value.
7. The method for estimating the viscosity parameter of the control valve based on the deep Q learning as claimed in claim 5, wherein the target action-value function Q_target set in S3.8 is expressed as:

Q_target = r + δ max_a' Q*(s', a' | s, a)

where δ is the discount factor, s and a are the current state variable and action, s' and a' are the state and action at the next time step, and max_a' Q*(s', a' | s, a) denotes the maximum of the estimated value function when the current state and action are s and a, respectively, and the state at the next time step is s'.
CN202110721418.1A 2021-06-28 2021-06-28 Control valve viscosity parameter estimation method based on deep Q learning Pending CN113552799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110721418.1A CN113552799A (en) 2021-06-28 2021-06-28 Control valve viscosity parameter estimation method based on deep Q learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110721418.1A CN113552799A (en) 2021-06-28 2021-06-28 Control valve viscosity parameter estimation method based on deep Q learning

Publications (1)

Publication Number Publication Date
CN113552799A (en) 2021-10-26

Family

ID=78131067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110721418.1A Pending CN113552799A (en) 2021-06-28 2021-06-28 Control valve viscosity parameter estimation method based on deep Q learning

Country Status (1)

Country Link
CN (1) CN113552799A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107703761A (en) * 2017-11-14 2018-02-16 杭州电子科技大学 A kind of method of estimation of pneumatic control valve viscosity property parameter
CN111181618A (en) * 2020-01-03 2020-05-19 东南大学 Intelligent reflection surface phase optimization method based on deep reinforcement learning
CN112784493A (en) * 2021-01-27 2021-05-11 武汉轻工大学 Geographic space prediction method and system based on self-adaptive deep Q network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YINYOUPOET: "深度强化学习之深度Q网络DQN详解" [Deep Q-Network (DQN) in Deep Reinforcement Learning Explained in Detail], pages 1 - 2, Retrieved from the Internet <URL:https://www.flyai.com/article/arta56a5fd0e79836a033672045> *
丛雨 (Cong Yu): "气动执行阀粘滞特性建模与参数辨识方法研究" [Research on Modeling and Parameter Identification Methods for the Stiction Characteristics of Pneumatically Actuated Valves], 中国优秀硕士论文全文数据库 (China Masters' Theses Full-text Database), no. 2, pages 37 - 57 *

Similar Documents

Publication Publication Date Title
CN111230887B (en) Industrial gluing robot running state monitoring method based on digital twin technology
CN112286043B (en) PID parameter setting method based on controlled object step response characteristic data
CN110222371B (en) Bayes and neural network-based engine residual life online prediction method
CN103399281B (en) Based on the ND-AR model of cycle life deterioration stage parameter and the cycle life of lithium ion battery Forecasting Methodology of EKF method
JP2019527413A (en) Computer system and method for performing root cause analysis to build a predictive model of rare event occurrences in plant-wide operations
US20050273296A1 (en) Neural network model for electric submersible pump system
CN110687800B (en) Data-driven self-adaptive anti-interference controller and estimation method thereof
US20210116899A1 (en) Parameterization of a component in an automation system
CN114216256B (en) Ventilation system air volume control method of off-line pre-training-on-line learning
AU2013327003A1 (en) Methods and apparatus for process device calibration
JP2021073629A (en) Control based on speed in controller to be updated non-periodically, method for controlling process, and process controller
CN114548311B (en) Hydraulic equipment intelligent control system based on artificial intelligence
CN113552799A (en) Control valve viscosity parameter estimation method based on deep Q learning
KR102222734B1 (en) Control output value providing system using virtual sensor
CN111673026B (en) Online control method and control system for pressing process of forging press
CN111679577B (en) Speed tracking control method and automatic driving control system of high-speed train
CN116739147A (en) BIM-based intelligent energy consumption management and dynamic carbon emission calculation combined method and system
CN116184830A (en) Cage type electric throttle valve opening control method
CN113821893B (en) Self-adaptive state estimation method for aero-engine servo actuation system
CN114239938A (en) State-based energy digital twin body construction method
US10394255B2 (en) Diagnostic device and method for monitoring frictional behavior in a control loop
CN110045716B (en) Method and system for detecting and diagnosing early fault of closed-loop control system
CN117444978B (en) Position control method, system and equipment for pneumatic soft robot
Yan et al. Remaining Useful Life Interval Prediction for Complex System Based on BiGRU Optimized by Log-Norm
CN116150601A (en) Air conditioner compressor fault diagnosis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination