CN113625560A - Loss rate control method and device for corn harvester, storage medium and equipment - Google Patents

Loss rate control method and device for corn harvester, storage medium and equipment

Info

Publication number
CN113625560A
CN113625560A (application number CN202110861876.5A)
Authority
CN
China
Prior art keywords
fuzzy
layer
corn harvester
value
learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110861876.5A
Other languages
Chinese (zh)
Inventor
赵博
陈凯康
汪凤珠
王鹏飞
郑永军
刘阳春
苑严伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese Academy of Agricultural Mechanization Sciences
Original Assignee
Chinese Academy of Agricultural Mechanization Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese Academy of Agricultural Mechanization Sciences filed Critical Chinese Academy of Agricultural Mechanization Sciences
Priority to CN202110861876.5A priority Critical patent/CN113625560A/en
Publication of CN113625560A publication Critical patent/CN113625560A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B 13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B 13/04 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G05B 13/042 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/043 Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Automation & Control Theory (AREA)
  • Computational Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Feedback Control In General (AREA)

Abstract

A loss rate control method and device for a corn harvester, a storage medium and equipment are provided. The loss rate control method of the corn harvester comprises the following steps: constructing a double-input single-output fuzzy neural network controller, the fuzzy neural network controller comprising a front-part network and a back-part network; performing offline learning with a genetic algorithm-particle swarm (GA-PSO) algorithm, determining the weights and the membership-function centers and widths to be learned in the fuzzy neural network controller by learning from previous operation data of the system; and performing online learning with a BP algorithm, establishing the connection weights of the controller, detecting the rotating speed of the grain recovery device of the corn harvester in real time, and adjusting the adjustable parameters of the controller in real time through online learning, so that the controller adapts to changes in the mechanical properties of the grain recovery device of the corn harvester and tracks the corn harvest rate set value. The invention also provides a corresponding loss rate control device for the corn harvester, a storage medium and equipment.

Description

Loss rate control method and device for corn harvester, storage medium and equipment
Technical Field
The invention relates to corn harvester grain loss control technology, and in particular to a method and a device for controlling the loss rate of a corn harvester based on a fuzzy neural network algorithm, as well as a storage medium and terminal computer equipment.
Background
The corn harvester system is a complex control object with nonlinear, large-time-lag, strongly coupled and time-varying characteristics, and it is disturbed by many uncertain factors, such as changes in the natural environment, mechanical wear, the motor rotating speed or human factors, all of which cause corn harvesting losses. Among these factors, the motor rotating speed can be optimized by an algorithm, thereby reducing the corn harvesting loss.
Intelligent control has self-learning and self-adaptive capabilities, achieves good control of both linear and nonlinear systems, and can therefore handle the complex control problem of the corn harvester well. Neural networks and fuzzy control are two important branches of intelligent control. A neural network is a computational model that imitates the structure and function of biological neural networks; it is formed by connecting a large number of neurons and is a nonlinear dynamical system. Neural networks possess nonlinear approximation, learning, self-adaptation and fault-tolerance capabilities, but they are not well suited to expressing rule-based knowledge. Fuzzy control imitates human thinking and performs knowledge processing through fuzzy logic and reasoning; it is based on linguistic control rules and is well suited to control objects whose dynamic characteristics are difficult to grasp or change markedly. However, the increase in fuzziness loses part of the information, learning and establishing a complete set of control rules is difficult, and adaptive capability is therefore lacking. Fuzzy-neural-network-based control is widely applied because of its simple principle, strong applicability and strong robustness; nevertheless, conventional fuzzy neural networks still perform poorly when controlling complex processes that are nonlinear, time-varying and coupled and whose parameters and mechanism are uncertain.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method, a device, a storage medium and a terminal device for controlling the loss rate of a corn harvester based on a fuzzy neural network algorithm, so as to control the grain recovery system of the corn harvester in real time and reduce the corn harvesting loss.
In order to achieve the above object, the present invention provides a method for controlling the loss rate of a corn harvester, comprising the following steps:
s100, constructing a double-input single-output fuzzy neural network controller, wherein the fuzzy neural network controller comprises a front-part network and a back-part network;
s200, performing off-line learning by adopting a genetic algorithm-particle swarm algorithm, and preliminarily determining the weight and the membership function center and width to be learned in the fuzzy neural network controller through learning of previous operation data of the system; and
s300, performing online learning by adopting a BP algorithm, establishing a connection weight of the fuzzy neural network controller, detecting the rotating speed of the corn harvester grain recovery device in real time, adjusting adjustable parameters in the fuzzy neural network controller in real time by combining the online learning, enabling the fuzzy neural network controller to adapt to the mechanical property change of the corn harvester grain recovery device, and tracking a set value of the corn harvesting rate.
The method for controlling the loss rate of the corn harvester, wherein the front-part network is of a four-layer network structure and comprises:
the input layer, which takes the active power deviation and the active power deviation change rate of the corn harvester as the inputs y1, y2; the number of nodes of the input layer is N1 = 2, and the expression is:
y1 = e = c − y, y2 = de/dt
where e is the tracking error, de/dt is the tracking error change rate of the performance parameter variable, c is the mechanical performance change value, and y is the actual detected value of the mechanical performance;
a fuzzification layer, which divides each input variable y1, y2 into 7 fuzzy subsets {NB, NM, NS, O, PS, PM, PB}, each fuzzy subset being a node of the fuzzification layer and each node representing a linguistic variable value; the membership functions of the linguistic variables are Gaussian functions:
μij(yi) = exp[−(yi − cij)^2 / σij^2]
where μij(yi) is the membership function of the j-th fuzzy subset on the universe of the i-th input linguistic variable, yi is the i-th input of the corn harvester (the active power deviation or its change rate), cij and σij (i = 1,2,…,n; j = 1,2,…,mi) are respectively the center and the width of the membership function, n is the number of input variables, and mi is the number of fuzzy partitions of the input variable xi; here n = 2 and m1 = m2 = 7, and the number of nodes of the fuzzification layer is N2 = m1 + m2 = 14;
a fuzzy rule calculation layer, which matches the antecedents of the fuzzy rules and calculates the fitness of each fuzzy rule:
αm = μ1j1(y1) · μ2j2(y2)
where αm is the result of the fuzzy inference operation, μ1j1(y1) and μ2j2(y2) are the membership values of the two inputs, j1 = j2 = 1,2,…,7, and m = 1,2,…,m1×m2 = 7×7 = 49; the number of nodes of the fuzzy rule calculation layer is N3 = 49;
and a fourth layer, the normalization layer, which performs the normalization operation:
ᾱm = αm / Σi αi, i = 1,2,…,N3, m = 1,2,…,49
where ᾱm is the normalized weighting coefficient and the denominator is the accumulated fitness of all fuzzy rules; the number of nodes of the normalization layer is N4 = N3 = 49.
The method for controlling the loss rate of the corn harvester, wherein the back-part network is of a three-layer network structure and comprises:
the first layer, which transfers the input variables to the next layer; the first layer has 3 nodes, the input value of the first node is x0 = 1, which provides the constant term in the fuzzy rule consequent, and the second and third nodes input x1 and x2, respectively;
the second layer, which calculates the consequent of each rule; it has 49 nodes in total and each node represents one rule:
ym = pm0·x0 + pm1·x1 + pm2·x2
where ym is the back-part network output of the m-th rule, pmk (k = 0,1,2; m = 1,2,3,…,49) are the connection weights, and x1 and x2 are the inputs of the second and third nodes, respectively;
and a third layer, which computes the controller output y:
y = Σm ᾱm·ym, m = 1,2,…,49
where ᾱm is the weighting coefficient, i.e. the normalized fitness of each fuzzy rule, ym is the consequent output of the m-th rule, and the outputs of the front-part network serve as the connection weights of the back-part network.
In the method for controlling the loss rate of the corn harvester, the offline learning in the step S200 includes the following steps:
s201, initializing population parameters, wherein the population parameters are initial positions of all particles;
s202, calculating the particle fitness F: f ═ abs (y-c), where y is the predicted output and c is the desired output;
s203, searching individual extremum and group extremum, and finding out individual minimum fitness value and global minimum fitness value of each particle;
s204, calculating the speed update and the position update of the particles by adopting the following formulas:
Figure BDA0003186018490000041
Figure BDA0003186018490000042
wherein ω is the inertial weight; d ═ 1,2, …, D; 1,2, …, n; k is the current iteration number; vidIs the velocity of the particle, PidTo desired output, XidIs the actual output; c. C1And c2Is an acceleration factor, which is a non-negative constant; r is1And r2Is distributed in [0,1 ]]The random number of (2);
s205, calculating the particle fitness after the speed and the position are updated according to the formula in the step S202;
s206, updating the individual extremum and the group extremum according to the formula in the step S204;
s207, calculating the intersection of the current individual and the extreme value of the individual by adopting the following formula, and if the fitness value is reduced, accepting:
xij=xij(1-b)+Pijb, in the formula, xijFor randomly selected individual extrema, PijFor randomly selected population extrema, b is [0,1 ]]A random number in between;
s208, calculating the intersection of the current individual and the group extreme value by adopting the following formula, and if the fitness value is reduced, accepting:
xij=xij(1-b)+Pgjin the formula, xijFor randomly selected individual extrema, PgjFor randomly selected population extrema, b is [0,1 ]]A random number in between;
s209, calculating the current individual to perform mutation by adopting the following formula, and if the fitness value is reduced, accepting:
Figure BDA0003186018490000043
in the formula, xijFor randomly selected individual extrema, xmaxIs xijThe upper bound of (c); x is the number ofminIs xijThe lower bound of (c); f (g) r2(1-g/Gmax)2;r2Is a random number; g is the current iteration number; gmaxIs the maximum number of evolutions; r is [0,1 ]]The random number of (2);
and S210, ending when the maximum number of generations is reached; otherwise, returning to step S204.
In the method for controlling the loss rate of the corn harvester, the adjustable parameters to be learned include the connection weights pmk and the central values cij and widths σij of the membership functions.
In the method for controlling the loss rate of the corn harvester, the learning algorithm of the connection weight is:
pmk(τ+1) = pmk(τ) + Δpmk(τ+1) + υ(pmk(τ) − pmk(τ−1));
the learning algorithm of the central value cij of the membership function is:
cij(τ+1)=cij(τ)+Δcij(τ+1)+υ(cij(τ)-cij(τ-1));
the learning algorithm of the width σij is:
σij(τ+1)=σij(τ)+Δσij(τ+1)+υ(σij(τ)-σij(τ-1));
where i = 1,2; j = 1,2,3,…,7; τ denotes the current time, τ+1 the next time and τ−1 the previous time; υ is the momentum factor; Δpmk(τ+1) = −η·∂E/∂pmk, Δcij(τ+1) = −η·∂E/∂cij and Δσij(τ+1) = −η·∂E/∂σij; E is an error cost function and η is the learning rate.
The method for controlling the loss rate of the corn harvester is characterized in that the error cost function E is as follows:
E = (1/2)·(c − y)^2
where c is the desired output and y is the actual output.
In order to better achieve the above object, the invention also provides a corn harvester loss rate control device, which comprises a fuzzy neural network controller, wherein the fuzzy neural network controller comprises a front-part network and a back-part network, and the above corn harvester loss rate control method is adopted to reduce the corn harvesting loss rate through optimized control of the rotating speed of the grain recovery device of the corn harvester.
In order to better achieve the above object, the present invention further provides a storage medium, wherein the storage medium stores a computer program, and the computer program is configured to execute the corn harvester loss rate control method when running.
In order to better achieve the above object, the present invention further provides an electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the corn harvester loss rate control method described above via execution of the executable instructions.
The invention has the technical effects that:
the invention is based on the fuzzy neural network algorithm, integrates the learning and calculating functions of the neural network into the fuzzy system, embeds the IF-Then rule of the fuzzy system human into the neural network, improves the self-adapting capability of the fuzzy control system while keeping the strong knowledge expression capability of the fuzzy control system, has self-learning capability, and realizes the reduction of the corn harvesting loss rate by optimizing the neural network algorithm of the motor rotating speed control of the seed recovery device of the corn planter.
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
Drawings
FIG. 1 is a schematic diagram of a method for controlling the loss rate of a corn harvester according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a fuzzy neural network according to an embodiment of the present invention;
FIG. 3 is a flowchart of the GA-PSO offline learning algorithm according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings, which are provided for illustration:
example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Referring to fig. 1, fig. 1 is a schematic diagram of a loss rate control method of a corn harvester according to an embodiment of the present invention. As shown in fig. 1, the method for controlling the loss rate of the corn harvester is based on a fuzzy neural network: a double-input single-output fuzzy neural network controller is constructed, the rotating-speed tracking output of the grain recovery device and the standard rotating speed set value are detected in real time, and, combined with an online learning mechanism, the adjustable parameters in the controller are adjusted in real time so that they adapt to changes of the grain recovery device and track the standard rotating speed set value. The fuzzy neural network controller has a two-dimensional, double-input single-output structure: e and de/dt are respectively the input tracking error and the tracking error change rate, and u(t) is the output control quantity, where du/dt denotes the post-shift (delay) operator used to obtain u(t−1), the control quantity of the previous moment. FNN denotes the fuzzy neural network controller. K is the proportionality coefficient of the fuzzy neural network controller, and this parameter is continuously adjusted according to the operation results.
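For illustration only, the closed-loop structure of FIG. 1 can be sketched in Python roughly as follows; this sketch is not part of the original disclosure, and the callable fnn, the sampling period dt and the name control_step are assumed placeholders for the trained fuzzy neural network controller and its sampling scheme:

```python
def control_step(setpoint, measured, prev_error, prev_u, dt, K, fnn):
    """One sampling step of the loop in FIG. 1: form e and de/dt, query the FNN,
    scale by the proportionality coefficient K and integrate the increment."""
    e = setpoint - measured       # tracking error e
    de = (e - prev_error) / dt    # tracking error change rate
    du = K * fnn(e, de)           # FNN output scaled by K gives the control increment
    u = prev_u + du               # u(t) = u(t-1) + du (post-shift operator)
    return u, e
```

In use, u would be applied as the rotating-speed command of the grain recovery device, and e and u carried over to the next sampling step.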
The learning of the fuzzy neural network controller is divided into an offline learning stage and an online learning stage. The offline learning stage preliminarily determines the weights and the centers and widths of the membership functions to be learned in the fuzzy neural network by learning from previous system operation data. Because these preliminarily determined values are not very accurate, the controller whose parameter structure has been preliminarily determined is then fine-tuned by the BP algorithm in the online learning stage, giving better control performance. Offline learning uses an improved particle swarm algorithm, namely the genetic algorithm-particle swarm (GA-PSO) algorithm, and online learning uses the BP algorithm. The BP algorithm depends heavily on the initial values of the network; poor initial values may lead to poor results or even to no convergence at all. In addition, the BP algorithm has weak global search capability and easily falls into local minima. Combining particle swarm optimization (PSO) with the BP algorithm guarantees the global convergence of learning, overcomes the gradient method's dependence on initial values and its local convergence, and avoids the randomness and probabilistic behaviour of a pure particle swarm algorithm.
The method specifically comprises the following steps:
Step S100, constructing a double-input single-output fuzzy neural network controller, wherein the fuzzy neural network controller comprises a front-part network and a back-part network;
Step S200, performing offline learning by adopting the genetic algorithm-particle swarm algorithm, and preliminarily determining the weights and the membership-function centers and widths to be learned in the fuzzy neural network controller by learning from previous operation data of the system, wherein the previous operation data of the system can be used to set a motor speed-regulation model and the input quantities of the drive system, and to set an initial speed of the motor; and
step S300, performing online learning by adopting a BP algorithm, establishing a connection weight of the fuzzy neural network controller, detecting the rotating speed of the corn harvester grain recovery device in real time, adjusting adjustable parameters in the fuzzy neural network controller in real time by combining the online learning, enabling the fuzzy neural network controller to adapt to the mechanical property change of the corn harvester grain recovery device, and tracking a set value of the corn harvesting rate.
Referring to fig. 2, fig. 2 is a schematic structural diagram of the fuzzy neural network according to an embodiment of the present invention. The fuzzy neural network of this embodiment consists of a front-part network and a back-part network: the front-part network matches the antecedents of the fuzzy rules, and the back-part network generates the consequents of the fuzzy rules. The front-part network is a four-layer network structure, including:
a first layer: the input layer, which takes the active power deviation and the active power deviation change rate of the corn harvester as the inputs y1, y2; the number of nodes of the input layer is N1 = 2, and the expression is:
y1 = e = c − y, y2 = de/dt
where e is the tracking error, de/dt is the tracking error change rate of the performance parameter variable, c is the mechanical performance change value, and y is the actual detected value of the mechanical performance;
fuzzification layer to input variable y1、y2Dividing the fuzzy layer into 7 fuzzy subsets { NB, NM, NS, O, PS, PM, PB }, wherein each fuzzy subset is used as a node of the fuzzy layer, and each node represents a language variable value; the membership functions of the linguistic variables are Gaussian functions, and the membership functions of the linguistic variables are respectively as follows:
Figure BDA0003186018490000083
wherein,
Figure BDA0003186018490000084
membership functions, y, of fuzzy universe of values of corresponding linguistic variables for inputs of the first layeriTo input the active power deviation of the corn harvester, cijAnd σij(i=1,2,…,n,j=1,2,…mi) Respectively the center and the width of the membership function, n is the number of input variables, miAs an input variable xiN is 2, m1=m27; number of nodes N of the blurring layer2=m1+m2=14;
and a third layer: the fuzzy rule calculation layer, which matches the antecedents of the fuzzy rules and calculates the fitness of each fuzzy rule. Since there are two inputs y1, y2, the fuzzy inference operation multiplies the two fuzzified input quantities, and the fuzzy operator used is the product operator:
αm = μ1j1(y1) · μ2j2(y2)
where αm is the result of the fuzzy inference operation, μ1j1(y1) and μ2j2(y2) are the membership values of the two inputs, j1 = j2 = 1,2,…,7, and m = 1,2,…,m1×m2 = 7×7 = 49; the number of nodes of the fuzzy rule calculation layer is N3 = 49;
A fourth layer: the normalization layer, which performs the normalization operation:
ᾱm = αm / Σi αi, i = 1,2,…,N3, m = 1,2,…,49
where ᾱm is the normalized weighting coefficient and the denominator is the accumulated fitness of all fuzzy rules; the number of nodes of the normalization layer is N4 = N3 = 49, equal to the number of nodes of the third layer.
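A minimal numerical sketch of the four-layer front-part network described above is given below. It assumes NumPy arrays and the Gaussian membership form reconstructed here; the function name front_network and the array shapes are illustrative assumptions and not part of the original disclosure:

```python
import numpy as np

def front_network(y, c, sigma):
    """Front-part network forward pass for 2 inputs and 7 fuzzy subsets per input.

    y     : length-2 array (tracking error and its change rate)
    c     : 2 x 7 array of membership-function centers c_ij
    sigma : 2 x 7 array of membership-function widths sigma_ij
    Returns the 49 normalized rule fitnesses (the fourth-layer output).
    """
    mu = np.exp(-((y[:, None] - c) ** 2) / sigma ** 2)   # layer 2: Gaussian memberships
    alpha = np.outer(mu[0], mu[1]).ravel()               # layer 3: product operator, 49 rules
    return alpha / alpha.sum()                           # layer 4: normalization
```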
The back-part network of this embodiment is a three-layer network structure, including:
a first layer: the input layer, which transfers the input variables to the next layer, i.e. the second layer. The first layer has 3 nodes in total; the input value of the first node is x0 = 1, whose role is to provide the constant term in the fuzzy rule consequent; the second and third nodes input x1 and x2, respectively.
A second layer, which calculates the consequent of each rule; it has 49 nodes in total and each node represents one rule:
ym = pm0·x0 + pm1·x1 + pm2·x2
where ym is the back-part network output of the m-th rule, pmk (k = 0,1,2; m = 1,2,3,…,49) are the connection weights, and x1 and x2 are the inputs of the second and third nodes, respectively;
A third layer, which computes the controller output y:
y = Σm ᾱm·ym, m = 1,2,…,49
where ᾱm is the weighting coefficient, i.e. the normalized fitness of each fuzzy rule, and ym is the consequent output of the m-th rule; the fuzzy neural network output y is thus the weighted sum of the rule consequents, with the normalized utilization degree of each fuzzy rule as the weighting coefficient, i.e. the outputs of the front-part network serve as the connection weights of the back-part network.
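Continuing the sketch above and reusing the hypothetical front_network helper, the back-part network and the controller output can be expressed as follows; the weight-matrix name p and the function name fnn_forward are assumed notation for the consequent connection weights, not quoted from the original:

```python
import numpy as np

def fnn_forward(y, c, sigma, p):
    """Full double-input single-output T-S fuzzy neural network output.

    p : 49 x 3 array of consequent connection weights, one row per rule,
        columns matching the constant term x0 = 1 and the inputs x1, x2.
    """
    alpha_bar = front_network(y, c, sigma)   # normalized rule fitnesses from the front-part network
    x = np.array([1.0, y[0], y[1]])          # x0 = 1 supplies the constant term of each consequent
    y_m = p @ x                              # second layer: consequent of each rule
    return float(alpha_bar @ y_m)            # third layer: weighted sum = controller output
```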
Offline learning stage of the T-S fuzzy neural network learning algorithm: the offline learning stage provides good initial network values for online learning, because the online learning algorithm, i.e. the BP algorithm, depends strongly on the choice of initial values; the better the initial values, the better the training effect. Offline learning adopts an improved particle swarm algorithm, namely the GA-PSO algorithm.
The particle swarm algorithm is a swarm intelligence optimization algorithm first proposed by Kennedy and Eberhart in 1995. It is derived from the study of bird foraging behaviour: each bird finds its food by searching the area surrounding the bird that is currently closest to the food.
The standard particle swarm optimization first randomly initializes a group of particles in the feasible solution space. Each particle represents a potential optimal solution of the extremum-optimization problem, and its characteristics are represented by three indexes: position, velocity and fitness value. The fitness value is calculated by the fitness function, and its value indicates the quality of the particle. The particles move in the solution space and update their positions by tracking the individual extremum Pbest and the group extremum Gbest, where the individual extremum Pbest is the position with the best fitness value among the positions the individual has experienced, and the group extremum Gbest is the position with the best fitness value found by all particles in the population.
Suppose a population X = (X1, X2, …, Xn) of n particles in a D-dimensional search space, where the i-th particle is represented as a D-dimensional vector Xi = (xi1, xi2, …, xiD)T that gives the position of the i-th particle in the D-dimensional search space and also represents a potential solution of the problem. The velocity of the i-th particle is Vi = (vi1, vi2, …, viD)T, its individual extremum is Pi = (pi1, pi2, …, piD)T, and the global extremum of the population is Pg = (pg1, pg2, …, pgD)T.
The speed and position updating formula of the standard particle swarm algorithm is as follows:
Vid(k+1) = ω·Vid(k) + c1·r1·(Pid(k) − Xid(k)) + c2·r2·(Pgd(k) − Xid(k));
Xid(k+1) = Xid(k) + Vid(k+1);
where ω is the inertia weight; d = 1,2,…,D; i = 1,2,…,n; k is the current iteration number; Vid is the velocity of the particle, Xid its current position, Pid the individual extremum and Pgd the group extremum; c1 and c2 are acceleration factors, which are non-negative constants; r1 and r2 are random numbers distributed in [0,1].
The fitness function adopted by the invention is that the absolute value of the error between the predicted output and the expected output is taken as an individual fitness value F, and the calculation formula is as follows:
F=abs(y-c); (9)
where y is the predicted output and c is the desired output.
The standard particle swarm algorithm completes extremum optimization by following the individual extrema and the group extremum. Although it is simple to implement and converges quickly, as the number of iterations increases all particles become more and more similar while the population converges, and the particles may be unable to jump out of the neighbourhood of a local optimal solution. The hybrid particle swarm algorithm abandons the method of updating particle positions purely by tracking extrema used in the standard particle swarm algorithm, introduces the crossover and mutation operations of the genetic algorithm, and searches for the optimal solution through crossover of the particles with the individual extrema and the group extremum and through mutation of the particles themselves.
The crossover operation uses real-number crossover: a crossover position j of the i-th individual is selected at random, and the individual is crossed with the individual extremum and with the group extremum as follows:
xij = xij(1 − b) + Pij·b; (10)
xij = xij(1 − b) + Pgj·b; (11)
where xij is the randomly selected crossover position of the current individual, Pij is the corresponding position of the individual extremum, Pgj is the corresponding position of the group extremum, and b is a random number in [0,1].
The particle self-mutation operation randomly selects a mutation position j of the i-th individual and then mutates it as follows:
xij = xij + (xij − xmax)·f(g)·r if r ≥ 0.5, otherwise xij = xij + (xmin − xij)·f(g)·r; (12)
where xmax is the upper bound of xij; xmin is the lower bound of xij; f(g) = r2·(1 − g/Gmax)^2; r2 is a random number; g is the current iteration number; Gmax is the maximum number of generations; and r is a random number in [0,1].
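The crossover and mutation operators can be sketched as below. This is an illustrative reading of equations (10)-(12): the exact mutation branch condition is inferred rather than quoted, and the function names crossover and mutate are placeholders:

```python
import numpy as np

def crossover(x, extremum):
    """Real-number crossover of a particle with an individual or group extremum (eqs. (10)/(11))."""
    b = np.random.rand()              # random number in [0, 1]
    j = np.random.randint(x.size)     # randomly selected crossover position
    child = x.copy()
    child[j] = x[j] * (1 - b) + extremum[j] * b
    return child

def mutate(x, x_min, x_max, g, g_max):
    """Self-mutation of a particle (eq. (12)); the r >= 0.5 branch choice is an assumption."""
    r2 = np.random.rand()
    f_g = r2 * (1 - g / g_max) ** 2   # f(g) = r2 * (1 - g / Gmax)^2
    j = np.random.randint(x.size)     # randomly selected mutation position
    r = np.random.rand()
    child = x.copy()
    if r >= 0.5:
        child[j] = x[j] + (x[j] - x_max) * f_g * r
    else:
        child[j] = x[j] + (x_min - x[j]) * f_g * r
    return child
```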
Referring to fig. 3, fig. 3 is a flowchart of the GA-PSO offline learning algorithm according to an embodiment of the present invention. In this embodiment, the offline learning in step S200 includes the following steps:
step S201, initializing population parameters, wherein the population parameters are initial positions of all particles;
step S202, calculating the particle fitness F: F = abs(y − c), where y is the predicted output and c is the desired output;
s203, searching an individual extreme value and a group extreme value, and finding out an individual minimum fitness value and a global minimum fitness value of each particle;
step S204, calculating the speed update and the position update of the particles by adopting the following formulas:
Vid(k+1) = ω·Vid(k) + c1·r1·(Pid(k) − Xid(k)) + c2·r2·(Pgd(k) − Xid(k));
Xid(k+1) = Xid(k) + Vid(k+1);
where ω is the inertia weight; d = 1,2,…,D; i = 1,2,…,n; k is the current iteration number; Vid is the velocity of the particle, Xid its current position, Pid the individual extremum and Pgd the group extremum; c1 and c2 are acceleration factors, which are non-negative constants; r1 and r2 are random numbers distributed in [0,1];
step S205, calculating the particle fitness after the speed and the position are updated according to the formula in the step S202;
step S206, updating the individual extremum and the group extremum according to the formula in the step S204;
step S207, the crossing of the current individual and the extreme value of the individual is calculated by adopting the following formula, and if the fitness value is reduced, the crossing is accepted:
xij = xij(1 − b) + Pij·b, where xij is the randomly selected crossover position of the current individual, Pij is the corresponding position of the individual extremum, and b is a random number in [0,1];
step S208, calculating the intersection of the current individual and the group extreme value by adopting the following formula, and if the fitness value is reduced, accepting:
xij = xij(1 − b) + Pgj·b, where Pgj is the corresponding position of the group extremum and b is a random number in [0,1];
step S209, calculating the current individual to perform mutation by adopting the following formula, and if the fitness value is reduced, accepting:
xij = xij + (xij − xmax)·f(g)·r if r ≥ 0.5, otherwise xij = xij + (xmin − xij)·f(g)·r;
where xij is the randomly selected mutation position of the current individual, xmax is the upper bound of xij, xmin is the lower bound of xij, f(g) = r2·(1 − g/Gmax)^2, r2 is a random number, g is the current iteration number, Gmax is the maximum number of generations, and r is a random number in [0,1];
and step S210, ending when the maximum number of generations is reached; otherwise, returning to step S204.
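Steps S201-S210 above can be assembled into the following GA-PSO loop, reusing the hypothetical crossover and mutate helpers from the earlier sketch. The population size, inertia weight and acceleration factors are illustrative assumptions, and the fitness callable is expected to return abs(y − c):

```python
import numpy as np

def ga_pso(fitness, dim, x_min, x_max, n_particles=30, g_max=100,
           w=0.9, c1=1.5, c2=1.5):
    """Sketch of the GA-PSO offline learning loop (steps S201-S210)."""
    X = np.random.uniform(x_min, x_max, (n_particles, dim))   # S201: initial positions
    V = np.zeros_like(X)
    F = np.array([fitness(x) for x in X])                     # S202: fitness F = abs(y - c)
    P, Pf = X.copy(), F.copy()                                # S203: individual extrema
    G, Gf = P[Pf.argmin()].copy(), Pf.min()                   # S203: group extremum

    for g in range(g_max):                                    # S210: stop at Gmax generations
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)     # S204: velocity update
        X = np.clip(X + V, x_min, x_max)                      # S204: position update
        F = np.array([fitness(x) for x in X])                 # S205: re-evaluate fitness
        better = F < Pf                                       # S206: update extrema
        P[better], Pf[better] = X[better], F[better]
        if Pf.min() < Gf:
            G, Gf = P[Pf.argmin()].copy(), Pf.min()
        for i in range(n_particles):                          # S207-S209: GA operators
            for extremum in (P[i], G):                        # crossover with Pbest and Gbest
                child = crossover(X[i], extremum)
                fc = fitness(child)
                if fc < F[i]:                                 # accept if fitness decreases
                    X[i], F[i] = child, fc
            child = mutate(X[i], x_min, x_max, g, g_max)      # self-mutation
            fc = fitness(child)
            if fc < F[i]:
                X[i], F[i] = child, fc
    return G, Gf
```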
The adjustable parameters to be learned comprise the connection weights pmk and the central values cij and widths σij of the membership functions.
The learning algorithm of the connection weight is as follows:
pmk(τ+1) = pmk(τ) + Δpmk(τ+1) + υ(pmk(τ) − pmk(τ−1));
the learning algorithm of the central value cij of the membership function is:
cij(τ+1)=cij(τ)+Δcij(τ+1)+υ(cij(τ)-cij(τ-1));
the learning algorithm of the width σij is:
σij(τ+1)=σij(τ)+Δσij(τ+1)+υ(σij(τ)-σij(τ-1));
where i = 1,2; j = 1,2,3,…,7; τ denotes the current time, τ+1 the next time and τ−1 the previous time; υ is the momentum factor; Δpmk(τ+1) = −η·∂E/∂pmk, Δcij(τ+1) = −η·∂E/∂cij and Δσij(τ+1) = −η·∂E/∂σij; E is an error cost function and η is the learning rate.
The error cost function E is:
E = (1/2)·(c − y)^2
where c is the desired output and y is the actual output.
Online learning stage of the T-S fuzzy neural network learning algorithm: the BP algorithm is adopted for online learning.
Because the number of fuzzy partitions of each input variable is predetermined, the parameters to be learned are mainly the connection weights of the back-part network and the central values cij and widths σij of the Gaussian membership functions.
Defining an error cost function E as:
E = (1/2)·(c − y)^2
where c is the desired output and y is the actual output.
Learning algorithm of the connection weight pmk:
In the learning stage of the controller, the offline learning result is not yet satisfactory, so the BP algorithm of the online learning stage is used to further refine the weights required in the fuzzy neural network and the centers and widths of the membership functions. The membership functions of the input components are Gaussian functions, and the parameters to be learned are the connection weights pmk and the central values cij and widths σij of the membership functions:
∂E/∂pmk = −(c − y)·ᾱm·xk, k = 0,1,2, m = 1,2,…,49;
pmk(τ+1) = pmk(τ) − η·∂E/∂pmk(τ);
where τ denotes the current time and τ+1 denotes the next time.
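As an illustrative check, the gradient of the consequent weights implied by E = (1/2)·(c − y)^2 together with y = Σm ᾱm·ym can be written directly in Python. This derivation is supplied here for illustration and is not quoted from the original formula images; the function name consequent_gradient is an assumed placeholder:

```python
import numpy as np

def consequent_gradient(alpha_bar, x, y_out, c_target):
    """dE/dp_mk = -(c - y) * alpha_bar_m * x_k for all 49 rules and k = 0, 1, 2."""
    return -(c_target - y_out) * np.outer(alpha_bar, x)   # shape (49, 3)
```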
Next, consider the learning algorithms of the central value cij and the width σij; at this point the connection weights pmk are known.
cij(τ+1) = cij(τ) − η·∂E/∂cij;
σij(τ+1) = σij(τ) − η·∂E/∂σij;
where ∂E/∂cij and ∂E/∂σij are obtained by the chain rule through the output, normalization and rule layers, using ∂μij/∂cij = 2·μij·(yi − cij)/σij^2 and ∂μij/∂σij = 2·μij·(yi − cij)^2/σij^3.
In the above formulas, η > 0 is the learning rate.
The standard BP algorithm suffers from slow convergence and from local minima of the objective function. Two common improvements address these problems: introducing a momentum term allows the BP algorithm to find a better solution, and introducing an adaptive learning rate appropriately shortens the training time. The present invention combines the two, so the learning algorithms of the connection weight pmk, the central value cij and the width σij become:
pmk(τ+1) = pmk(τ) + Δpmk(τ+1) + υ(pmk(τ) − pmk(τ−1)); (19)
cij(τ+1)=cij(τ)+Δcij(τ+1)+υ(cij(τ)-cij(τ-1)); (20)
σij(τ+1)=σij(τ)+Δσij(τ+1)+υ(σij(τ)-σij(τ-1)); (21)
where i = 1,2; j = 1,2,3,…,7; τ denotes the current time, τ+1 the next time and τ−1 the previous time; υ is the momentum factor; Δpmk(τ+1) = −η·∂E/∂pmk, Δcij(τ+1) = −η·∂E/∂cij and Δσij(τ+1) = −η·∂E/∂σij; E is the error cost function and η is the learning rate.
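A sketch of one online BP update with the momentum term, plus a finite-difference stand-in for the gradients whose chain-rule expressions appear only as images in the source, is given below; the function names and the default eta and upsilon values are illustrative assumptions:

```python
import numpy as np

def momentum_update(theta, theta_prev, grad, eta=0.05, upsilon=0.9):
    """theta(t+1) = theta(t) - eta * dE/dtheta + upsilon * (theta(t) - theta(t-1)),
    applicable to the connection weights p_mk, the centers c_ij and the widths sigma_ij."""
    return theta - eta * grad + upsilon * (theta - theta_prev)

def numerical_grad(loss, theta, eps=1e-6):
    """Central-difference approximation of dE/dtheta for a float parameter array."""
    grad = np.zeros_like(theta, dtype=float)
    for k in range(theta.size):
        t_plus, t_minus = theta.astype(float).copy(), theta.astype(float).copy()
        t_plus.flat[k] += eps
        t_minus.flat[k] -= eps
        grad.flat[k] = (loss(t_plus) - loss(t_minus)) / (2 * eps)
    return grad
```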
Although the steps of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In addition, the invention also provides a corn harvester loss rate control device comprising a fuzzy neural network controller, wherein the structure of the double-input single-output fuzzy neural network controller can be determined according to the fuzzy rules and their physical significance. The fuzzy neural network controller comprises a front-part network and a back-part network, and the corn harvesting loss rate is reduced by adopting the above method for controlling the loss rate of the corn harvester and optimally controlling the rotating speed of the grain recovery device of the corn harvester. It should be noted that the apparatus is described in terms of the actions it performs, and its functions may be implemented by several modules or units. Indeed, the features and functionality of two or more modules or units may be embodied in one module or unit in accordance with embodiments of the invention; conversely, the features and functions of one module or unit may be further divided among a plurality of modules or units.
Accordingly, based on the same inventive concept, the invention also provides a storage medium storing a computer program configured to execute the corn harvester loss rate control method when running. Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (such as a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiment of the present invention.
In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device. Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
According to the program product for realizing the method, the portable compact disc read only memory (CD-ROM) can be adopted, the program code is included, and the program product can be operated on terminal equipment, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Accordingly, based on the same inventive concept, the present invention also provides an electronic device, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the corn harvester loss rate control method described above via execution of the executable instructions.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system. The electronic device of the present embodiment is expressed in the form of a general-purpose computing device. Components of the electronic device may include, but are not limited to: the at least one processor, the at least one memory, and a bus connecting the various system components (including the memory and the processor). The memory is used for storing executable instructions of the processor; the processor is configured to perform the corn harvester loss rate control method described above via execution of the executable instructions. Wherein the memory stores program code executable by the processor to cause the processor to perform steps according to various exemplary embodiments of the present invention as described in the "exemplary methods" section above.
The memory may include a readable medium in the form of a volatile memory unit, such as a random access memory unit (RAM) and/or a cache memory unit, and may further include a read only memory unit (ROM).
The memory may also include a program/utility having a set (at least one) of program modules including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device may also communicate with one or more external devices (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface. Also, the electronic device may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via a network adapter. The network adapter communicates with other modules of the electronic device over the bus. Other hardware and/or software modules may be used in conjunction with the electronic device, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
The invention is based on a fuzzy neural network algorithm: it integrates the learning and computing functions of the neural network into the fuzzy system and embeds the human-style IF-THEN rules of the fuzzy system into the neural network, which improves the self-adaptive capability of the fuzzy control system while keeping its strong knowledge-expression capability and gives it self-learning capability; by using the neural network algorithm to optimize the motor rotating-speed control of the grain recovery device of the corn harvester, the corn harvesting loss rate is reduced.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it should be understood that various changes and modifications can be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A loss rate control method of a corn harvester is characterized by comprising the following steps:
s100, constructing a double-input single-output fuzzy neural network controller, wherein the fuzzy neural network controller comprises a front-part network and a back-part network;
s200, performing off-line learning by adopting a genetic algorithm-particle swarm algorithm, and preliminarily determining the weight and the membership function center and width to be learned in the fuzzy neural network controller through learning of previous operation data of the system; and
s300, performing online learning by adopting a BP algorithm, establishing a connection weight of the fuzzy neural network controller, detecting the rotating speed of the corn harvester grain recovery device in real time, adjusting adjustable parameters in the fuzzy neural network controller in real time by combining the online learning, enabling the fuzzy neural network controller to adapt to the mechanical property change of the corn harvester grain recovery device, and tracking a set value of the corn harvesting rate.
2. The method of claim 1, wherein the front-part network is a four-layer network structure comprising:
an input layer, which takes the active power deviation and the active power deviation change rate of the corn harvester as the inputs y1, y2; the number of nodes of the input layer is N1 = 2, and the expression is:
y1 = e = c − y, y2 = de/dt
where e is the tracking error, de/dt is the tracking error change rate of the performance parameter variable, c is the mechanical performance change value, and y is the actual detected value of the mechanical performance;
fuzzification layer to input variable y1、y2Dividing the fuzzy layer into 7 fuzzy subsets { NB, NM, NS, O, PS, PM, PB }, wherein each fuzzy subset is used as a node of the fuzzy layer, and each node represents a language variable value; the membership functions of the linguistic variables are Gaussian functions, and the membership functions of the linguistic variables are respectively as follows:
Figure FDA0003186018480000013
wherein,
Figure FDA0003186018480000014
membership functions, y, of fuzzy universe of values of corresponding linguistic variables for inputs of the first layeriTo input the active power deviation of the corn harvester, cijAnd σij(i=1,2,…,n,j=1,2,…mi) Respectively the center and the width of the membership function, n is the number of input variables, miAs an input variable xiN is 2, m1=m27; number of nodes N of the blurring layer2=m1+m2=14;
a fuzzy rule calculation layer, which matches the antecedents of the fuzzy rules and calculates the fitness of each fuzzy rule:
αm = μ1j1(y1) · μ2j2(y2)
where αm is the result of the fuzzy inference operation, μ1j1(y1) and μ2j2(y2) are the membership values of the two inputs, j1 = j2 = 1,2,…,7, and m = 1,2,…,m1×m2 = 7×7 = 49; the number of nodes of the fuzzy rule calculation layer is N3 = 49;
and a normalization layer, which performs the normalization operation:
ᾱm = αm / Σi αi, i = 1,2,…,N3, m = 1,2,…,49
where ᾱm is the normalized weighting coefficient and the denominator is the accumulated fitness of all fuzzy rules; the number of nodes of the normalization layer is N4 = N3 = 49.
3. The method of claim 2, wherein the back-part network is a three-layer network structure comprising:
a first layer, which transfers the input variables to the next layer; the first layer has 3 nodes, the input value of the first node is x0 = 1, which provides the constant term in the fuzzy rule consequent, and the second and third nodes input x1 and x2, respectively;
a second layer, which calculates the consequent of each rule; it has 49 nodes in total and each node represents one rule:
ym = pm0·x0 + pm1·x1 + pm2·x2
where ym is the back-part network output of the m-th rule, pmk (k = 0,1,2; m = 1,2,3,…,49) are the connection weights, and x1 and x2 are the inputs of the second and third nodes, respectively;
and a third layer, which computes the controller output y:
y = Σm ᾱm·ym, m = 1,2,…,49
where ᾱm is the weighting coefficient, i.e. the normalized fitness of each fuzzy rule, ym is the consequent output of the m-th rule, and the outputs of the front-part network serve as the connection weights of the back-part network.
4. The method of claim 3, wherein the off-line learning of step S200 comprises the steps of:
s201, initializing population parameters, wherein the population parameters are initial positions of all particles;
s202, calculating the particle fitness F: F = abs(y − c), where y is the predicted output and c is the desired output;
s203, searching individual extremum and group extremum, and finding out individual minimum fitness value and global minimum fitness value of each particle;
s204, calculating the speed update and the position update of the particles by adopting the following formulas:
Vid(k+1) = ω·Vid(k) + c1·r1·(Pid(k) − Xid(k)) + c2·r2·(Pgd(k) − Xid(k));
Xid(k+1) = Xid(k) + Vid(k+1);
where ω is the inertia weight; d = 1,2,…,D; i = 1,2,…,n; k is the current iteration number; Vid is the velocity of the particle, Xid its current position, Pid the individual extremum and Pgd the group extremum; c1 and c2 are acceleration factors, which are non-negative constants; r1 and r2 are random numbers distributed in [0,1];
s205, calculating the particle fitness after the speed and the position are updated according to the formula in the step S202;
s206, updating the individual extremum and the group extremum according to the formula in the step S204;
s207, calculating the intersection of the current individual and the extreme value of the individual by adopting the following formula, and if the fitness value is reduced, accepting:
xij = xij(1 − b) + Pij·b, where xij is the randomly selected crossover position of the current individual, Pij is the corresponding position of the individual extremum, and b is a random number in [0,1];
s208, calculating the intersection of the current individual and the group extreme value by adopting the following formula, and if the fitness value is reduced, accepting:
xij = xij(1 − b) + Pgj·b, where Pgj is the corresponding position of the group extremum and b is a random number in [0,1];
S209, performing mutation on the current individual by adopting the following formula, and accepting the result if the fitness value decreases:
xij = xij + (xmax − xij)·f(g) if r ≥ 0.5, or xij = xij − (xij − xmin)·f(g) if r < 0.5,
wherein xij is a randomly selected dimension of the current individual, xmax is the upper bound of xij, xmin is the lower bound of xij, f(g) = r2·(1 − g/Gmax)^2, r2 is a random number, g is the current iteration number, Gmax is the maximum number of evolutionary generations, and r is a random number in [0,1];
and S210, terminating when the maximum number of evolutionary generations is reached; otherwise, returning to step S204.
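For illustration, a compact Python sketch of the hybrid particle swarm optimization loop of steps S201–S210, combining the standard velocity/position update with the crossover and mutation operations described above. The population size, bounds, inertia weight and acceleration factors are assumed values, `predict` is a hypothetical stand-in for the fuzzy neural network's forward pass, and the crossover/mutation acceptance test simply re-evaluates the fitness after each operation; none of these specifics are fixed by the claim.

```python
import numpy as np

def offline_learning(predict, c_target, dim, n=30, g_max=100,
                     omega=0.8, c1=1.5, c2=1.5, x_min=-1.0, x_max=1.0):
    """Sketch of the hybrid PSO of steps S201-S210 (illustrative only)."""
    rng = np.random.default_rng(0)

    def fitness(p):
        # S202: fitness is the absolute error between predicted and desired output
        return abs(predict(p) - c_target)

    # S201: initialize particle positions (and velocities)
    x = rng.uniform(x_min, x_max, (n, dim))
    v = rng.uniform(-1.0, 1.0, (n, dim))

    # S203: individual extrema and group extremum of the initial swarm
    p_fit = np.array([fitness(xi) for xi in x])
    p_best = x.copy()
    g_idx = int(np.argmin(p_fit))
    g_best, g_fit = x[g_idx].copy(), p_fit[g_idx]

    for g in range(1, g_max + 1):                       # S210: stop at G_max
        for i in range(n):
            # S204: velocity and position updates
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = (omega * v[i] + c1 * r1 * (p_best[i] - x[i])
                    + c2 * r2 * (g_best - x[i]))
            x[i] = np.clip(x[i] + v[i], x_min, x_max)
            f_i = fitness(x[i])                         # S205

            # S207/S208: crossover with the individual and the group extremum,
            # accepted only if the fitness value decreases
            for target in (p_best[i], g_best):
                b = rng.random()
                cand = x[i] * (1.0 - b) + target * b
                f_c = fitness(cand)
                if f_c < f_i:
                    x[i], f_i = cand, f_c

            # S209: mutate one randomly chosen dimension with a step that
            # shrinks as f(g) = r2 * (1 - g / G_max)^2
            j = rng.integers(dim)
            f_g = rng.random() * (1.0 - g / g_max) ** 2
            cand = x[i].copy()
            if rng.random() >= 0.5:
                cand[j] += (x_max - cand[j]) * f_g
            else:
                cand[j] -= (cand[j] - x_min) * f_g
            f_c = fitness(cand)
            if f_c < f_i:
                x[i], f_i = cand, f_c

            # S206: update the individual extremum and the group extremum
            if f_i < p_fit[i]:
                p_best[i], p_fit[i] = x[i].copy(), f_i
            if f_i < g_fit:
                g_best, g_fit = x[i].copy(), f_i
    return g_best
```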
5. The method of claim 4, wherein the adjustable parameters to be learned comprise the connection weights, and the centre values cij and widths σij of the membership functions.
6. The method of claim 5, wherein the learning algorithm of the connection weights pmk is:
pmk(τ+1) = pmk(τ) + Δpmk(τ+1) + υ(pmk(τ) − pmk(τ−1)), where Δpmk(τ+1) = −η·∂E/∂pmk, k = 0,1,2, m = 1,2,…,49;
the learning algorithm of the centre value cij of the membership function is:
cij(τ+1) = cij(τ) + Δcij(τ+1) + υ(cij(τ) − cij(τ−1));
the learning algorithm of the width σij is:
σij(τ+1) = σij(τ) + Δσij(τ+1) + υ(σij(τ) − σij(τ−1));
wherein i = 1,2, j = 1,2,…,7, τ denotes the current time, τ+1 the next time and τ−1 the previous time, υ is a momentum factor, Δcij(τ+1) = −η·∂E/∂cij, Δσij(τ+1) = −η·∂E/∂σij, E is the error cost function, and η is the learning rate.
7. The method of claim 6, wherein the error cost function E is:
E = (1/2)·(c − y)^2,
where c is the desired output and y is the actual output.
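A minimal sketch of the online gradient-with-momentum parameter update of claims 5–7, assuming the quadratic error cost E = (1/2)(c − y)^2 of claim 7. The finite-difference gradient and the toy model in the usage example are illustrative stand-ins for the analytic derivatives ∂E/∂cij and ∂E/∂σij, which these claims do not spell out in closed form.

```python
def momentum_update(theta, theta_prev, grad, eta=0.05, upsilon=0.9):
    """One parameter step of the form used in claim 6:
    theta(t+1) = theta(t) + delta(t+1) + upsilon*(theta(t) - theta(t-1)),
    with delta(t+1) = -eta * dE/dtheta (eta: learning rate, upsilon: momentum factor)."""
    delta = -eta * grad
    return theta + delta + upsilon * (theta - theta_prev)

def error_cost(c_desired, y_actual):
    """Claim 7: E = (1/2) * (c - y)^2."""
    return 0.5 * (c_desired - y_actual) ** 2

def numerical_grad(loss, param, eps=1e-6):
    """Finite-difference stand-in for dE/dparam (an assumption, not the patent's derivation)."""
    return (loss(param + eps) - loss(param - eps)) / (2.0 * eps)

# Toy usage: one update of a single centre value c_ij for a made-up model y = 2*c
c_prev, c_now, c_desired = 0.10, 0.12, 1.0
grad = numerical_grad(lambda c: error_cost(c_desired, 2.0 * c), c_now)
c_next = momentum_update(c_now, c_prev, grad)
```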
8. A corn harvester loss rate control device, characterized by comprising a fuzzy neural network controller, the fuzzy neural network controller comprising a front network and a back network, wherein the corn harvester loss rate control method of any one of claims 1 to 7 is adopted to reduce the corn harvesting loss rate through optimized control of the rotation speed of the grain recovery device of the corn harvester.
9. A storage medium storing a computer program which, when executed, performs the corn harvester loss rate control method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the corn harvester loss rate control method of any one of claims 1 to 7 via execution of the executable instructions.
CN202110861876.5A 2021-07-29 2021-07-29 Loss rate control method and device for corn harvester, storage medium and equipment Pending CN113625560A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110861876.5A CN113625560A (en) 2021-07-29 2021-07-29 Loss rate control method and device for corn harvester, storage medium and equipment

Publications (1)

Publication Number Publication Date
CN113625560A true CN113625560A (en) 2021-11-09

Family

ID=78381457

Country Status (1)

Country Link
CN (1) CN113625560A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133372A (en) * 2014-07-09 2014-11-05 河海大学常州校区 Room temperature control algorithm based on fuzzy neural network
CN105862136A (en) * 2015-01-20 2016-08-17 中国农业机械化科学研究院 An automatic control method and device for a cotton seed delinter
CN109526381A (en) * 2019-01-07 2019-03-29 中国农业大学 A kind of low damage threshing control system of maize harvesting machine and method
CN111670680A (en) * 2020-07-16 2020-09-18 中国农业大学 High-moisture-content corn harvesting roller rotating speed control system and control method
CN112558473A (en) * 2020-11-27 2021-03-26 中国农业机械化科学研究院 Working parameter correction method of silage harvester based on recurrent neural network algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHAO, JIANBO: "Research on the Intelligent Control System of Load Feedback for a Combine Harvester", China Master's Theses Full-text Database, Agricultural Science and Technology, 15 July 2010 (2010-07-15), pages 51-56 *
CHEN, YONGLU; JIANG, JING: "200 Solutions to Agricultural Machinery Operation Faults", Heilongjiang People's Publishing House, pages 320-326 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004339A (en) * 2021-11-12 2022-02-01 广东海洋大学 Width learning-based urban lighting system adjusting method and device and storage medium
CN114004339B (en) * 2021-11-12 2024-05-03 广东海洋大学 Urban lighting system adjusting method and device based on width learning and storage medium
CN118625682A (en) * 2024-08-15 2024-09-10 绍兴达伽马纺织有限公司 Energy-saving control method and system for textile manufacturing
CN118625682B (en) * 2024-08-15 2024-10-22 绍兴达伽马纺织有限公司 Energy-saving control method and system for textile manufacturing

Similar Documents

Publication Publication Date Title
Chen et al. Hybrid particle swarm optimization with spiral-shaped mechanism for feature selection
CN110514206B (en) Unmanned aerial vehicle flight path prediction method based on deep learning
Almeida et al. A multi-objective memetic and hybrid methodology for optimizing the parameters and performance of artificial neural networks
Qasem et al. Memetic multiobjective particle swarm optimization-based radial basis function network for classification problems
CN104133372B (en) Room temperature control algolithm based on fuzzy neural network
Cho et al. Exponentially increasing the capacity-to-computation ratio for conditional computation in deep learning
Ramchoun et al. New modeling of multilayer perceptron architecture optimization with regularization: an application to pattern classification
Ibrahim et al. An improved runner-root algorithm for solving feature selection problems based on rough sets and neighborhood rough sets
Serban et al. The bottleneck simulator: A model-based deep reinforcement learning approach
Pishkenari et al. Optimum synthesis of fuzzy logic controller for trajectory tracking by differential evolution
Aoun et al. Hidden markov model classifier for the adaptive particle swarm optimization
Suresh et al. A sequential learning algorithm for meta-cognitive neuro-fuzzy inference system for classification problems
Majhi et al. Oppositional Crow Search Algorithm with mutation operator for global optimization and application in designing FOPID controller
Mondal et al. A survey of reinforcement learning techniques: strategies, recent development, and future directions
CN116992151A (en) Online course recommendation method based on double-tower graph convolution neural network
CN113625560A (en) Loss rate control method and device for corn harvester, storage medium and equipment
Shi et al. Efficient hierarchical policy network with fuzzy rules
Worasucheep Forecasting currency exchange rates with an Artificial Bee Colony-optimized neural network
CN116569177A (en) Weight-based modulation in neural networks
Fofanah et al. Experimental Exploration of Evolutionary Algorithms and their Applications in Complex Problems: Genetic Algorithm and Particle Swarm Optimization Algorithm
Nayak et al. A pi-sigma higher order neural network for stock index forecasting
Ribeiro et al. Multi-criteria Decision-Making Techniques for the Selection of Pareto-optimal Machine Learning Models in a Drinking-Water Quality Monitoring Problem
Zamfirache et al. Adaptive reinforcement learning-based control using proximal policy optimization and slime mould algorithm with experimental tower crane system validation
Chen et al. Dynamic parameter optimization of evolutionary computation for on-line prediction of time series with changing dynamics
CN117273125A (en) Multi-model online self-adaptive preferential technology driven evolution algorithm based on reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211109