CN112651456B - Unmanned vehicle control method based on RBF neural network - Google Patents

Unmanned vehicle control method based on RBF neural network

Info

Publication number
CN112651456B
CN112651456B (application number CN202011618750.7A)
Authority
CN
China
Prior art keywords
data
neural network
sensor
training
rbf neural
Prior art date
Legal status
Active
Application number
CN202011618750.7A
Other languages
Chinese (zh)
Other versions
CN112651456A (en)
Inventor
敖邦乾
梁定勇
敖帮桃
Current Assignee
Zunyi Normal University
Original Assignee
Zunyi Normal University
Priority date
Filing date
Publication date
Application filed by Zunyi Normal University filed Critical Zunyi Normal University
Priority to CN202011618750.7A
Publication of CN112651456A
Application granted
Publication of CN112651456B
Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Neurology (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of vehicle control, and in particular to an unmanned vehicle control method based on an RBF (radial basis function) neural network, comprising the following steps: S100: obtaining obstacle data and preprocessing it; S200: establishing a control model based on an RBF neural network model; S300: constructing a training sample set and training the control model; S400: inputting the preprocessed obstacle data into the control model for processing and outputting control parameters. The method generates speed and angle control quantities from obstacle distance and angle information, realizing intelligent control; it simplifies the processing logic and complexity of sensor data during obstacle-avoidance control; and when the sensor configuration changes, no other change to the control-logic algorithm is needed, so the method is highly general and easy to extend and maintain.

Description

Unmanned vehicle control method based on RBF neural network
Technical Field
The invention relates to the technical field of vehicle control, in particular to an unmanned vehicle control method based on an RBF neural network.
Background
With the development of Internet of Things and Internet technology, intelligent robots and intelligent vehicles are widely used in scenarios such as exhibition-hall guidance, reception question answering, workshop management, warehouse management, freight logistics and smart homes.
Motion control is one of the core technologies of intelligent vehicles. In the prior art, the movement and path of an intelligent vehicle are controlled mainly by means of fixed routes or markers laid out in the scene. In application environments without a preset scene, the vehicle moves mainly according to an obstacle-avoidance algorithm: sensors detect the distribution of obstacles around the vehicle, and the data from each sensor are used to decide whether the direction of motion should be adjusted. The traditional obstacle-avoidance algorithm is a passive method, i.e. an obstacle is processed only once its distance falls below a certain threshold, while obstacles beyond the threshold are ignored. The response is therefore slow, and when the environment changes quickly the vehicle cannot respond in time, causing accidents.
Disclosure of Invention
The invention aims to provide an unmanned vehicle control method based on an RBF neural network that can make full use of environmental data through a neural network model, analyse that data to output driving control parameters, and respond quickly, making it suitable for scenarios with rapid environmental change and high vehicle speed.
The application provides the following technical scheme:
an unmanned vehicle control method based on an RBF neural network comprises the following steps:
s100: obtaining obstacle data and preprocessing it;
s200: establishing a control model based on an RBF neural network model;
s300: constructing a training sample set, and training the control model;
s400: inputting the preprocessed obstacle data into the control model for processing, and outputting control parameters.
Further, the preprocessing in S100 includes:
s101: acquiring sensor data of each sensor;
s102: carrying out filtering processing on sensor data of each sensor, wherein the filtering processing adopts a Kalman filtering algorithm;
s103: carrying out data fusion on the sensor data of each sensor to obtain obstacle data.
Further, in S200, the control model includes an input layer, a hidden layer and an output layer, where the number of input-layer neurons corresponds to the number of sensors; the hidden layer uses a Gaussian radial basis function as its activation function; and the output layer includes two neurons that output control target quantities for the vehicle speed and the angular velocity, respectively.
Further, S300 includes:
s301: initializing neural network parameters, and configuring learning rate and iteration precision;
s302: calculating the value of the root mean square error output by the network, ending training if the value of the root mean square error is smaller than or equal to the iteration precision, otherwise executing S303;
s303: the weight parameters, center parameters, and width parameters of the neural network model are iteratively trained using a gradient descent method, and then S302 is performed.
Further, in S303, the weight parameter, the centre parameter and the width parameter are adjusted according to the following gradient-descent formulas:

ω_ji(t) = ω_ji(t-1) - η·∂E/∂ω_ji(t-1)
c_ik(t) = c_ik(t-1) - η·∂E/∂c_ik(t-1)
d_ik(t) = d_ik(t-1) - η·∂E/∂d_ik(t-1)

wherein ω_ji(t) is the weight parameter between the jth output-layer neuron and the ith hidden-layer neuron at the tth iterative computation; c_ik(t) is the centre parameter of the ith hidden-layer neuron for the kth input-layer neuron at the tth iteration; d_ik is the width parameter corresponding to the centre c_ik(t); η is a learning factor;
i is an integer with 1 ≤ i ≤ n_i, where n_i is the number of hidden-layer neurons; j = 1, 2; k is an integer with 1 ≤ k ≤ n_k, where n_k is the number of input-layer neurons; 0 < η < 1;
E is the cost function of the RBF neural network, E = (1/2)·Σ_i Σ_j (O_ij - y_ij)², where O_ij is the expected value of the jth output-layer neuron for the ith input sample, and y_ij is the output value of the jth output neuron for the ith input sample.
Further, the unmanned vehicle includes two driving wheels and controls its steering angle through the speed difference of the two driving wheels; the method further includes:
s500: acquiring the current vehicle speed and angular velocity;
s600: controlling the two driving wheels according to the target vehicle speed and angular velocity output by the output layer and the current vehicle speed and angular velocity.
Further, the method further comprises the following steps:
s700: recording sensor data of each sensor and corresponding vehicle speed and angular speed to form a data set;
s800: screening abnormal data in the data set according to the data screening rule;
s900: correcting the abnormal data, and constructing a corrected data set according to the abnormal data correction result;
s1000: and performing iterative training on the control model through the corrected data set.
The technical scheme of the invention has the following beneficial effects:
The control model based on an RBF neural network model analyses the obstacle data and generates speed and angle control quantities from the obstacle distance and angle information, realising intelligent control. The neural network model makes full use of the environmental data, analyses it to output driving control parameters, and responds quickly, making the method suitable for scenarios with rapid environmental change and high vehicle speed.
Taking the sensor data of each sensor as input and outputting the control result through the control model simplifies the processing logic and complexity of the sensor data during obstacle-avoidance control. When the sensor configuration changes, e.g. a sensor is added or removed, only the number of input-layer neurons is adjusted and the model is retrained; no other change to the control-logic algorithm is needed, so the method is easy to develop and maintain, highly general, and applicable to intelligent vehicles with different hardware.
By recording the sensor data, vehicle speed and angular velocity during driving, screening and correcting abnormal data to construct a corrected data set, and iteratively training the control model with that data set, the control model keeps iterating during use, making control progressively more accurate.
Drawings
FIG. 1 is a control model block diagram in an embodiment of an unmanned vehicle control method based on RBF neural network of the present application;
fig. 2 is a schematic diagram of simulation operation in an embodiment of an unmanned vehicle control method based on an RBF neural network in the present application.
Detailed Description
The technical scheme of the application is further described in detail through the following specific embodiments:
example 1
The unmanned vehicle control method based on RBF neural network, disclosed in the embodiment, comprises the following contents:
s100: sensor data is acquired and preprocessed.
S200: and establishing a control model based on the RBF neural network model.
S300: and constructing a sample training sample set, and training a control model.
S400: and inputting the preprocessed data into a control model for processing, and outputting control parameters.
In this embodiment, the unmanned vehicle comprises a vehicle body whose chassis is provided with left and right driving wheels; the steering angle of the unmanned vehicle is controlled by the speed difference of the two driving wheels, so the driving wheels both drive and steer. Laser radars (lidars) are provided on the vehicle body as the main sensors: five in total, 45 degrees apart, detecting the obstacle distance in five directions respectively, i.e. directly left, left-front, directly ahead, right-front and directly right of the vehicle. The vehicle body is also provided with a 32-bit ARM-core processor, motors, a GPS positioning module and other circuit modules and devices.
In this embodiment, the preprocessing in S100 includes:
s101: acquiring sensor data of each sensor, namely acquiring sensor data of five laser radar sensors;
s102: the sensor data of each sensor is subjected to filtering treatment, and the Kalman filtering algorithm is adopted to perform filtering treatment so as to eliminate illumination influence and Gaussian noise influence;
s103: and carrying out data fusion on the sensor data of each sensor.
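The preprocessing chain of S101-S103 can be sketched minimally as follows. The noise variances, the channel count per batch, and the stacking-style fusion are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def kalman_filter_1d(z_seq, q=1e-3, r=0.05):
    # Scalar Kalman filter (random-walk model) for one lidar distance channel.
    # q: assumed process-noise variance, r: assumed measurement-noise variance.
    x, p = z_seq[0], 1.0          # initial state estimate and covariance
    filtered = []
    for z in z_seq:
        p += q                    # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the new measurement
        p *= (1.0 - k)
        filtered.append(x)
    return np.array(filtered)

def fuse(channels):
    # "Fusion" here is simply stacking the five filtered distances
    # into one obstacle-data vector per time step.
    return np.stack([kalman_filter_1d(c) for c in channels], axis=1)

# 5 lidar channels, 4 time steps each (toy data around 2 m with Gaussian noise)
raw = np.random.default_rng(0).normal(2.0, 0.1, size=(5, 4))
obstacle_data = fuse(raw)
print(obstacle_data.shape)   # (4, 5): one 5-dimensional input vector per step
```

Each row of `obstacle_data` would then be one input vector for the RBF control model.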
In S200, as shown in fig. 1, the control model includes an input layer, a hidden layer and an output layer, where the number of input-layer neurons corresponds to the number of sensors. In this application the input layer consists of five neurons; the data detected by the five lidars serve as signal-source nodes, and the transmitted information is the distance and angle of environmental obstacles.
The hidden layer adopts the Gaussian radial basis function φ_i(x) = exp(-‖x - c_i‖² / (2·d_i²)) as the activation function; it consists of six neurons that apply a nonlinear transformation to the input lidar data.
The output layer comprises two neurons that apply a linear weighted combination to the information output by the hidden-layer neurons and respectively output control quantities for the vehicle speed and the angular velocity.
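The three-layer structure just described (five inputs, six Gaussian hidden units, two linear outputs) can be sketched as a forward pass; the centre and width initial values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 5, 6, 2           # five lidars, six Gaussian units, (speed, angular speed)

C = rng.uniform(0.0, 5.0, (n_hidden, n_in))   # centres (assumed init range, metres)
D = np.full(n_hidden, 1.5)                    # widths (assumed)
W = rng.normal(0.0, 0.1, (n_out, n_hidden))   # hidden -> output weights (assumed init)

def rbf_forward(x):
    # Gaussian radial basis activation, then linear weighted output.
    phi = np.exp(-np.sum((x - C) ** 2, axis=1) / (2.0 * D ** 2))
    return W @ phi                            # [speed target, angular-speed target]

y = rbf_forward(np.array([2.0, 3.0, 4.0, 3.0, 2.0]))
print(y.shape)   # (2,)
```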
S300 includes:
s301: initializing neural network parameters, and configuring learning rate eta and iteration precision epsilon;
the initialization process is as follows:
a. determining an input vector X: x= [ X ] 1 ,x 2 ,x 3 ,x 4 ,x 5 ,x 6 ] T
b. Determining an output vector Y and a desired output vector O: y= [ Y ] 1 ,y 2 ] T ,O=[o 1 ,o 2 ] T
c. Initializing weights from an implicit layer to an output layer: w (W) ij =[ω i1i2 ] T ,(i=1,2,3,4,5,6);
d. Initializing central parameters of neurons of an hidden layer: c (C) k =[c i1 ,c i2 ,c i3 ,c i4 ,c i5 ,c i6 ];
e. Initializing a width vector: d (D) k =[d k1 ,d k2 ,d k3 ,d k4 ,d k5 ,d k6 ]。
Calculating the output value of each neuron of the hidden layer after the initialization is finished, and calculating the output of each neuron of the output layer;
s302: calculating the root-mean-square error (RMS) of the network output,

RMS = √( (1/N)·Σ_{i=1..N} Σ_{j=1,2} (O_ij - y_ij)² )

where N is the number of training samples, O_ij is the expected value of the jth output neuron for the ith input sample, and y_ij is the network output value of the jth output neuron for the ith input sample. If RMS ≤ ε, training ends; otherwise S303 is executed.
S303: the weight parameters, center parameters, and width parameters of the neural network model are iteratively trained using a gradient descent method, and then S302 is performed.
In S303, the weight parameter, the centre parameter and the width parameter are adjusted according to the following formulas:

ω_ji(t) = ω_ji(t-1) - η·∂E/∂ω_ji(t-1)
c_ik(t) = c_ik(t-1) - η·∂E/∂c_ik(t-1)
d_ik(t) = d_ik(t-1) - η·∂E/∂d_ik(t-1)

wherein ω_ji(t) is the weight parameter between the jth output-layer neuron and the ith hidden-layer neuron at the tth iterative computation; c_ik(t) is the centre parameter of the ith hidden-layer neuron for the kth input-layer neuron at the tth iteration; d_ik is the width parameter corresponding to the centre c_ik(t); η is a learning factor;
i is an integer with 1 ≤ i ≤ n_i, where n_i is the number of hidden-layer neurons; j = 1, 2; k is an integer with 1 ≤ k ≤ n_k, where n_k is the number of input-layer neurons; 0 < η < 1. In the present embodiment, n_i = 6 (i = 1, 2, ..., 6) and n_k = 5 (k = 1, 2, ..., 5).
E is the cost function of the RBF neural network, E = (1/2)·Σ_i Σ_j (O_ij - y_ij)², where O_ij is the expected value of the jth output-layer neuron for the ith input sample and y_ij is the output value of the jth output neuron for the ith input sample.
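The iterative training of S301-S303 can be sketched end to end. The toy data set, the made-up targets, the learning factor and the epoch cap are all assumptions for illustration; the gradient expressions follow from the Gaussian activation and the cost function E:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_h, n_out = 5, 6, 2
C = rng.uniform(0.0, 5.0, (n_h, n_in))     # centres (assumed init range)
D = np.full(n_h, 1.5)                      # widths
W = rng.normal(0.0, 0.1, (n_out, n_h))     # hidden -> output weights
eta = 0.01                                 # learning factor, 0 < eta < 1

# Toy training set: 20 samples of 5 lidar distances with made-up targets
X = rng.uniform(0.0, 5.0, (20, n_in))
O = np.stack([X.mean(axis=1) / 5.0,              # toy "speed" target
              (X[:, 0] - X[:, -1]) / 5.0], axis=1)  # toy "angular speed" target

for epoch in range(200):
    err2 = 0.0
    for x, o in zip(X, O):
        phi = np.exp(-np.sum((x - C) ** 2, axis=1) / (2.0 * D ** 2))
        y = W @ phi
        e = o - y                          # (O_ij - y_ij)
        err2 += np.sum(e ** 2)
        dist2 = np.sum((x - C) ** 2, axis=1)
        g = (W.T @ e) * phi                # back-propagated signal per hidden unit
        W += eta * np.outer(e, phi)                         # weight update
        C += eta * g[:, None] * (x - C) / D[:, None] ** 2   # centre update
        D += eta * g * dist2 / D ** 3                       # width update
        D = np.maximum(D, 0.1)             # keep widths positive (numerical safety)
    rms = np.sqrt(err2 / X.shape[0])       # S302 stopping criterion
    if rms <= 1e-3:
        break
print(rms)
```

The per-parameter updates are the gradient-descent formulas above with the partial derivatives written out for the Gaussian activation.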
When the technical scheme of this embodiment runs, as shown in fig. 2, the obstacle distances are obtained through the five lidar sensors and fed as input into the control model based on the RBF neural network model; control quantities for speed and angle are generated from the obstacle distance and angle information, realising intelligent control and simplifying the processing logic and complexity of the sensor data during obstacle-avoidance control.
Example two
The present embodiment differs from the first embodiment in that the two neurons output control target quantities for the vehicle speed and the angular velocity, respectively, and the method further includes:
s500: acquiring the speed and the angular speed of the current vehicle;
s600: the two driving wheels are controlled according to the target amounts of the output vehicle speed and angular velocity of the output layer and the current vehicle speed and angular velocity.
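S500-S600 leave the wheel-level conversion implicit; a minimal sketch of the differential-drive kinematics implied by the speed-difference steering (the track width value and the function name are assumptions):

```python
def wheel_speeds(v, omega, track=0.30):
    # Differential-drive kinematics: the steering angle is set by the speed
    # difference of the two driving wheels. `track` is the assumed wheel
    # separation in metres, for illustration only.
    v_left = v - omega * track / 2.0
    v_right = v + omega * track / 2.0
    return v_left, v_right

# target speed 1.0 m/s with a 0.5 rad/s left turn
vl, vr = wheel_speeds(v=1.0, omega=0.5)
print(vl, vr)   # 0.925 1.075
```

In practice the controller would drive each wheel toward its target speed starting from the current measured speed and angular velocity.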
Example III
The difference between the present embodiment and the second embodiment is that in this embodiment, the method further includes:
s700: recording sensor data of each sensor and corresponding vehicle speed and angular speed to form a data set;
s800: screening abnormal data in the data set according to the data screening rule;
s900: correcting the abnormal data, and constructing a corrected data set according to the abnormal data correction result;
s1000: and performing iterative training on the control model through the corrected data set.
In the technical scheme of this embodiment, screening the abnormal data reveals where the current control model is inaccurate; iterative training on the constructed corrected data set then improves training accuracy.
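The patent does not specify the data screening rule of S800 or the correction of S900; one plausible sketch (the z-score rule, the threshold and the clipping correction are assumptions for illustration):

```python
import numpy as np

def screen_and_correct(dataset, z_thresh=3.0):
    # Flag rows in which any column deviates more than z_thresh standard
    # deviations from the column mean (one possible screening rule), then
    # "correct" by clipping values into the mean +/- z_thresh*std band.
    mu = dataset.mean(axis=0)
    sigma = dataset.std(axis=0) + 1e-12          # avoid division by zero
    z = np.abs((dataset - mu) / sigma)
    abnormal = (z > z_thresh).any(axis=1)
    corrected = np.clip(dataset, mu - z_thresh * sigma, mu + z_thresh * sigma)
    return corrected, abnormal

# toy data set: 50 normal rows plus one obvious outlier in column 0
data = np.vstack([np.zeros((50, 2)), [[100.0, 0.0]]])
corrected, abnormal = screen_and_correct(data)
print(abnormal.sum())   # 1
```

The corrected rows would then form the corrected data set used for the iterative training of S1000.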
Example IV
The difference between this embodiment and the third embodiment is that in this embodiment, the method further includes:
s1100: constructing a sensor data inference model based on the LSTM neural network model;
s1200: constructing a training data set to train a sensor data inference model;
s1300: inputting the data of the existing data set and the training sample set into the sensor data inference model, and simultaneously inputting the positions and number of the sensors to be predicted;
s1400: the sensor data inference model predicts the sensor data corresponding to the other sensor positions according to the data of the existing data set and the training sample set;
s1500: constructing data training sets corresponding to the other sensor counts according to the prediction result together with the existing data set and the training sample set;
s1600: and training the control model according to the data training set obtained in the step S1500.
In this technical scheme, by building a sensor data inference model, the data corresponding to other sensor positions can be inferred on the basis of the existing data set and training sample set, and training sets and control models for different numbers of sensors can be constructed. For example, given an existing training data set for five sensors, this embodiment can generate training sets and control models for six, seven or more sensors, so the control model can be used directly after sensors are added, without collecting data again to build a data set, reducing workload and improving efficiency.
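The patent does not detail the LSTM-based inference model of S1100-S1600; a minimal single-cell sketch in plain NumPy (the hidden size, the random initialization and the regression head predicting one hypothetical extra sensor channel are all assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_h = 5, 8                  # five known lidar channels, assumed hidden size

def init(shape):
    return rng.normal(0.0, 0.1, shape)

Wf, Wi, Wc, Wo = (init((n_h, n_in + n_h)) for _ in range(4))  # gate weights
bf = np.ones(n_h)                 # forget-gate bias initialised to 1 (common choice)
bi, bc, bo = np.zeros(n_h), np.zeros(n_h), np.zeros(n_h)
Wy, by = init((1, n_h)), np.zeros(1)   # regression head: predicted extra channel

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_predict(seq):
    # Run a single-layer LSTM over a sequence of 5-dim sensor vectors and
    # return the predicted reading at a hypothetical extra sensor position.
    h = np.zeros(n_h)
    c = np.zeros(n_h)
    for x in seq:
        z = np.concatenate([x, h])
        f = sigmoid(Wf @ z + bf)            # forget gate
        i_g = sigmoid(Wi @ z + bi)          # input gate
        o = sigmoid(Wo @ z + bo)            # output gate
        c = f * c + i_g * np.tanh(Wc @ z + bc)
        h = o * np.tanh(c)
    return (Wy @ h + by)[0]

pred = lstm_predict(rng.uniform(0.0, 5.0, (10, 5)))   # 10 time steps of 5 lidars
```

Training such a model (S1200) would fit the gate weights and the head to recorded sensor sequences; the sketch above shows only the forward pass.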
Example five
The difference between the present embodiment and the third embodiment is that a high-precision control model and a low-precision control model are provided, differing in the number of analysis dimensions involved, i.e. the numbers of input-layer and hidden-layer neurons; the method further includes:
s1700: controlling driving through a low-precision control model, and storing current driving path data by a storage module;
s1800: carrying out optimization analysis on the path data through a high-precision control model to generate an optimized driving path;
s1900: when the situation that the vehicle runs on the same path again is detected, the optimized driving path is called, and driving control is carried out according to the optimized driving path.
In this embodiment, the high-precision control model may be stored on a server or set in the vehicle control system. By using the low-precision and the high-precision control models in turn, the vehicle's processing overhead can be reduced while repeatedly driven paths are optimized, ensuring that the optimal path is reached.
The foregoing is merely an embodiment of the present invention. Specific structures and features that are well known in these schemes are not described in detail here, since those skilled in the art know the common technical knowledge in the field before the application or priority date and, in light of the teaching of this application, can combine it with their own abilities to complete and implement the scheme; some typical known structures or methods should not become an obstacle to practising this application. It should be noted that those skilled in the art may make modifications and improvements without departing from the structure of the present invention; these should also be considered within the protection scope of the present invention and do not affect the effect of its implementation or the utility of the patent. The protection scope of this application is defined by the content of the claims, and the specific embodiments in the description may be used to interpret the content of the claims.

Claims (7)

1. An unmanned vehicle control method based on RBF neural network is characterized in that: the method comprises the following steps:
s100: obtaining obstacle data and preprocessing;
s200: establishing a control model based on an RBF neural network model;
s300: constructing a training sample set, and training the control model;
s400: inputting the preprocessed barrier data into a control model for processing, and outputting control parameters;
s700: recording sensor data of each sensor and corresponding vehicle speed and angular speed to form a data set;
s1100: constructing a sensor data inference model based on the LSTM neural network model;
s1200: constructing a training data set to train a sensor data inference model;
s1300: inputting the data of the existing data set and the training sample set into the sensor data inference model, and simultaneously inputting the positions and number of the sensors to be predicted; wherein the training data set is the training sample set;
s1400: the sensor data inference model predicts the sensor data corresponding to the other sensor positions according to the data of the existing data set and the training sample set;
s1500: constructing data training sets corresponding to the other sensor counts according to the prediction result together with the existing data set and the training sample set;
s1600: and training the control model according to the data training set obtained in the step S1500.
2. The RBF neural network-based unmanned vehicle control method of claim 1, wherein: the preprocessing in S100 includes:
s101: acquiring sensor data of each sensor;
s102: carrying out filtering processing on sensor data of each sensor, wherein the filtering processing adopts a Kalman filtering algorithm;
s103: and carrying out data fusion on the sensor data of each sensor to obtain barrier data.
3. The RBF neural network-based unmanned vehicle control method of claim 2, wherein: in the step S200, the control model includes an input layer, an implicit layer, and an output layer, where the number of neurons in the input layer corresponds to the number of sensors; the hidden layer adopts a Gaussian radial basis function as an activation function; the output layer includes two neurons that output control target amounts for the vehicle speed and the angular velocity, respectively.
4. The RBF neural network-based unmanned vehicle control method of claim 3, wherein: s300 includes:
s301: initializing neural network parameters, and configuring learning rate and iteration precision;
s302: calculating the value of the root mean square error output by the network, ending training if the value of the root mean square error is smaller than or equal to the iteration precision, otherwise executing S303;
s303: the weight parameters, center parameters, and width parameters of the neural network model are iteratively trained using a gradient descent method, and then S302 is performed.
5. The RBF neural network-based unmanned vehicle control method of claim 4, wherein: in S303, the weight parameter, the center parameter, and the width parameter are adjusted according to the following formula:
ω_ji(t) = ω_ji(t-1) - η·∂E/∂ω_ji(t-1)
c_ik(t) = c_ik(t-1) - η·∂E/∂c_ik(t-1)
d_ik(t) = d_ik(t-1) - η·∂E/∂d_ik(t-1)

wherein ω_ji(t) is the weight parameter between the jth output-layer neuron and the ith hidden-layer neuron in the tth iterative computation; c_ik(t) is the centre parameter of the ith hidden-layer neuron for the kth input-layer neuron at the tth iteration; d_ik is the width parameter corresponding to the centre c_ik(t); η is a learning factor;
i is an integer and 1 ≤ i ≤ n_i, n_i being the number of hidden-layer neurons; j = 1, 2; k is an integer and 1 ≤ k ≤ n_k, n_k being the number of input-layer neurons; 0 < η < 1;
E is the cost function of the RBF neural network, E = (1/2)·Σ_i Σ_j (O_ij - y_ij)², O_ij being the expected value of the jth output-layer neuron for the ith input sample and y_ij the output value of the jth output neuron for the ith input sample.
6. The RBF neural network-based unmanned vehicle control method of claim 5, wherein: the unmanned vehicle includes two driving wheels and controls the steering angle through the speed difference of the two driving wheels, and the method further comprises:
s500: acquiring the speed and the angular speed of the current vehicle;
s600: controlling the two driving wheels according to the target vehicle speed and angular velocity output by the output layer and the current vehicle speed and angular velocity.
7. The RBF neural network-based unmanned vehicle control method of claim 6, wherein: further comprises:
s800: screening abnormal data in the data set according to the data screening rule;
s900: correcting the abnormal data, and constructing a corrected data set according to the abnormal data correction result;
s1000: performing iterative training on the control model through the corrected data set.
CN202011618750.7A 2020-12-31 2020-12-31 Unmanned vehicle control method based on RBF neural network Active CN112651456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011618750.7A CN112651456B (en) 2020-12-31 2020-12-31 Unmanned vehicle control method based on RBF neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011618750.7A CN112651456B (en) 2020-12-31 2020-12-31 Unmanned vehicle control method based on RBF neural network

Publications (2)

Publication Number Publication Date
CN112651456A CN112651456A (en) 2021-04-13
CN112651456B true CN112651456B (en) 2023-08-08

Family

ID=75366629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011618750.7A Active CN112651456B (en) 2020-12-31 2020-12-31 Unmanned vehicle control method based on RBF neural network

Country Status (1)

Country Link
CN (1) CN112651456B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113219130B (en) * 2021-04-16 2022-08-02 中国农业大学 Calibration method and test platform of multi-parameter gas sensor
CN114894289B (en) * 2022-06-20 2024-02-02 江苏省计量科学研究院(江苏省能源计量数据中心) Large-mass comparator based on data fusion algorithm
US20240092397A1 (en) * 2022-08-26 2024-03-21 Zoox, Inc. Interpretable kalman filter comprising neural network component(s) for autonomous vehicles

Citations (19)

Publication number Priority date Publication date Assignee Title
US5761626A (en) * 1995-12-26 1998-06-02 Ford Global Technologies, Inc. System and method for distinguishing and characterizing motor vehicles for control of automatic drivers
CN103454919A (en) * 2013-08-19 2013-12-18 江苏科技大学 Motion control system and method of mobile robot in intelligent space
CN106054893A (en) * 2016-06-30 2016-10-26 江汉大学 Intelligent vehicle control system and method
CN107121924A (en) * 2017-03-03 2017-09-01 中国农业大学 Visual environment regulation and control system and method based on an RBF neural network
CN108427417A (en) * 2018-03-30 2018-08-21 北京图森未来科技有限公司 Automatic driving control system and method, computer server and automatic driving vehicle
CN108594788A (en) * 2018-03-27 2018-09-28 西北工业大学 Aircraft actuator fault detection and diagnosis method based on a deep random forest algorithm
CN208393354U (en) * 2018-06-22 2019-01-18 南京航空航天大学 Automatic driving steering system for lane-changing conditions based on a BP neural network and safe distance
CN109447164A (en) * 2018-11-01 2019-03-08 厦门大学 Motion behavior pattern classification method, system and device
CN110509916A (en) * 2019-08-30 2019-11-29 的卢技术有限公司 Vehicle body attitude stabilization method and system based on a deep neural network
CN110716498A (en) * 2019-10-30 2020-01-21 北京航天发射技术研究所 Sensor control method and device for vehicle-mounted erecting frame
CN110782033A (en) * 2019-10-28 2020-02-11 玲睿(上海)医疗科技有限公司 AGV positioning method based on fuzzy neural network
CN110782481A (en) * 2019-10-18 2020-02-11 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) Unmanned ship intelligent decision method and system
CN111542836A (en) * 2017-10-04 2020-08-14 华为技术有限公司 Method for selecting action for object by using neural network
CN111624522A (en) * 2020-05-29 2020-09-04 上海海事大学 Transformer fault diagnosis method using an ant colony optimization-based RBF neural network
CN111753371A (en) * 2020-06-04 2020-10-09 纵目科技(上海)股份有限公司 Training method, system, terminal and storage medium for vehicle body control network model
CN111886170A (en) * 2018-03-28 2020-11-03 日立汽车系统株式会社 Vehicle control device
CN111967087A (en) * 2020-07-16 2020-11-20 山东派蒙机电技术有限公司 Neural network-based online vehicle decision control model establishing and evaluating method
CN112084709A (en) * 2020-09-04 2020-12-15 西安交通大学 Large-scale generator insulation state evaluation method based on genetic algorithm and radial basis function neural network
CN112109705A (en) * 2020-09-23 2020-12-22 同济大学 Collision avoidance optimization control system and method for extended-range distributed driving electric vehicle

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004030782A1 (en) * 2004-06-25 2006-01-19 Fev Motorentechnik Gmbh Vehicle control unit with a neural network
KR102521657B1 * 2018-10-15 2023-04-14 삼성전자주식회사 Method and apparatus for controlling a vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Chaoqun. Research on Automatic Steering Technology of Intelligent Vehicles Based on Deep Learning. China Master's Theses Full-text Database, Engineering Science and Technology II, 2019, (No. 7), pp. C035-118. *

Also Published As

Publication number Publication date
CN112651456A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN112651456B (en) Unmanned vehicle control method based on RBF neural network
CN109866752B (en) Dual-mode parallel vehicle trajectory tracking method based on predictive control
CN109885883B (en) Unmanned vehicle transverse motion control method based on GK clustering algorithm model prediction
Gao et al. Robust lateral trajectory following control of unmanned vehicle based on model predictive control
CN112356830B (en) Intelligent parking method based on model reinforcement learning
Alcala et al. Gain‐scheduling LPV control for autonomous vehicles including friction force estimation and compensation mechanism
Min et al. RNN-based path prediction of obstacle vehicles with deep ensemble
Betz et al. A software architecture for the dynamic path planning of an autonomous racecar at the limits of handling
CN111103798B (en) AGV path tracking method based on backstepping sliding mode control
Sun et al. Safe and smooth motion planning for Mecanum-Wheeled robot using improved RRT and cubic spline
Wu et al. Route planning and tracking control of an intelligent automatic unmanned transportation system based on dynamic nonlinear model predictive control
Hegedüs et al. Motion planning for highly automated road vehicles with a hybrid approach using nonlinear optimization and artificial neural networks
CN111257853B (en) Automatic driving system laser radar online calibration method based on IMU pre-integration
CN116337045A (en) High-speed map building navigation method based on karto and teb
Azam et al. N 2 C: neural network controller design using behavioral cloning
Rasib et al. Are Self‐Driving Vehicles Ready to Launch? An Insight into Steering Control in Autonomous Self‐Driving Vehicles
Farag Real‐time NMPC path tracker for autonomous vehicles
Tian et al. Personalized lane change planning and control by imitation learning from drivers
Sebastian et al. Neural network based heterogeneous sensor fusion for robot motion planning
CN113184040A (en) Steer-by-wire control method and system for unmanned vehicles based on driver steering intention
Wang et al. Extraction of preview elevation information based on terrain mapping and trajectory prediction in real-time
Lan et al. Trajectory tracking system of wheeled robot based on immune algorithm and sliding mode variable structure
Quan et al. Neural Network-Based Indoor Autonomously-Navigated AGV Motion Trajectory Data Fusion
CN113959446B (en) Autonomous logistics transportation navigation method for robot based on neural network
Xu et al. Actor–critic reinforcement learning for autonomous control of unmanned ground vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant