CN112651456A - Unmanned vehicle control method based on RBF neural network - Google Patents
- Publication number
- CN112651456A CN112651456A CN202011618750.7A CN202011618750A CN112651456A CN 112651456 A CN112651456 A CN 112651456A CN 202011618750 A CN202011618750 A CN 202011618750A CN 112651456 A CN112651456 A CN 112651456A
- Authority
- CN
- China
- Prior art keywords
- data
- neural network
- rbf neural
- control
- method based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to the technical field of vehicle control, and in particular to an unmanned vehicle control method based on an RBF neural network, which comprises the following steps: S100: acquiring obstacle data and preprocessing the obstacle data; S200: establishing a control model based on an RBF neural network model; S300: constructing a training sample set and training the control model; S400: inputting the preprocessed obstacle data into the control model for processing and outputting control parameters. With this method, control quantities for speed and angle can be generated from the distance and angle information of obstacles, realizing intelligent control; the processing logic and complexity of the sensor data in obstacle-avoidance control are simplified; and when the sensors change, no other changes to the control-logic algorithm are needed, so the method is highly general and easy to extend and maintain.
Description
Technical Field
The invention relates to the technical field of vehicle control, and in particular to an unmanned vehicle control method based on an RBF neural network.
Background
With the development of the internet of things and internet technology, intelligent robots or intelligent vehicles are widely applied to scenes such as exhibition hall navigation, greeting and answering, workshop management, warehousing management, freight logistics, intelligent home and the like.
Motion control is one of the core technologies of intelligent vehicles. In the prior art, the movement and path of an intelligent vehicle are mainly controlled through fixed routes or by recognizing markers placed in the scene. For application environments without a preset scene, movement is mainly based on an obstacle-avoidance algorithm: sensors detect the distribution of obstacles around the vehicle, and the data from each sensor is used to judge whether the direction of movement needs to be adjusted.
Disclosure of Invention
The invention aims to provide an unmanned vehicle control method based on an RBF neural network, which makes full use of environmental data through a neural network model, analyzes that data and outputs travel control parameters, and has a fast response speed, making it suitable for scenes with rapidly changing environments and high vehicle speeds.
The application provides the following technical scheme:
an unmanned vehicle control method based on an RBF neural network comprises the following steps:
S100: acquiring obstacle data and preprocessing the obstacle data;
S200: establishing a control model based on an RBF neural network model;
S300: constructing a training sample set and training the control model;
S400: inputting the preprocessed obstacle data into the control model for processing and outputting control parameters.
Further, the preprocessing in S100 includes:
S101: acquiring data from each sensor;
S102: filtering the data of each sensor, wherein the filtering adopts a Kalman filtering algorithm;
S103: performing data fusion on the data of each sensor to obtain the obstacle data.
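Step S102 applies Kalman filtering per sensor channel. A minimal sketch, treating each lidar range as a scalar constant-state system; the process-noise and measurement-noise variances `q` and `r` are hypothetical tuning values, not taken from the patent:

```python
def kalman_filter_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly varying distance reading.

    q: process-noise variance, r: measurement-noise variance
    (both are illustrative tuning values).
    """
    x, p = x0, p0
    out = []
    for z in measurements:
        # Predict: constant-state model, so only the uncertainty grows.
        p = p + q
        # Update: blend the prediction with the new lidar reading.
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        out.append(x)
    return out

# Noisy readings around a true distance of 2.0 m
filtered = kalman_filter_1d([2.1, 1.9, 2.05, 1.95, 2.0])
```

With more samples the estimate converges toward the true distance while smoothing the Gaussian measurement noise the embodiment mentions.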
Further, in S200 the control model includes an input layer, a hidden layer, and an output layer, where the number of neurons in the input layer corresponds to the number of sensors; the hidden layer adopts a Gaussian radial basis function as its activation function; and the output layer includes two neurons that output control target amounts for the vehicle speed and the angular velocity, respectively.
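The Gaussian radial basis function referred to here appears as an image in the original patent and did not survive text extraction. In its standard form, consistent with the center and width parameters the patent defines, the hidden-layer activation and linear output are:

```latex
\varphi_i(X) = \exp\!\left(-\frac{\lVert X - C_i \rVert^2}{2 d_i^{\,2}}\right),
\qquad
y_j = \sum_{i=1}^{n_i} \omega_{ji}\,\varphi_i(X), \quad j = 1, 2,
```

where $C_i$ and $d_i$ are the center and width of the $i$th hidden neuron and $\omega_{ji}$ are the output-layer weights. This is a reconstruction of the standard form, not the patent's verbatim formula.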
Further, S300 includes:
S301: initializing the neural network parameters and configuring the learning rate and iteration precision;
S302: calculating the root mean square error of the network output; if it is less than or equal to the iteration precision, ending the training, otherwise executing S303;
S303: iteratively training the weight, center, and width parameters of the neural network model using a gradient descent method, then executing S302.
Further, in S303, the weight parameter, the center parameter, and the width parameter are adjusted according to the following formulas:
wherein ω_ji(t) is the weight parameter between the jth output-layer neuron and the ith hidden-layer neuron at the tth iteration; c_ik(t) is the center parameter from the ith hidden-layer neuron to the kth input-layer neuron at the tth iteration; d_ik is the width parameter corresponding to the center c_ik(t); and η is the learning factor;
i is an integer from 1 to n_i, where n_i is the number of hidden-layer neurons; j = 1, 2; k is an integer from 1 to n_k, where n_k is the number of input-layer neurons; 0 < η < 1;
E is the cost function of the RBF neural network; O_ij is the expected value of the jth output-layer neuron when the ith sample is input; y_ij is the output value of the jth output-layer neuron when the ith sample is input.
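The adjustment formulas themselves are images in the original patent and are missing from this text. Standard gradient-descent updates consistent with the symbol definitions above (a reconstruction, not the patent's verbatim formulas) take the form:

```latex
\omega_{ji}(t) = \omega_{ji}(t-1) - \eta\,\frac{\partial E}{\partial \omega_{ji}(t-1)},\qquad
c_{ik}(t) = c_{ik}(t-1) - \eta\,\frac{\partial E}{\partial c_{ik}(t-1)},\qquad
d_{ik}(t) = d_{ik}(t-1) - \eta\,\frac{\partial E}{\partial d_{ik}(t-1)},
```

with the cost function

```latex
E = \frac{1}{2}\sum_{i}\sum_{j=1}^{2}\bigl(O_{ij} - y_{ij}\bigr)^{2}.
```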
Further, the unmanned vehicle includes two drive wheels and controls its steering angle through the speed difference between the two drive wheels; the method further includes:
S500: acquiring the current speed and angular velocity of the vehicle;
S600: controlling the two drive wheels based on the vehicle-speed and angular-velocity target amounts from the output layer and the current vehicle speed and angular velocity.
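For a chassis steered by the speed difference of two drive wheels, the output-layer targets (v, ω) map to left/right wheel speeds by standard differential-drive kinematics. A minimal sketch; the track width is a hypothetical parameter:

```python
def wheel_speeds(v, omega, track_width=0.40):
    """Convert a body-frame command (v in m/s, omega in rad/s) into
    left/right drive-wheel speeds for a differential-drive chassis.

    track_width is the distance between the two drive wheels
    (an illustrative value, not from the patent).
    """
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left, v_right

straight = wheel_speeds(1.0, 0.0)   # both wheels equal
spin = wheel_speeds(0.0, 1.0)       # wheels opposite: turn in place
```

A controller comparing these targets with the measured current speed and angular velocity (S500) would then drive each wheel motor toward its target.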
Further, the method further includes:
S700: recording the data of each sensor and the corresponding vehicle speed and angular velocity to form a data set;
S800: screening abnormal data in the data set according to a data screening rule;
S900: correcting the abnormal data and constructing a correction data set from the correction results;
S1000: iteratively training the control model with the correction data set.
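The patent leaves the data screening rule (S800) and the correction method (S900) unspecified. One minimal, hypothetical choice is z-score screening with neighbour-mean correction:

```python
from statistics import mean, stdev

def screen_outliers(values, z_thresh=2.0):
    """Flag samples whose z-score exceeds z_thresh.

    This is a hypothetical screening rule; the patent does not
    specify which rule is used.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_thresh]

def correct_outliers(values, bad_idx):
    """Replace each flagged sample with the mean of its neighbours."""
    out = list(values)
    for i in bad_idx:
        lo = out[i - 1] if i > 0 else out[i + 1]
        hi = out[i + 1] if i < len(out) - 1 else out[i - 1]
        out[i] = (lo + hi) / 2.0
    return out

speeds = [1.0, 1.1, 0.9, 1.0, 9.0, 1.0]   # one implausible spike
bad = screen_outliers(speeds)
corrected = correct_outliers(speeds, bad)
```

The corrected sequences would then form the correction data set used for the iterative retraining of S1000.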
The technical scheme of the invention has the beneficial effects that:
In the technical scheme of the invention, the obstacle data are analyzed by a control model based on an RBF neural network model, and control quantities for speed and angle can be generated from the distance and angle information of obstacles, realizing intelligent control. The neural network model makes full use of the environmental data, analyzing it and outputting travel control parameters with a fast response speed, which suits scenes with rapidly changing environments and high vehicle speeds.
In the technical scheme of the invention, the data from each sensor is used as input and the control result is output by the control model, which simplifies the processing logic and complexity of sensor data during obstacle-avoidance control. When the sensors change, for example when a sensor is added or removed, only the number of input-layer neurons needs to be adjusted and the model retrained; no other changes to the control-logic algorithm are needed. The method is therefore easy to extend and maintain, highly general, and applicable to intelligent vehicles with different hardware.
The technical scheme also records the sensor data and the vehicle speed and angular velocity during operation, screens and corrects abnormal data to construct a correction data set, and iteratively trains the control model on that set, so that the control model keeps improving during use and the control becomes more accurate.
Drawings
FIG. 1 is a control model structure diagram in an embodiment of the unmanned vehicle control method based on an RBF neural network;
fig. 2 is a schematic simulation operation diagram in the embodiment of the unmanned vehicle control method based on the RBF neural network.
Detailed Description
The technical scheme of the application is further explained in detail through the following specific implementation modes:
example one
The unmanned vehicle control method based on the RBF neural network disclosed in this embodiment comprises the following steps:
S100: acquiring sensor data and preprocessing it.
S200: establishing a control model based on the RBF neural network model.
S300: constructing a training sample set and training the control model.
S400: inputting the preprocessed data into the control model for processing and outputting control parameters.
In this embodiment, the unmanned vehicle comprises a vehicle body whose chassis carries a left and a right drive wheel; the vehicle controls its steering angle through the speed difference between the two drive wheels, so the drive wheels serve for both driving and steering. Laser radar (lidar) is arranged on the vehicle body as the main sensor: five lidars are provided, 45° apart, detecting the obstacle distances directly to the left, to the front-left, directly ahead, to the front-right, and directly to the right of the vehicle. The vehicle body also carries a 32-bit ARM core processor, motors, a GPS positioning module, and other circuit modules and devices.
In this embodiment, the preprocessing in S100 includes:
S101: acquiring data from each sensor, that is, from the five lidar sensors;
S102: filtering the data of each sensor; in this embodiment a Kalman filtering algorithm is used to eliminate the influence of illumination and Gaussian noise;
S103: performing data fusion on the data of each sensor.
In S200, as shown in fig. 1, the control model includes an input layer, a hidden layer, and an output layer, where the number of neurons in the input layer corresponds to the number of sensors; in other words, in the present application the input layer is composed of five neurons, the data detected by the five lidar sensors serve as the signal source nodes, and the transmitted information is the distance and angle of environmental obstacles.
The hidden layer adopts a Gaussian radial basis function as its activation function; it is composed of six neurons and performs a nonlinear transformation on the input lidar data.
The output layer comprises two neurons, which apply a linear weighting to the information output by the hidden-layer neurons and respectively output the control quantities for the vehicle speed and the angular velocity.
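The structure described above (five lidar inputs, six Gaussian hidden neurons, two linear outputs) can be sketched as the following forward pass. The random centers, widths, and weights are placeholders that the training of S300 would replace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes from the embodiment: 5 lidar inputs, 6 hidden RBF neurons,
# 2 outputs (vehicle speed and angular velocity).
N_IN, N_HID, N_OUT = 5, 6, 2

centers = rng.uniform(0.0, 5.0, size=(N_HID, N_IN))   # c_i (placeholder)
widths = np.full(N_HID, 1.5)                          # d_i (placeholder)
weights = rng.normal(0.0, 0.1, size=(N_OUT, N_HID))   # omega_ji (placeholder)

def rbf_forward(x):
    """Forward pass: Gaussian hidden layer, linear output layer."""
    # phi_i = exp(-||x - c_i||^2 / (2 d_i^2))
    dist2 = np.sum((x - centers) ** 2, axis=1)
    phi = np.exp(-dist2 / (2.0 * widths ** 2))
    return weights @ phi        # [speed command, angular-velocity command]

y = rbf_forward(np.array([2.0, 3.0, 4.0, 3.0, 2.0]))  # five lidar ranges
```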
S300 comprises the following steps:
S301: initializing the neural network parameters and configuring a learning rate η and an iteration precision ε;
The initialization process is as follows:
a. Determine the input vector X: X = [x_1, x_2, x_3, x_4, x_5, x_6]^T;
b. Determine the output vector Y and the expected output vector O: Y = [y_1, y_2]^T, O = [o_1, o_2]^T;
c. Initialize the weights from the hidden layer to the output layer: W_i = [ω_i1, ω_i2]^T (i = 1, 2, 3, 4, 5, 6);
d. Initialize the center parameters of each hidden-layer neuron: C_i = [c_i1, c_i2, c_i3, c_i4, c_i5, c_i6];
e. Initialize the width vector: D_k = [d_k1, d_k2, d_k3, d_k4, d_k5, d_k6].
After initialization, calculate the output value of each hidden-layer neuron, then the output of each output-layer neuron;
S302: calculate the root mean square error RMS of the network output; if RMS ≤ ε, end the training, otherwise execute S303;
wherein O_ij is the expected value of the jth output neuron for the ith input sample, and y_ij is the network output value of the jth output neuron for the ith input sample.
S303: the weight parameter, the center parameter, and the width parameter of the neural network model are iteratively trained using a gradient descent method, and then S302 is performed.
In S303, the weight parameter, the center parameter, and the width parameter are adjusted according to the following formulas:
wherein ω_ji(t) is the weight parameter between the jth output-layer neuron and the ith hidden-layer neuron at the tth iteration; c_ik(t) is the center parameter from the ith hidden-layer neuron to the kth input-layer neuron at the tth iteration; d_ik is the width parameter corresponding to the center c_ik(t); and η is the learning factor;
i is an integer from 1 to n_i, where n_i is the number of hidden-layer neurons; j = 1, 2; k is an integer from 1 to n_k, where n_k is the number of input-layer neurons; 0 < η < 1. In this example, n_i = 6, i = 1, 2, 3, 4, 5, 6; n_k = 5, k = 1, 2, 3, 4, 5.
E is the cost function of the RBF neural network; O_ij is the expected value of the jth output-layer neuron when the ith sample is input; y_ij is the output value of the jth output-layer neuron when the ith sample is input.
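Steps S301-S303 can be sketched as a gradient-descent loop over the weight, center, and width parameters. This is a minimal illustration: the learning rate, precision, initialization, and toy data are assumptions, and the gradients follow the standard Gaussian-RBF derivation rather than the patent's unreproduced formula images:

```python
import numpy as np

def train_rbf(X, O, n_hidden=6, eta=0.05, eps=1e-3, max_iter=2000, seed=1):
    """Gradient-descent training of weights, centers, and widths
    (a sketch of S301-S303; eta/eps values are illustrative)."""
    rng = np.random.default_rng(seed)
    n_out = O.shape[1]
    c = X[rng.choice(len(X), n_hidden)]           # centers drawn from samples
    d = np.full(n_hidden, 1.0)                    # widths
    w = rng.normal(0.0, 0.1, size=(n_out, n_hidden))

    def forward(x):
        dist2 = np.sum((x - c) ** 2, axis=1)
        phi = np.exp(-dist2 / (2.0 * d ** 2))
        return phi, w @ phi

    rms = np.inf
    for _ in range(max_iter):
        # S302: stop once the root-mean-square output error <= eps
        errs = [np.sum((o - forward(x)[1]) ** 2) for x, o in zip(X, O)]
        rms = np.sqrt(np.mean(errs))
        if rms <= eps:
            break
        # S303: one gradient step per sample on w, c, d
        for x, o in zip(X, O):
            phi, y = forward(x)
            e = o - y                               # output error, shape (n_out,)
            g = (w.T @ e) * phi                     # back-propagated hidden signal
            dist2 = np.sum((x - c) ** 2, axis=1)
            w += eta * np.outer(e, phi)             # weight update
            c += eta * (g / d ** 2)[:, None] * (x - c)   # center update
            d += eta * g * dist2 / d ** 3           # width update
            d = np.maximum(d, 0.1)                  # keep widths positive
    return w, c, d, rms

# Toy data: 5-D inputs in [0,1], 2-D smooth targets
rng2 = np.random.default_rng(2)
Xs = rng2.uniform(0, 1, (10, 5))
Os = np.stack([Xs.mean(axis=1), Xs[:, 0] - Xs[:, 4]], axis=1)
_, _, _, rms_end = train_rbf(Xs, Os, max_iter=200)
```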
When the technical scheme of this embodiment runs, as shown in fig. 2, the obstacle distances are acquired by the five lidar sensors and input to the control model based on the RBF neural network model; control quantities for speed and angle are generated from the obstacle distance and angle information, realizing intelligent control and simplifying the processing logic and complexity of the sensor data during obstacle avoidance.
Example two
The present embodiment differs from the first embodiment in that the two output neurons respectively output control target amounts for the vehicle speed and the angular velocity, and the method further comprises:
S500: acquiring the current speed and angular velocity of the vehicle;
S600: controlling the two drive wheels based on the vehicle-speed and angular-velocity target amounts from the output layer and the current speed and angular velocity of the vehicle.
EXAMPLE III
This embodiment differs from the second embodiment in that it further includes:
S700: recording the data of each sensor and the corresponding vehicle speed and angular velocity to form a data set;
S800: screening abnormal data in the data set according to a data screening rule;
S900: correcting the abnormal data and constructing a correction data set from the correction results;
S1000: iteratively training the control model with the correction data set.
In the technical scheme of this embodiment, screening abnormal data makes it possible to identify where the current control model is inaccurate; iterative training on the constructed correction data set then improves the training accuracy.
Example four
This embodiment differs from the third embodiment in that it further includes:
S1100: constructing a sensor data inference model based on an LSTM neural network model;
S1200: constructing a training data set and training the sensor data inference model;
S1300: inputting the data of the existing data set and the sample training set into the sensor data inference model, together with the positions and number of the sensors to be predicted;
S1400: predicting, by the sensor data inference model, the detection data corresponding to the other sensor positions according to the data of the existing data set and sample training set;
S1500: constructing data training sets corresponding to the other numbers of sensors according to the prediction results and the existing data set and sample training set;
S1600: training the control model with the data training set obtained in S1500.
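The patent gives no architecture details for the LSTM-based sensor data inference model. The following minimal sketch runs one standard LSTM cell over a lidar sequence and uses a linear readout to predict one additional sensor reading; all sizes, wiring, and parameter values are assumptions for illustration:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_cell_step(x, h, c, params):
    """One step of a standard LSTM cell."""
    W, U, b = params                      # input, recurrent, bias parameters
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    n = h.shape[0]
    i = sigmoid(z[0:n])                   # input gate
    f = sigmoid(z[n:2*n])                 # forget gate
    o = sigmoid(z[2*n:3*n])               # output gate
    g = np.tanh(z[3*n:4*n])               # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 5, 8                        # 5 lidar inputs, hidden size assumed
params = (rng.normal(0, 0.1, (4 * n_hid, n_in)),
          rng.normal(0, 0.1, (4 * n_hid, n_hid)),
          np.zeros(4 * n_hid))
readout = rng.normal(0, 0.1, n_hid)       # maps hidden state to one extra sensor

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.uniform(0, 5, (7, n_in)):    # a 7-step sequence of lidar readings
    h, c = lstm_cell_step(x, h, c, params)
prediction = readout @ h                  # inferred sixth-sensor value
```

Trained on the recorded data set, such a model would supply the predicted readings from which the extended training sets of S1500 are built.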
In the technical scheme of this embodiment, constructing a sensor data inference model makes it possible to infer the data corresponding to other sensor positions from the existing data set and sample training set, and thus to construct training sets and control models for different numbers of sensors from, for example, the training data sets of the five sensors currently available.
EXAMPLE five
This embodiment differs from the third embodiment in that a high-accuracy control model and a low-accuracy control model are provided, which differ in their number of analysis dimensions, that is, in the number of input-layer and hidden-layer neurons; the method further includes:
S1700: controlling driving through the low-accuracy control model while the storage module stores the current driving path data;
S1800: optimizing and analyzing the path data through the high-accuracy control model to generate an optimized driving path;
S1900: when the vehicle is detected to be driving the same path again, calling the optimized driving path and controlling the vehicle according to it.
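Steps S1700-S1900 amount to caching the high-accuracy model's optimized path per route and replaying it on revisits. A minimal sketch; the route identifiers and the two planner callables are placeholders, since the patent does not specify these interfaces:

```python
# Cache keyed by a route identifier; values are the optimized paths
# produced offline by the high-accuracy model (S1800).
path_cache = {}

def drive(route_id, plan_low, optimize_high):
    """Use a cached optimized path when this route was seen before
    (S1900); otherwise drive with the low-accuracy model and store
    the optimized version of the recorded path (S1700-S1800)."""
    if route_id in path_cache:
        return path_cache[route_id]           # replay the optimized path
    raw_path = plan_low(route_id)             # low-accuracy model drives
    path_cache[route_id] = optimize_high(raw_path)
    return raw_path

# Hypothetical planners: the high-accuracy step appends an "opt" marker.
first = drive("dock-A", lambda r: ["p0", "p1"], lambda p: p + ["opt"])
second = drive("dock-A", lambda r: ["p0", "p1"], lambda p: p + ["opt"])
```

The first traversal returns the raw low-accuracy path; the second returns the cached optimized one, matching the two-phase behaviour the embodiment describes.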
In this embodiment, the high-accuracy control model may be stored on a server or set in the vehicle control system. Using the low-accuracy and high-accuracy control models in sequence reduces the processing overhead on the vehicle, while repeatedly optimizing the traveled paths ensures that an optimal path is reached.
The above are merely embodiments of the present invention, and the invention is not limited to the field of these embodiments; well-known specific structures and characteristics of the schemes are not described here in detail. Those skilled in the art know the common technical knowledge and prior art in this field before the application or priority date, can apply conventional experimental means, and can perfect and implement the scheme in light of the teaching provided in this application; some typical known structures or methods should not become obstacles to implementing the invention. Several changes and modifications may be made without departing from the structure of the invention, and these should also be regarded as falling within its protection scope without affecting the effect of the invention or the practicability of the patent. The scope of protection of this application shall be determined by the content of the claims, with the description and embodiments used to interpret the claims.
Claims (7)
1. An unmanned vehicle control method based on an RBF neural network, characterized by comprising the following steps:
S100: acquiring obstacle data and preprocessing the obstacle data;
S200: establishing a control model based on an RBF neural network model;
S300: constructing a training sample set and training the control model;
S400: inputting the preprocessed obstacle data into the control model for processing and outputting control parameters.
2. The unmanned vehicle control method based on an RBF neural network of claim 1, wherein the preprocessing in S100 includes:
S101: acquiring data from each sensor;
S102: filtering the data of each sensor, wherein the filtering adopts a Kalman filtering algorithm;
S103: performing data fusion on the data of each sensor to obtain the obstacle data.
3. The unmanned vehicle control method based on an RBF neural network of claim 2, wherein in S200 the control model includes an input layer, a hidden layer, and an output layer, where the number of neurons in the input layer corresponds to the number of sensors; the hidden layer adopts a Gaussian radial basis function as its activation function; and the output layer includes two neurons that output control target amounts for the vehicle speed and the angular velocity, respectively.
4. The unmanned vehicle control method based on an RBF neural network of claim 3, wherein S300 includes:
S301: initializing the neural network parameters and configuring the learning rate and iteration precision;
S302: calculating the root mean square error of the network output; if it is less than or equal to the iteration precision, ending the training, otherwise executing S303;
S303: iteratively training the weight, center, and width parameters of the neural network model using a gradient descent method, then executing S302.
5. The unmanned vehicle control method based on RBF neural network of claim 4, wherein: in S303, the weight parameter, the center parameter, and the width parameter are adjusted according to the following formulas:
wherein ω_ji(t) is the weight parameter between the jth output-layer neuron and the ith hidden-layer neuron at the tth iteration; c_ik(t) is the center parameter from the ith hidden-layer neuron to the kth input-layer neuron at the tth iteration; d_ik is the width parameter corresponding to the center c_ik(t); and η is the learning factor;
i is an integer from 1 to n_i, where n_i is the number of hidden-layer neurons; j = 1, 2; k is an integer from 1 to n_k, where n_k is the number of input-layer neurons; 0 < η < 1.
6. The unmanned vehicle control method based on an RBF neural network of claim 5, wherein the unmanned vehicle includes two drive wheels and controls its steering angle through the speed difference between the two drive wheels, the method further comprising:
S500: acquiring the current speed and angular velocity of the vehicle;
S600: controlling the two drive wheels based on the vehicle-speed and angular-velocity target amounts from the output layer and the current vehicle speed and angular velocity.
7. The unmanned vehicle control method based on an RBF neural network of claim 6, further comprising:
S700: recording the data of each sensor and the corresponding vehicle speed and angular velocity to form a data set;
S800: screening abnormal data in the data set according to a data screening rule;
S900: correcting the abnormal data and constructing a correction data set from the correction results;
S1000: iteratively training the control model with the correction data set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011618750.7A CN112651456B (en) | 2020-12-31 | 2020-12-31 | Unmanned vehicle control method based on RBF neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011618750.7A CN112651456B (en) | 2020-12-31 | 2020-12-31 | Unmanned vehicle control method based on RBF neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112651456A true CN112651456A (en) | 2021-04-13 |
CN112651456B CN112651456B (en) | 2023-08-08 |
Family
ID=75366629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011618750.7A Active CN112651456B (en) | 2020-12-31 | 2020-12-31 | Unmanned vehicle control method based on RBF neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112651456B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113219130A (en) * | 2021-04-16 | 2021-08-06 | 中国农业大学 | Calibration method and test platform of multi-parameter gas sensor |
CN114894289A (en) * | 2022-06-20 | 2022-08-12 | 江苏省计量科学研究院(江苏省能源计量数据中心) | Large-mass comparator based on data fusion algorithm |
WO2024044032A1 (en) * | 2022-08-26 | 2024-02-29 | Zoox, Inc. | Interpretable kalman filter comprising neural network component(s) for autonomous vehicles |
Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5761626A (en) * | 1995-12-26 | 1998-06-02 | Ford Global Technologies, Inc. | System and method for distinguishing and characterizing motor vehicles for control of automatic drivers |
US20070203616A1 (en) * | 2004-06-25 | 2007-08-30 | Eric Borrmann | Motor vehicle control device provided with a neuronal network |
CN103454919A (en) * | 2013-08-19 | 2013-12-18 | 江苏科技大学 | Motion control system and method of mobile robot in intelligent space |
CN106054893A (en) * | 2016-06-30 | 2016-10-26 | 江汉大学 | Intelligent vehicle control system and method |
CN107121924A (en) * | 2017-03-03 | 2017-09-01 | 中国农业大学 | A kind of Visual Environment regulator control system and method based on RBF neural |
CN108427417A (en) * | 2018-03-30 | 2018-08-21 | 北京图森未来科技有限公司 | Automatic driving control system and method, computer server and automatic driving vehicle |
CN108594788A (en) * | 2018-03-27 | 2018-09-28 | 西北工业大学 | A kind of aircraft actuator fault detection and diagnosis method based on depth random forests algorithm |
CN208393354U (en) * | 2018-06-22 | 2019-01-18 | 南京航空航天大学 | Line operating condition automatic Pilot steering system is moved based on BP neural network and safe distance |
CN109447164A (en) * | 2018-11-01 | 2019-03-08 | 厦门大学 | A kind of motor behavior method for classifying modes, system and device |
CN110509916A (en) * | 2019-08-30 | 2019-11-29 | 的卢技术有限公司 | A kind of body gesture antihunt means and system based on deep neural network |
CN110716498A (en) * | 2019-10-30 | 2020-01-21 | 北京航天发射技术研究所 | Sensor control method and device for vehicle-mounted erecting frame |
CN110782033A (en) * | 2019-10-28 | 2020-02-11 | 玲睿(上海)医疗科技有限公司 | AGV positioning method based on fuzzy neural network |
CN110782481A (en) * | 2019-10-18 | 2020-02-11 | 华中光电技术研究所(中国船舶重工集团有限公司第七一七研究所) | Unmanned ship intelligent decision method and system |
US20200117205A1 (en) * | 2018-10-15 | 2020-04-16 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling vehicle |
CN111542836A (en) * | 2017-10-04 | 2020-08-14 | 华为技术有限公司 | Method for selecting action for object by using neural network |
CN111624522A (en) * | 2020-05-29 | 2020-09-04 | 上海海事大学 | Ant colony optimization-based RBF neural network control transformer fault diagnosis method |
CN111753371A (en) * | 2020-06-04 | 2020-10-09 | 纵目科技(上海)股份有限公司 | Training method, system, terminal and storage medium for vehicle body control network model |
CN111886170A (en) * | 2018-03-28 | 2020-11-03 | 日立汽车系统株式会社 | Vehicle control device |
CN111967087A (en) * | 2020-07-16 | 2020-11-20 | 山东派蒙机电技术有限公司 | Neural network-based online vehicle decision control model establishing and evaluating method |
CN112084709A (en) * | 2020-09-04 | 2020-12-15 | 西安交通大学 | Large-scale generator insulation state evaluation method based on genetic algorithm and radial basis function neural network |
CN112109705A (en) * | 2020-09-23 | 2020-12-22 | 同济大学 | Collision avoidance optimization control system and method for extended-range distributed driving electric vehicle |
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5761626A (en) * | 1995-12-26 | 1998-06-02 | Ford Global Technologies, Inc. | System and method for distinguishing and characterizing motor vehicles for control of automatic drivers |
US20070203616A1 (en) * | 2004-06-25 | 2007-08-30 | Eric Borrmann | Motor vehicle control device provided with a neuronal network |
CN103454919A (en) * | 2013-08-19 | 2013-12-18 | 江苏科技大学 | Motion control system and method of mobile robot in intelligent space |
CN106054893A (en) * | 2016-06-30 | 2016-10-26 | 江汉大学 | Intelligent vehicle control system and method |
CN107121924A (en) * | 2017-03-03 | 2017-09-01 | China Agricultural University | Visual environment regulation system and method based on RBF neural network |
CN111542836A (en) * | 2017-10-04 | 2020-08-14 | Huawei Technologies Co., Ltd. | Method for selecting an action for an object using a neural network |
CN108594788A (en) * | 2018-03-27 | 2018-09-28 | Northwestern Polytechnical University | Aircraft actuator fault detection and diagnosis method based on a deep random forest algorithm |
CN111886170A (en) * | 2018-03-28 | 2020-11-03 | Hitachi Automotive Systems, Ltd. | Vehicle control device |
CN108427417A (en) * | 2018-03-30 | 2018-08-21 | Beijing Tusen Weilai Technology Co., Ltd. | Automatic driving control system and method, computer server and automatic driving vehicle |
CN208393354U (en) * | 2018-06-22 | 2019-01-18 | Nanjing University of Aeronautics and Astronautics | Lane-change automatic driving steering system based on BP neural network and safe distance |
US20200117205A1 (en) * | 2018-10-15 | 2020-04-16 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling vehicle |
CN109447164A (en) * | 2018-11-01 | 2019-03-08 | Xiamen University | Motion behavior pattern classification method, system and device |
CN110509916A (en) * | 2019-08-30 | 2019-11-29 | Dilu Technology Co., Ltd. | Vehicle body attitude stabilization method and system based on a deep neural network |
CN110782481A (en) * | 2019-10-18 | 2020-02-11 | Huazhong Institute of Electro-Optics (No. 717 Research Institute of China Shipbuilding Industry Corporation) | Unmanned ship intelligent decision method and system |
CN110782033A (en) * | 2019-10-28 | 2020-02-11 | Lingrui (Shanghai) Medical Technology Co., Ltd. | AGV positioning method based on fuzzy neural network |
CN110716498A (en) * | 2019-10-30 | 2020-01-21 | Beijing Institute of Space Launch Technology | Sensor control method and device for vehicle-mounted erecting frame |
CN111624522A (en) * | 2020-05-29 | 2020-09-04 | Shanghai Maritime University | Transformer fault diagnosis method based on ant-colony-optimized RBF neural network control |
CN111753371A (en) * | 2020-06-04 | 2020-10-09 | Zongmu Technology (Shanghai) Co., Ltd. | Training method, system, terminal and storage medium for a vehicle body control network model |
CN111967087A (en) * | 2020-07-16 | 2020-11-20 | Shandong Paimeng Electromechanical Technology Co., Ltd. | Neural network-based method for establishing and evaluating an online vehicle decision control model |
CN112084709A (en) * | 2020-09-04 | 2020-12-15 | Xi'an Jiaotong University | Large-scale generator insulation state evaluation method based on genetic algorithm and radial basis function neural network |
CN112109705A (en) * | 2020-09-23 | 2020-12-22 | Tongji University | Collision avoidance optimization control system and method for an extended-range distributed-drive electric vehicle |
Non-Patent Citations (4)
Title |
---|
JIAJIA CHEN et al.: "Motion Planning for Autonomous Vehicle Based on Radial Basis Function Neural Network in Unstructured Environment", Sensors * |
YAN Xiuhong et al.: "Combined grey neural network and ensemble prediction based on data preprocessing", CAAI Transactions on Intelligent Systems * |
ZHOU Xiaoqiang: "Research on air-ground cooperative formation control of unmanned aerial vehicles and unmanned ground vehicles", China Masters' Theses Full-text Database, Engineering Science and Technology II * |
LI Chaoqun: "Research on automatic steering technology of intelligent vehicles based on deep learning" * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113219130A (en) * | 2021-04-16 | 2021-08-06 | China Agricultural University | Calibration method and test platform for a multi-parameter gas sensor |
CN113219130B (en) * | 2021-04-16 | 2022-08-02 | China Agricultural University | Calibration method and test platform for a multi-parameter gas sensor |
CN114894289A (en) * | 2022-06-20 | 2022-08-12 | Jiangsu Institute of Metrology (Jiangsu Energy Metering Data Center) | Large-mass comparator based on data fusion algorithm |
CN114894289B (en) * | 2022-06-20 | 2024-02-02 | Jiangsu Institute of Metrology (Jiangsu Energy Metering Data Center) | Large-mass comparator based on data fusion algorithm |
WO2024044032A1 (en) * | 2022-08-26 | 2024-02-29 | Zoox, Inc. | Interpretable kalman filter comprising neural network component(s) for autonomous vehicles |
Also Published As
Publication number | Publication date |
---|---|
CN112651456B (en) | 2023-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11958554B2 (en) | Steering control for vehicles | |
CN112651456B (en) | Unmanned vehicle control method based on RBF neural network | |
Min et al. | RNN-based path prediction of obstacle vehicles with deep ensemble | |
CN109866752A (en) | Dual-mode parallel vehicle trajectory-tracking driving system and method based on predictive control | |
CN110531750A (en) | PID-embedded LQR for autonomous driving vehicles | |
US20200047752A1 (en) | Vehicle lateral motion control | |
CN111948938B (en) | Slack optimization model for planning open space trajectories for autonomous vehicles | |
CN109491369B (en) | Method, device, equipment and medium for evaluating performance of vehicle actual control unit | |
US20120072075A1 (en) | Steering control device of autonomous vehicle, autonomous vehicle having the same and steering control method of autonomous vehicle | |
CN109416539A (en) | Method and system for steering control of an autonomous vehicle using a proportional, integral and derivative (PID) controller | |
WO2023034321A1 (en) | Calibrating multiple inertial measurement units | |
US11453409B2 (en) | Extended model reference adaptive control algorithm for the vehicle actuation time-latency | |
CN111142091A (en) | Automatic driving system laser radar online calibration method fusing vehicle-mounted information | |
KR20190123736A (en) | Device for controlling the track of the vehicle | |
US20230365145A1 (en) | Method, system and computer program product for calibrating and validating a driver assistance system (adas) and/or an automated driving system (ads) | |
KR20210133904A (en) | Dynamic model evaluation package for autonomous driving vehicles | |
Guidolini et al. | Neural-based model predictive control for tackling steering delays of autonomous cars | |
CN111257853B (en) | Automatic driving system laser radar online calibration method based on IMU pre-integration | |
CN116337045A (en) | High-speed map-building navigation method based on Karto and TEB | |
CN115436917A (en) | Synergistic estimation and correction of LIDAR boresight alignment error and host vehicle positioning error | |
CN116736855A (en) | Method and system for assessing autonomous driving planning and control | |
Cao et al. | End-to-end adaptive cruise control based on timing network | |
Fazekas et al. | Model based vehicle localization via an iterative parameter estimation | |
US20220300851A1 (en) | System and method for training a multi-task model | |
CN115731531A (en) | Object trajectory prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||