CN110991377B - Front target identification method of automobile safety assistance system based on monocular vision neural network - Google Patents


Info

Publication number: CN110991377B
Application number: CN201911263299.9A
Authority: CN (China)
Prior art keywords: region, neural network, basis function, layer, radial basis
Legal status: Active (granted)
Other versions: CN110991377A (Chinese-language publication)
Inventors: 陈学文, 裴月莹, 陈华清
Original and current assignee: Liaoning University of Technology
Application filed by Liaoning University of Technology; priority to CN201911263299.9A
Publication of CN110991377A; application granted; publication of CN110991377B

Classifications

    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06F18/23213: Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/24: Classification techniques
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/267: Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V20/584: Recognition of vehicle lights or traffic lights
    • G06V2201/08: Detecting or categorising vehicles


Abstract

The invention discloses a method for identifying a front target of an automobile safety assistance system based on a radial basis function neural network, which comprises the following steps: acquiring road condition images in front of the vehicle, and carrying out segmentation preprocessing on the road condition images; extracting edges from the preprocessed road condition images, and searching them to obtain a region of interest; extracting features of the region of interest to obtain the edge features and region features corresponding to the region of interest; constructing a radial basis function neural network model by taking the edge features and the region features as the input layer vector, and analysing the input layer vector features in the neural network to obtain output quantities related to the target; obtaining the corresponding vehicle target according to the output quantities, and outputting the vehicle target as the recognition result.

Description

Front target identification method of automobile safety assistance system based on monocular vision neural network
Technical Field
The invention relates to the field of automobile safety-assisted driving control, and in particular to a front target identification method for an automobile safety assistance system based on a monocular vision neural network.
Background
An automobile safety-assisted driving system (ADAS) uses radar or machine vision to rapidly and accurately extract information about vehicles or obstacles ahead, and can prompt the driver to avoid a collision risk or automatically control the vehicle, realizing early warning or collision avoidance. This function is not only applicable to expressway driving conditions; for other driving environments, and urban roads in particular, where traffic accidents between non-motor vehicles (mainly electric bicycles) and motor vehicles occur frequently and account for a large proportion, it is particularly important for an ADAS to identify targets ahead such as non-motor vehicles, pedestrians and obstacles, and especially to identify electric bicycles.
Currently, there are few studies that classify motor vehicles and electric bicycles using machine vision detection and identification. Most methods target the detection and identification of vehicles for use in driver-assistance systems, for example by using the linear geometric feature information or the symmetry of the vehicle, or computer vision methods employing special hardware such as colour CCDs and binocular CCDs. In addition, there are optical-flow-based methods, template matching, support vector machines, neural network training, multi-sensor information fusion, vehicle detection and recognition based on the AdaBoost method with a support vector machine classifier, and vehicle detection or recognition based on deep learning and fast region-based convolutional neural networks. The end result of such vehicle detection research is a determined region of interest for the vehicle, but false detections remain possible within that region; if the detection result in the region of interest can be further confirmed, the accuracy of vehicle target identification can be greatly improved and the false detection rate reduced, increasing the reliability of system identification.
Disclosure of Invention
The invention designs and develops a front target identification method for an automobile safety assistance system based on a radial basis function neural network. It performs edge extraction and search on the preprocessed road condition image to obtain a region of interest as a possible vehicle region, and constructs a radial basis function neural network vehicle identifier from statistics of the edge features and region feature parameters of the vehicle, realizing classification of motor vehicles and electric bicycles in the detection region.
The technical scheme provided by the invention is as follows:
a method for identifying a front target of an automobile safety assistance system based on a radial basis function neural network, comprising the following steps:
acquiring road condition images in front of the vehicle, and carrying out segmentation preprocessing on the road condition images;
extracting edges from the preprocessed road condition images, and searching them to obtain a region of interest;
extracting features of the region of interest to obtain the edge features and region features corresponding to the region of interest;
constructing a radial basis function neural network model by taking the edge features and the region features as the input layer vector, and analysing the input layer vector features in the neural network to obtain output quantities related to the target;
obtaining a corresponding vehicle target according to the output quantities, and outputting the vehicle target as the recognition result;
wherein the region of interest includes electric bicycles and motor vehicles.
Preferably, the edge features and region features corresponding to the region of interest include: edge feature parameters formed by the sub-coefficients of a discrete cosine transform and independent invariant moment parameters, together with region description feature parameters.
Preferably, the discrete cosine transform sub-coefficient calculation formula is:

C(k) = |F(k)| / F(1);

where C(k) is a discrete cosine transform sub-coefficient; k is the index of the discrete sub-coefficient, k = 1, 2, …; F(k) = X(k) + j·Y(k) is the discrete cosine transform of the contour sequence,

F(k) = Σ_{m=1}^{N} f(m)·cos(π(2m − 1)k / (2N));

j is the imaginary unit of the complex plane; m is the feature point variable of the closed edge curve obtained by edge extraction after image segmentation, 1 ≤ m ≤ N, and N is the number of feature points of the closed edge curve; f(m) = x(m) + j·y(m) is the one-dimensional complex sequence.
Preferably, the independent invariant moment parameter calculation formula is:

μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y);

where (x̄, ȳ) = (m₁₀/m₀₀, m₀₁/m₀₀) are the coordinates of the centre point of the region; μ_pq is the central moment of the region of the binarized image; m₀₀ is the zero-order geometric moment of the region, m₀₁ and m₁₀ are its first-order geometric moments, and m_pq = Σ_x Σ_y x^p y^q f(x, y) is the geometric moment of order p+q of the region; p is the row order and q is the column order of the central moment of the binary image.
Preferably, the target recognition feature parameters include: region eccentricity, the ratio of the short axis to the long axis of the region, region area, region perimeter, and region compactness factor.
Preferably, the radial basis function neural network model is a three-layer neural network model:
the first layer is an input layer, and the feature vector is input into the network;
the second layer is a hidden layer fully connected with the input layer; the hidden layer nodes use a Gaussian radial basis function as the transfer function, with the calculation formula:

φ_i(x_p) = exp(−‖x_p − c_i‖² / (2σ²));

where ‖x_p − c_i‖ is the Euclidean norm, c_i is the centre of the Gaussian function, and σ is the variance of the Gaussian function;

the third layer is an output layer; 2 output quantities are obtained by calculating the weights between the hidden layer and the output layer, and the vehicle target is identified; from the structure of the radial basis function network, the output of the network is:

y_j = Σ_{i=1}^{h} ω_ij · exp(−‖x_p − c_i‖² / (2σ_i²)), j = 1, 2, …, n;

where x_p is the p-th input sample, p = 1, 2, …, P, with P the total number of samples; c_i is the centre of a hidden layer node of the network; ω_ij is the connection weight from the hidden layer to the output layer, i = 1, 2, …, h, with h the number of hidden layer nodes; y_j is the actual output of the j-th output node of the network for the input sample; and d_j is the expected output value of the sample.
Preferably, the center of the radial basis function neural network is obtained by a K-means clustering algorithm, and the specific process is as follows:
step one, randomly selecting h training samples as the cluster centres c_i, i = 1, 2, …, h;
step two, calculating the Euclidean distance between each training sample and the cluster centres, and assigning each training sample x_p (p = 1, 2, …, P) to the corresponding cluster set ψ_p of input samples according to the Euclidean distance;
step three, recalculating the mean of the training samples in each cluster set ψ_p to obtain a new cluster centre c_i′;
step four, repeating step two and step three until the change in the new cluster centre c_i′ is less than a given threshold; the obtained c_i′ is the final basis function centre of the radial basis function neural network.
Preferably, the basis function variance solving formula is:

σ_i = c_max / √(2h), i = 1, 2, …, h;

where σ_i is the basis function variance and c_max is the maximum distance between the selected centres.
Preferably, the connection weights between the hidden layer and the output layer are calculated by the least square method, with the calculation formula:

ω = exp((h / c_max²) · ‖x_p − c_i‖²), i = 1, 2, …, h; p = 1, 2, …, P.
The beneficial effects of the invention are as follows:
The invention designs and develops a front target identification method for an automobile safety assistance system based on a radial basis function neural network. It performs edge extraction and search on the preprocessed road condition image to obtain a region of interest as a possible vehicle region, and constructs a radial basis function neural network vehicle identifier from statistics of the edge features and region feature parameters of the vehicle, realizing classification of motor vehicles and electric bicycles in the detection region. This greatly improves the accuracy of vehicle target identification and reduces the false detection rate, increasing the reliability of system identification.
Drawings
Fig. 1 is a road condition image of the front of a vehicle according to the present invention.
Fig. 2 is a schematic diagram of a vehicle searching region of interest according to the present invention.
Fig. 3 is a block diagram of an RBF neural network according to the present invention.
FIG. 4 is a schematic diagram of an error performance curve of an RBF neural network test according to the present invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings, so that those skilled in the art can practice it by referring to the description.
As shown in fig. 1, the method for identifying a front target of an automobile safety assistance system based on a radial basis function neural network provided by the invention comprises the following steps:
acquiring road condition images in front of the vehicle, and carrying out segmentation preprocessing on the road condition images;
extracting edges from the preprocessed road condition images, and searching them to obtain a region of interest;
as shown in fig. 2, extracting vehicle features from the region of interest to obtain the edge features and region features corresponding to the region of interest;
constructing a radial basis function neural network model by taking the edge features and the region features as the input layer vector, and analysing the input layer vector features in the neural network to obtain output quantities related to the target;
and obtaining the corresponding vehicle target according to the output quantities, and outputting the vehicle target as the recognition result.
In order to accurately identify a vehicle target, vehicle-specific features must be extracted from the detected image area. Feature extraction is the process of extracting and selecting information representative of the target from its original data. At present, methods for identifying the shape of a target fall mainly into two types: recognition based on the shape of the target's edge, i.e. edge features, and recognition based on the shape of the area the target covers, i.e. region features.
Feature extraction of the target must satisfy the following basic requirements:
(1) The extracted and selected features are insensitive to the variable parameters of the target;
(2) The characteristics are stable and easy to extract;
(3) The dimension of the feature quantity is obviously smaller than the original data of the target;
(4) The feature quantities have the smallest correlation;
(5) To improve classification accuracy, features are added that easily separate confusing categories.
Based on the above feature extraction requirements, the invention extracts hybrid features of the vehicle (including the electric bicycle) that combine edge and region features: edge feature parameters formed by 8 discrete cosine transform sub-coefficients and 6 independent invariant moment parameters, together with 5 region description feature parameters, giving 19 feature parameters in total.
Preferably, the discrete cosine transform sub-coefficients are computed as follows:
The target image is segmented, preprocessed and edge-extracted to obtain contour data f(x_m, y_m). The closed edge curve formed by the N points is placed on the complex plane to form a one-dimensional complex sequence

f(m) = x(m) + j·y(m); 1 ≤ m ≤ N,

where f(m) is the one-dimensional complex sequence. The discrete cosine transform of this sequence is:

F(k) = Σ_{m=1}^{N} f(m)·cos(π(2m − 1)k / (2N)), k = 1, 2, …;

from which the discrete cosine transform sub-coefficients are calculated:

C(k) = |F(k)| / F(1);

where C(k) is a discrete cosine transform sub-coefficient; k is the index of the discrete sub-coefficient; j is the imaginary unit of the complex plane; m is the feature point variable of the closed edge curve obtained by edge extraction after image segmentation, and N is the number of feature points of the closed edge curve.
Discrete cosine transform coefficients are invariant to translation, rotation and scaling of the target and insensitive to the starting point of the contour data. The low-frequency part of the cosine transform coefficients reflects the overall outline of the image, while the high-frequency part only represents contour detail; the discrete cosine transform requires neither complex-valued operations nor taking the modulus of the data, and achieves a high recognition rate with few feature quantities.
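As a concrete illustration of the sub-coefficient formula above, the following Python sketch computes C(k) for a closed contour. The function name, the DCT-II convention, and the normalization by |F(1)| (the patent writes F(1), which is complex-valued here) are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def dct_subcoefficients(x, y, num_coeffs=8):
    """Normalized DCT sub-coefficients C(k) of a closed contour.

    x, y: coordinates of the N feature points of the closed edge curve.
    The points form the complex sequence f(m) = x(m) + j*y(m); a DCT-II
    is applied to that sequence and C(k) = |F(k)| / |F(1)| is returned
    for k = 1..num_coeffs.
    """
    f = np.asarray(x, dtype=float) + 1j * np.asarray(y, dtype=float)
    N = len(f)
    m = np.arange(N)  # 0-based index, so (2m + 1) matches (2m - 1) for 1-based m
    F = np.array([np.sum(f * np.cos(np.pi * (2 * m + 1) * k / (2 * N)))
                  for k in range(1, num_coeffs + 1)])
    return np.abs(F) / np.abs(F[0])   # F[0] holds F(1)
```

By construction C(1) = 1, so the informative sub-coefficients are the subsequent entries.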
If a digital image is piecewise continuous and nonzero at only a finite number of points in the XY plane, it can be shown that its moments of every order exist.
For a binary image, since the pixel values are only 0 and 1, assuming the pixel value of the target region is 1 and that of the background is 0, the moment of order p+q of the binary image is:

m_pq = Σ_x Σ_y x^p y^q f(x, y);

The central moment of this region is:

μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y);

where (x̄, ȳ) = (m₁₀/m₀₀, m₀₁/m₀₀) are the coordinates of the centre point of the region; μ_pq is the central moment of the region of the binarized image; m₀₀ is the zero-order geometric moment of the region, m₀₁ and m₁₀ are its first-order geometric moments, and m_pq is the geometric moment of order p+q of the region; p is the row order and q is the column order of the central moment of the binary image.
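The geometric and central moments above can be computed directly from a binary image; the helper below is an illustrative sketch (its name and NumPy-based layout are not from the patent):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary image (target pixels = 1).

    Geometric moments: m_pq = sum over target pixels of x^p * y^q;
    centre of the region: (x0, y0) = (m10/m00, m01/m00);
    central moment: mu_pq = sum of (x - x0)^p * (y - y0)^q.
    """
    ys, xs = np.nonzero(np.asarray(img))   # rows -> y, columns -> x
    x0, y0 = xs.mean(), ys.mean()          # equals m10/m00 and m01/m00
    return float(np.sum((xs - x0) ** p * (ys - y0) ** q))
```

For a centred symmetric region, mu_00 equals the region area and the first-order central moments vanish, which is a quick sanity check.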
The target recognition feature parameters include: the region eccentricity, i.e. the eccentricity of the ellipse having the same second moments as the region, defined as the ratio of the distance between the foci of the ellipse to the length of its major axis; the ratio of the short axis to the long axis of the region; the region area S; the region perimeter L; and the region compactness factor 4πS/L².
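A minimal sketch of assembling the five region description parameters, assuming the area, perimeter and equivalent-ellipse axis lengths have already been measured on the segmented region (the function name and inputs are hypothetical):

```python
import numpy as np

def region_features(area, perimeter, major_axis, minor_axis):
    """The five region description parameters used as classifier inputs.

    area (S), perimeter (L) and the axis lengths of the ellipse with the
    same second moments as the region are assumed to be measured on the
    segmented binary region beforehand.
    """
    eccentricity = np.sqrt(1.0 - (minor_axis / major_axis) ** 2)
    axis_ratio = minor_axis / major_axis          # short axis / long axis
    compactness = 4.0 * np.pi * area / perimeter ** 2
    return eccentricity, axis_ratio, area, perimeter, compactness
```

A circle gives compactness 1 and eccentricity 0, the extreme of the compactness factor, which is why the factor helps separate compact vehicle regions from elongated ones.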
The radial basis function neural network is a neural network learning method that expands or maps the input vector into a high-dimensional space. It has good generalization ability, avoids the complex computation of methods such as the BP algorithm, and enables fast learning of the neural network.
As shown in FIG. 3, the invention adopts an RBF neural network with self-organized centre selection, and takes as the input vector of the network the edge feature parameters formed by 8 discrete cosine transform coefficients and 6 independent invariant moment parameters, together with the 5 region description feature parameters. The required output is the discrimination result of whether the region is a vehicle. The designed RBF neural network therefore has 19 input neurons and 2 outputs.
The radial basis function neural network model is a three-layer neural network model: the first layer is an input layer, and feature vectors are input into the network. The second layer is a hidden layer, which is completely connected with the input layer (weight=1), and is equivalent to performing one-time conversion on the input mode, and converting the low-dimensional mode input data into a high-dimensional space so as to facilitate classification and identification of the output layer. Here the hidden layer node selects a gaussian radial basis function as the transfer function. The third layer is an output layer, and 2 output quantities are obtained by calculating weights between the hidden layer and the output layer, so that a vehicle target is identified.
The RBF neural network learning method with self-organized centre selection mainly comprises two stages. Stage one is self-organized learning, a teacher-free (unsupervised) process that solves for the hidden layer basis functions; stage two is teacher-supervised learning, which solves for the weights from the hidden layer to the output layer.
The first layer is an input layer, and the feature vector is input into the network;
the second layer is a hidden layer fully connected with the input layer; the hidden layer nodes use a Gaussian radial basis function as the transfer function, with the calculation formula:

φ_i(x_p) = exp(−‖x_p − c_i‖² / (2σ²));

where ‖x_p − c_i‖ is the Euclidean norm, c_i is the centre of the Gaussian function, and σ is the variance of the Gaussian function;

the third layer is an output layer; 2 output quantities are obtained by calculating the weights between the hidden layer and the output layer, and the vehicle target is identified; from the structure of the radial basis function network, the output of the network is:

y_j = Σ_{i=1}^{h} ω_ij · exp(−‖x_p − c_i‖² / (2σ_i²)), j = 1, 2, …, n;

where x_p is the p-th input sample, p = 1, 2, …, P, with P the total number of samples; c_i is the centre of a hidden layer node of the network; ω_ij is the connection weight from the hidden layer to the output layer, i = 1, 2, …, h, with h the number of hidden layer nodes; y_j is the actual output of the j-th output node of the network for the input sample; and d_j is the expected output value of the sample.
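A forward pass through the three-layer structure described above can be sketched as follows; the array shapes and names are illustrative, not from the patent:

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights):
    """Forward pass of the three-layer RBF network.

    x: (d,) input feature vector (19 mixed features in the patent);
    centers: (h, d) hidden-node centres c_i;
    sigmas: (h,) Gaussian spreads sigma_i;
    weights: (h, n) hidden-to-output connection weights omega_ij.
    Hidden node i outputs exp(-||x - c_i||^2 / (2 sigma_i^2)); output
    node j sums the weighted hidden outputs.
    """
    d2 = np.sum((np.asarray(centers) - np.asarray(x)) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * np.asarray(sigmas) ** 2))  # Gaussian transfer
    return phi @ np.asarray(weights)                     # (n,) outputs
```

When the input coincides with a centre, that node's activation is exactly 1, which makes the mapping easy to check by hand.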
The center of the radial basis function neural network is obtained by a K-means clustering algorithm, and the specific process is as follows:
step one, randomly selecting h training samples as the cluster centres c_i, i = 1, 2, …, h;
step two, calculating the Euclidean distance between each training sample and the cluster centres, and assigning each training sample x_p (p = 1, 2, …, P) to the corresponding cluster set ψ_p of input samples according to the Euclidean distance;
step three, recalculating the mean of the training samples in each cluster set ψ_p to obtain a new cluster centre c_i′;
step four, repeating step two and step three until the change in the new cluster centre c_i′ is less than a given threshold; the obtained c_i′ is the final basis function centre of the radial basis function neural network.
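The four K-means steps above can be sketched as follows; the initialisation seed, tolerance and iteration cap are illustrative choices, not values from the patent:

```python
import numpy as np

def kmeans_centers(samples, h, tol=1e-4, max_iter=100, seed=0):
    """K-means selection of the h RBF basis-function centres.

    Mirrors the four steps: pick h samples as initial centres, assign
    each sample to the nearest centre by Euclidean distance, recompute
    each centre as the mean of its cluster, and stop once the centre
    shift drops below tol.
    """
    samples = np.asarray(samples, dtype=float)
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), size=h, replace=False)]
    for _ in range(max_iter):
        dist = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)               # nearest-centre assignment
        new_centers = np.array([samples[labels == i].mean(axis=0)
                                if np.any(labels == i) else centers[i]
                                for i in range(h)])
        if np.linalg.norm(new_centers - centers) < tol:
            return new_centers
        centers = new_centers
    return centers
```

On well-separated data the returned centres converge to the cluster means.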
The basis function variance solving formula is:

σ_i = c_max / √(2h), i = 1, 2, …, h;

where σ_i is the basis function variance and c_max is the maximum distance between the selected centres.
The connection weights between the hidden layer and the output layer are calculated by the least square method, with the calculation formula:

ω = exp((h / c_max²) · ‖x_p − c_i‖²), i = 1, 2, …, h; p = 1, 2, …, P.
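Stage two of the training, the spread formula σ_i = c_max/√(2h) followed by the least-squares solve for the hidden-to-output weights, can be sketched as below. The weights here are obtained with a standard pseudo-inverse least-squares fit of the hidden-layer output matrix, which is one common reading of the least-squares step rather than the patent's exact expression:

```python
import numpy as np

def train_output_layer(samples, centers, targets):
    """Spread sigma and hidden-to-output weights of the RBF network.

    sigma = c_max / sqrt(2h), with c_max the maximum distance between
    the chosen centres; the weights minimise ||Phi w - d||^2 by least
    squares, where Phi holds the hidden-layer outputs for all samples.
    """
    samples = np.asarray(samples, dtype=float)
    centers = np.asarray(centers, dtype=float)
    h = len(centers)
    cdist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    sigma = cdist.max() / np.sqrt(2.0 * h)
    d2 = np.sum((samples[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))       # (P, h) hidden outputs
    weights, *_ = np.linalg.lstsq(phi, np.asarray(targets), rcond=None)
    return sigma, weights
```

When the training samples themselves serve as centres, Phi is square and the fit reproduces the expected outputs exactly.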
as shown in fig. 4, in order to verify the effectiveness of the method for identifying the front mesh of the ADAS system based on the RBF neural network, 850 vehicle samples and 850 electric bicycle samples are respectively established, the RBF neural network is trained, 60% of positive and negative sample images are randomly selected to complete the test, and the identification accuracy of the RBF neural network can reach more than 94%. The error performance curve of the RBF neural network can be used for finding that the designed network meets the requirement of training errors.
Although embodiments of the present invention have been disclosed above, the invention is not limited to the details and embodiments shown and described; it is well suited to various fields of use, and further modifications may readily be made by those skilled in the art without departing from the general concept defined by the claims and their equivalents. The invention is therefore not limited to the specific details and examples shown and described herein.

Claims (7)

1. A method for identifying a front target of an automobile safety assistance system based on a radial basis function neural network, characterized by comprising the following steps:
acquiring road condition images in front of the vehicle, and carrying out segmentation preprocessing on the road condition images;
extracting edges from the preprocessed road condition images, and searching them to obtain a region of interest;
extracting features of the region of interest to obtain the edge features and region features corresponding to the region of interest;
constructing a radial basis function neural network model by taking the edge features and the region features as the input layer vector, and analysing the input layer vector features in the neural network to obtain output quantities related to the target;
obtaining a corresponding vehicle target according to the output quantities, and outputting the vehicle target as the recognition result;
the region of interest includes electric bicycles and motor vehicles;
the vehicle edge features and the region features corresponding to the region of interest include: edge feature parameters formed by the sub-coefficients of a discrete cosine transform and independent invariant moment parameters, together with region description feature parameters;
the discrete cosine transform sub-coefficient calculation formula is:
C(k) = |F(k)| / F(1);
where C(k) is a discrete cosine transform sub-coefficient; k is the index of the discrete sub-coefficient, k = 1, 2, …; F(k) = X(k) + j·Y(k) is the discrete cosine transform of the contour sequence, F(k) = Σ_{m=1}^{N} f(m)·cos(π(2m − 1)k / (2N)); j is the imaginary unit of the complex plane; m is the feature point variable of the closed edge curve obtained by edge extraction after image segmentation, 1 ≤ m ≤ N, and N is the number of feature points of the closed edge curve; f(m) = x(m) + j·y(m) is the one-dimensional complex sequence.
2. The method for identifying the front target of an automobile safety assistance system based on a radial basis function neural network according to claim 1, wherein the independent invariant moment parameter calculation formula is:
μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y);
where (x̄, ȳ) = (m₁₀/m₀₀, m₀₁/m₀₀) are the coordinates of the centre point of the region; μ_pq is the central moment of the region of the binarized image; m₀₀ is the zero-order geometric moment of the region, m₀₁ and m₁₀ are its first-order geometric moments, and m_pq is the geometric moment of order p+q of the region; p is the row order and q is the column order of the central moment of the binary image.
3. The method for identifying a front target of an automotive safety assistance system based on a radial basis function network according to claim 2, wherein the target identification characteristic parameters include: region eccentricity, ratio of short axis to long axis of the region, region area, region perimeter, and region compactness factor.
4. A method for identifying a front target of an automotive safety assistance system based on a radial basis function network according to claim 3, wherein the radial basis function network model is a three-layer neural network model:
the first layer is an input layer, and the feature vector is input into the network;
the second layer is the hidden layer, which is fully connected to the input layer; the hidden-layer nodes use a Gaussian radial basis function as the transfer function, calculated as:
R_i(x_p) = exp(-‖x_p - c_i‖² / (2σ²))
wherein ‖x_p - c_i‖ is the Euclidean norm, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
the third layer is the output layer; 2 output quantities are obtained through the weights between the hidden layer and the output layer, and the vehicle target is identified; from the structure of the radial basis function neural network, the network output is:
y_j = Σ_{i=1}^{h} ω_ij · exp(-‖x_p - c_i‖² / (2σ_i²)), j = 1, 2, …, n
wherein x_p is the p-th input sample, p = 1, 2, …, P, and P is the total number of samples; c_i is the center of the i-th hidden-layer node of the network; ω_ij is the connection weight from the hidden layer to the output layer, i = 1, 2, …, h, and h is the number of hidden-layer nodes; y_j is the actual output of the j-th output node of the network for the input sample, j = 1, 2, …, n;
wherein d_j is the expected output value of the sample.
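A sketch of the three-layer forward pass described in claim 4, under the assumption of the Gaussian transfer function with per-node spreads σ_i; shapes and names are illustrative:

```python
import numpy as np

def rbf_forward(X, centers, sigmas, W):
    """Forward pass of the three-layer RBF network: hidden node i fires
    exp(-||x_p - c_i||^2 / (2 sigma_i^2)); the output layer is a weighted
    sum over the h hidden nodes through the weights W (h x n)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (P, h)
    H = np.exp(-d2 / (2.0 * sigmas[None, :] ** 2))
    return H @ W  # (P, n); here n = 2 outputs, e.g. vehicle / non-vehicle

# Two hidden centers, identity weights: outputs echo the hidden activations
X = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = rbf_forward(X, centers, np.array([1.0, 1.0]), np.eye(2))
```

A sample sitting exactly on a center activates that hidden node fully (value 1), and activation decays with Euclidean distance at a rate set by σ_i.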
5. The method for identifying the front target of an automobile safety auxiliary system based on the radial basis function neural network according to claim 4, wherein the centers of the radial basis function neural network are obtained by a K-means clustering algorithm, the specific process being as follows:
Step one, randomly selecting h training samples as the cluster centers c_i, i = 1, 2, …, h;
Step two, calculating the Euclidean distance between each training sample and the cluster centers, and assigning each training sample to one of the cluster sets ψ_p (p = 1, 2, …, P) of the input samples according to this distance;
Step three, recalculating the average value of the training samples in each cluster set ψ_p to obtain a new cluster center c_i′;
Step four, repeating steps two and three until the change in the new cluster centers c_i′ is less than a given threshold; the resulting c_i′ are the final basis function centers of the radial basis function neural network.
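The four steps above can be sketched as follows; the random initialisation, tolerance, and iteration cap are illustrative assumptions:

```python
import numpy as np

def kmeans_centers(X, h, tol=1e-6, max_iter=100, seed=0):
    """Select the h basis function centers by the four steps of claim 5:
    random initial centers, Euclidean assignment, mean update, and
    repetition until the centers move less than tol."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=h, replace=False)]
    for _ in range(max_iter):
        # step two: assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step three: recompute each cluster mean (empty clusters keep
        # their old center)
        new = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                        else centers[i] for i in range(h)])
        # step four: stop once the change falls below the threshold
        if np.linalg.norm(new - centers) < tol:
            return new
        centers = new
    return centers

# Two tight clusters at (0, 0) and (5, 5)
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
c = kmeans_centers(X, 2)
```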
6. The method for identifying the front target of an automobile safety auxiliary system based on the radial basis function neural network according to claim 5, wherein the basis function variance is solved as:
σ_i = c_max / √(2h)
wherein σ_i is the basis function variance and c_max is the maximum distance between the selected centers.
7. The method for identifying the front target of an automobile safety auxiliary system based on the radial basis function neural network according to claim 6, wherein the connection weights between the hidden layer and the output layer are calculated by the least square method, the calculation formula being as follows:
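The patent's weight formula image is not reproduced in the text; a common realisation of claims 6 and 7 (the textbook spread rule σ_i = c_max/√(2h) and a least-squares solve for the hidden-to-output weights) can be sketched as an assumption, not the patent's exact formula:

```python
import numpy as np

def basis_variance(centers):
    """Shared spread sigma_i = c_max / sqrt(2h) from the maximum distance
    c_max between the h selected centers (textbook rule, assumed here)."""
    h = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return np.full(h, d.max() / np.sqrt(2.0 * h))

def solve_weights(H, D):
    """Least-squares fit of the hidden-to-output weights: minimise
    ||H W - D||^2 for hidden activations H (P x h) and expected outputs
    D (P x n), i.e. W = pinv(H) @ D."""
    W, *_ = np.linalg.lstsq(H, D, rcond=None)
    return W

centers = np.array([[0.0, 0.0], [3.0, 4.0]])  # c_max = 5, h = 2
sig = basis_variance(centers)                 # 5 / sqrt(4) = 2.5
W = solve_weights(np.eye(2), np.eye(2))       # identity fit
```

Solving the weights in closed form is what makes RBF training fast compared with backpropagation: only the centers and spreads need an iterative procedure.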
CN201911263299.9A 2019-12-11 2019-12-11 Front mesh identification method of automobile safety auxiliary system based on monocular vision neural network Active CN110991377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911263299.9A CN110991377B (en) 2019-12-11 2019-12-11 Front mesh identification method of automobile safety auxiliary system based on monocular vision neural network


Publications (2)

Publication Number Publication Date
CN110991377A CN110991377A (en) 2020-04-10
CN110991377B true CN110991377B (en) 2023-09-19

Family

ID=70092204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911263299.9A Active CN110991377B (en) 2019-12-11 2019-12-11 Front mesh identification method of automobile safety auxiliary system based on monocular vision neural network

Country Status (1)

Country Link
CN (1) CN110991377B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862608B (en) * 2020-07-22 2022-06-24 湖北文理学院 Vehicle driving road condition identification method, device, equipment and storage medium
CN112558510B (en) * 2020-10-20 2022-11-15 山东亦贝数据技术有限公司 Intelligent networking automobile safety early warning system and early warning method
CN114092701B (en) * 2021-12-04 2022-06-03 特斯联科技集团有限公司 Intelligent symbol identification method based on neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814160A (en) * 2010-03-08 2010-08-25 清华大学 RBF neural network modeling method based on feature clustering
CN103020582A (en) * 2012-09-20 2013-04-03 苏州两江科技有限公司 Method for computer to identify vehicle type by video image
CN106056147A (en) * 2016-05-27 2016-10-26 大连楼兰科技股份有限公司 System and method for establishing target division remote damage assessment of different vehicle types based artificial intelligence radial basis function neural network method
CN106960075A (en) * 2017-02-27 2017-07-18 浙江工业大学 The Forecasting Methodology of the injector performance of RBF artificial neural network based on linear direct-connected method
WO2018187953A1 (en) * 2017-04-12 2018-10-18 邹霞 Facial recognition method based on neural network
CN110261436A (en) * 2019-06-13 2019-09-20 暨南大学 Rail deformation detection method and system based on infrared thermal imaging and computer vision


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang Xiaojun et al. Vehicle type recognition technology based on radial basis function neural networks. Journal of Northwest University (Natural Science Online Edition), 2006, Vol. 4, No. 2, pp. 1-6. *
Shen Fenglong; Bi Juan. Research on a vehicle model recognition method based on multiple neural network classifiers. Journal of Eastern Liaoning University (Natural Science Edition), No. 3, full text. *
Yuan Yan; Ye Junhao; Su Lijuan. Target recognition based on an improved particle swarm radial basis function neural network. Journal of Computer Applications, No. S1, full text. *


Similar Documents

Publication Publication Date Title
CN110991377B (en) Front mesh identification method of automobile safety auxiliary system based on monocular vision neural network
CN108875608B (en) Motor vehicle traffic signal identification method based on deep learning
CN110263786B (en) Road multi-target identification system and method based on feature dimension fusion
Cao et al. A low-cost pedestrian-detection system with a single optical camera
CN112749616B (en) Multi-domain neighborhood embedding and weighting of point cloud data
JP2016062610A (en) Feature model creation method and feature model creation device
CN112990065B (en) Vehicle classification detection method based on optimized YOLOv5 model
CN108830254B (en) Fine-grained vehicle type detection and identification method based on data balance strategy and intensive attention network
CN115937655B (en) Multi-order feature interaction target detection model, construction method, device and application thereof
CN112381101B (en) Infrared road scene segmentation method based on category prototype regression
CN103620645A (en) Object recognition device
Seeger et al. Towards road type classification with occupancy grids
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
CN114821519A (en) Traffic sign identification method and system based on coordinate attention
CN112559968B (en) Driving style representation learning method based on multi-situation data
Danapal et al. Sensor fusion of camera and LiDAR raw data for vehicle detection
JP5407723B2 (en) Recognition device, recognition method, and program
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN115129886A (en) Driving scene recognition method and device and vehicle
CN114022705B (en) Self-adaptive target detection method based on scene complexity pre-classification
CN114821508A (en) Road three-dimensional target detection method based on implicit context learning
Rafi et al. Performance analysis of deep learning YOLO models for South Asian regional vehicle recognition
Rostami et al. An Image Dataset of Vehicles Front Views and Parts for Vehicle Detection, Localization and Alignment Applications
Guma et al. Design a hybrid approach for the classification and recognition of traffic signs using machine learning
Yalla et al. CHASE Algorithm:" Ease of Driving" Classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant