CN110991377A - Monocular visual neural network-based front target identification method for automobile safety auxiliary system - Google Patents

Monocular visual neural network-based front target identification method for automobile safety auxiliary system

Info

Publication number
CN110991377A
CN110991377A (application CN201911263299.9A; granted as CN110991377B)
Authority
CN
China
Prior art keywords
neural network
region
basis function
radial basis
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911263299.9A
Other languages
Chinese (zh)
Other versions
CN110991377B (en)
Inventor
陈学文
裴月莹
陈华清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning University of Technology
Original Assignee
Liaoning University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning University of Technology filed Critical Liaoning University of Technology
Priority to CN201911263299.9A priority Critical patent/CN110991377B/en
Publication of CN110991377A publication Critical patent/CN110991377A/en
Application granted granted Critical
Publication of CN110991377B publication Critical patent/CN110991377B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for identifying a front target of an automobile safety auxiliary system based on a radial basis function neural network, which comprises the following steps: acquiring a road condition image in front of the automobile and performing segmentation preprocessing on it; performing edge extraction on the preprocessed road condition image and searching it to obtain a region of interest; extracting features of the region of interest to obtain its corresponding edge features and region features; constructing a radial basis function neural network model with the edge features and region features as the input layer vector, and analyzing the input vector features in the neural network to obtain output quantities related to the target; and obtaining the corresponding vehicle target from the output quantities and outputting it as the recognition result.

Description

Monocular visual neural network-based front target identification method for automobile safety auxiliary system
Technical Field
The invention relates to the field of automobile safety auxiliary driving control, in particular to a method for identifying a front target of an automobile safety auxiliary system based on a monocular vision neural network.
Background
An automobile safety assisted driving system (ADAS) applies radar or machine vision to rapidly and accurately extract information about vehicles or obstacles ahead, and can warn the driver to avoid a collision or automatically control the vehicle to realize an early-warning or collision-avoidance function. Such a system must work not only under expressway driving conditions: in other driving environments, especially on urban roads, identifying the vehicle targets in front of the ADAS as well as non-motor vehicles, pedestrians, obstacles and other targets — above all electric bicycles — is particularly important, because collisions between non-motor vehicles (mainly electric bicycles) and motor vehicles occur frequently on urban roads and account for a large share of traffic accidents.
At present, little research has applied machine-vision detection and identification to distinguishing motor vehicles from electric bicycles. Most methods address vehicle detection and identification for driver assistance systems, for example by using the linear geometric characteristics of the vehicle or its symmetry, or by using special hardware such as computer-vision setups with color CCDs or binocular CCDs. There are also optical-flow-based methods, template matching, support vector machines, neural network training, multi-sensor information fusion, vehicle detection and recognition based on the AdaBoost method with a support-vector-machine classifier, and vehicle detection or recognition based on deep learning and fast region-based convolutional neural networks. In all of these vehicle detection methods, the final result is a region of interest in which a vehicle may exist, but false detections remain possible within that region; if the detection result inside the region of interest can be further confirmed, the accuracy of vehicle target identification can be greatly improved and the false-detection rate reduced, increasing the reliability of system identification.
Disclosure of Invention
The invention designs and develops a method for identifying the front target of an automobile safety auxiliary system based on a radial basis function neural network: edge extraction is performed on the preprocessed road condition image and the result is searched to obtain a sensitive region of interest as a possible vehicle region, and a radial basis function neural network vehicle recognizer is constructed from statistics of vehicle edge features and region feature parameters to classify motor vehicles and electric bicycles in the detection region.
The technical scheme provided by the invention is as follows:
a method for identifying a front target of an automobile safety auxiliary system based on a radial basis function neural network comprises the following steps:
acquiring a road condition image in front of an automobile, and performing segmentation preprocessing on the road condition image;
performing edge extraction on the preprocessed road condition image, and searching to obtain an interested area;
extracting the characteristics of the region of interest to obtain edge characteristics and region characteristics corresponding to the region of interest;
constructing a radial basis function neural network model by taking the edge characteristics and the region characteristics as input layer vectors, and analyzing the input layer vector characteristics in the neural network to obtain output quantity related to the target;
obtaining a corresponding vehicle target according to the output quantity, and outputting the vehicle target as a recognition result;
wherein the region of interest includes an electric bicycle and a motor vehicle.
Preferably, the edge features and region features corresponding to the region of interest include: edge feature parameters composed of discrete cosine transform descriptor coefficients and independent invariant moment parameters, and region description feature parameters.
Preferably, the descriptor coefficients of the discrete cosine transform are calculated by the following formula:
C(k)=|F(k)|/F(1);
wherein C(k) is the k-th discrete cosine transform descriptor coefficient; k is the index of the descriptor coefficient, k = 1,2,…,8; F(k) = X(k) + jY(k), with
X(k) = Σ_{m=1}^{N} x(m)·cos[π(2m−1)(k−1)/(2N)];
Y(k) = Σ_{m=1}^{N} y(m)·cos[π(2m−1)(k−1)/(2N)];
j is the imaginary unit of the complex plane; N is the number of feature points of the closed edge curve obtained by edge extraction after image segmentation; and f(m) = x(m) + jy(m), 1 ≤ m ≤ N, is the one-dimensional complex sequence of the curve.
Preferably, the independent invariant moment parameters are the normalized central moments:
η_pq = μ_pq / μ_00^γ, γ = (p+q)/2 + 1;
wherein
(x̄, ȳ) = (m10/m00, m01/m00)
is the coordinate of the center point of the region; μ_pq is the central moment of the region where the binarized image is located,
μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y);
m00 is the zero-order geometric moment of the region where the binary image is located, m01 and m10 are its first-order geometric moments, m_pq is its geometric moment of order p+q, p is the row order of the central moment of the binary image, and q is the column order of the central moment of the binary image.
Preferably, the target identification feature parameters include: region eccentricity, ratio of short axis to long axis of the region, region area, region perimeter, and region compactness factor.
Preferably, the radial basis function neural network model is a three-layer neural network model:
the first layer is an input layer, which feeds the feature vectors into the network;
the second layer is a hidden layer fully connected to the input layer; the hidden-layer nodes use a Gaussian radial basis function as the transfer function, calculated as:
R_i(x_p) = exp(−‖x_p − c_i‖² / (2σ²));
wherein ‖x_p − c_i‖ is the Euclidean norm, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
the third layer is an output layer; 2 output quantities are obtained through the weights between the hidden layer and the output layer, and the vehicle target is identified; from the structure of the radial basis function neural network, the output of the network is:
y_j = Σ_{i=1}^{h} ω_ij · exp(−‖x_p − c_i‖² / (2σ²)), j = 1,2,…,n;
wherein x_p is the p-th input sample, p = 1,2,…,P, with P the total number of samples; c_i is the center of a hidden-layer node of the network; ω_ij is the connection weight from the hidden layer to the output layer, i = 1,2,…,h, with h the number of hidden-layer nodes; y_j is the actual output of the j-th output node of the network for the input sample, j = 1,2,…,n; and the training error is
E = (1/2) Σ_{j=1}^{n} (d_j − y_j)²;
wherein d_j is the expected output value of the sample.
Preferably, the centers of the radial basis function neural network are obtained by a K-means clustering algorithm, as follows:
step one, randomly select h training samples as the clustering centers c_i, i = 1,2,…,h;
step two, calculate the Euclidean distance between each training sample and each clustering center, and assign each training sample to a cluster set ψ_p (p = 1,2,…,P) of the input samples according to that distance;
step three, recalculate the average value of the training samples in each cluster set ψ_p to obtain new clustering centers c_i′;
step four, repeat steps two and three until the change in the new clustering centers c_i′ is less than a given threshold; the resulting c_i′ are the final basis function centers of the radial basis function neural network.
Preferably, the basis function variance is solved as:
σ_i = c_max / √(2h), i = 1,2,…,h;
wherein σ_i is the variance of the basis function and c_max is the maximum distance between the selected centers.
Preferably, the connection weights between the hidden layer and the output layer are calculated by the least square method:
ω = exp( (h / c_max²) · ‖x_p − c_i‖² ), i = 1,2,…,h; p = 1,2,…,P.
the invention has the advantages of
The invention designs and develops a method for identifying the front target of an automobile safety auxiliary system based on a radial basis function neural network: edge extraction is performed on the preprocessed road condition image and the result is searched to obtain a sensitive region of interest as a possible vehicle region, and a radial basis function neural network vehicle recognizer is constructed from statistics of vehicle edge features and region feature parameters, thereby classifying motor vehicles and electric bicycles in the detection region, greatly improving the accuracy of vehicle target identification, and reducing the false-detection rate to increase the reliability of system identification.
Drawings
Fig. 1 is an image of road conditions in front of an automobile according to the present invention.
Fig. 2 is a schematic diagram of a vehicle search region of interest according to the present invention.
Fig. 3 is a diagram of the RBF neural network according to the present invention.
FIG. 4 is a schematic diagram of an error performance curve of the RBF neural network test according to the present invention.
Detailed Description
The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description.
As shown in fig. 1, the method for identifying the front target of the car safety assistance system based on the radial basis function neural network provided by the invention comprises the following steps:
acquiring a road condition image in front of an automobile, and performing segmentation preprocessing on the road condition image;
performing edge extraction on the preprocessed road condition image, and searching to obtain an interested area;
as shown in fig. 2, extracting vehicle features of the region of interest, and acquiring edge features and region features corresponding to the region of interest;
constructing a radial basis function neural network model by taking the edge characteristics and the region characteristics as input layer vectors, and analyzing the input layer vector characteristics in the neural network to obtain output quantity related to the target;
and obtaining a corresponding vehicle target according to the output quantity, and outputting the vehicle target as a recognition result.
In order to accurately identify a vehicle target, features specific to the vehicle need to be extracted from the detected image area. Feature extraction is the process of extracting and selecting feature information capable of representing an object from the object's original information. At present, methods for identifying a target's shape fall mainly into two types: one is recognition based on the shape of the target's edge, i.e. edge features; the other is shape recognition based on the area covered by the target, i.e. region features.
Feature extraction for the target should satisfy the following basic requirements:
(1) the extracted and selected features are insensitive to variable parameters of the target;
(2) the features are stable and easy to extract;
(3) the dimensionality of the feature quantities is markedly smaller than that of the target's original data;
(4) the correlation between feature quantities should be as small as possible;
(5) to improve classification accuracy, features that easily separate confusable classes are added.
Based on these feature extraction requirements, the invention extracts mixed features of the vehicle (including the electric bicycle), comprising both edge features and region features: edge feature parameters composed of 8 discrete cosine transform descriptor coefficients and 6 independent invariant moment parameters, plus 5 region description feature parameters — 19 feature parameters in total.
Preferably, the descriptor coefficients of the discrete cosine transform are computed as follows:
the target image is segmented and preprocessed, then edge extraction is performed to obtain contour data f(x_m, y_m); the closed edge curve composed of N points is placed on the complex plane to form a one-dimensional complex sequence,
f(m) = x(m) + jy(m); 1 ≤ m ≤ N,
where f(m) is the one-dimensional complex sequence. Applying the discrete cosine transform to this sequence gives
F(k) = X(k) + jY(k), with
X(k) = Σ_{m=1}^{N} x(m)·cos[π(2m−1)(k−1)/(2N)];
Y(k) = Σ_{m=1}^{N} y(m)·cos[π(2m−1)(k−1)/(2N)];
and the discrete cosine transform descriptor coefficients are calculated as:
C(k)=|F(k)|/F(1);
wherein C(k) is the k-th descriptor coefficient; k = 1,2,…,8 is the coefficient index; j is the imaginary unit of the complex plane; and N is the number of feature points of the closed edge curve obtained by edge extraction after image segmentation.
The discrete cosine transform coefficients are invariant to translation, rotation and scaling of the target and insensitive to the starting point of the contour data. The low-frequency coefficients reflect the overall outline of the image, while the high-frequency coefficients only represent contour details; since the discrete cosine transform requires neither complex-valued operations nor modulo computation on the data, a high recognition rate can be obtained with few feature quantities.
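The descriptor computation above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the patent: the function name, the direct cosine-sum evaluation, and the use of |F(1)| as the normalizer (so the ratios are real-valued) are all assumptions.

```python
import numpy as np

def dct_descriptors(xs, ys, n_coeffs=8):
    """Normalized cosine-transform descriptor coefficients C(k) of a closed
    edge curve, following C(k) = |F(k)| / |F(1)| (the modulus of F(1) is an
    assumption made here so that C(k) is real)."""
    pts = np.asarray(xs, dtype=float) + 1j * np.asarray(ys, dtype=float)
    n = len(pts)
    m = np.arange(1, n + 1)
    coeffs = []
    for k in range(1, n_coeffs + 1):
        # direct cosine-transform sum over the complex contour sequence
        basis = np.cos(np.pi * (2 * m - 1) * (k - 1) / (2 * n))
        coeffs.append(np.sum(pts * basis))
    f1 = abs(coeffs[0])
    return [abs(c) / f1 for c in coeffs]
```

Because every F(k) scales linearly with the contour coordinates, the ratios are scale-invariant, matching the invariance property claimed in the text.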
If a digital image is piecewise continuous and nonzero in only a finite region of the XY plane, the existence of its moments can be proven.
For a binary image, since the pixel values are only 0 and 1, taking the pixel value of the target area as 1 and that of the background area as 0, the p+q order moment of the binary image is:
m_pq = Σ_x Σ_y x^p y^q f(x, y);
and the central moments of this region are:
μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y);
wherein
(x̄, ȳ) = (m10/m00, m01/m00)
is the coordinate of the center point of the region; μ_pq is the central moment of the region where the binarized image is located; the normalized central moments
η_pq = μ_pq / μ_00^γ, γ = (p+q)/2 + 1
serve as the invariant moment parameters; m00 is the zero-order geometric moment of the region where the binary image is located, m01 and m10 are its first-order geometric moments, m_pq is its geometric moment of order p+q, p is the row order of the central moment of the binary image, and q is the column order of the central moment of the binary image.
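A minimal sketch of the moment computations just defined. The normalized-central-moment form η_pq = μ_pq / μ_00^γ with γ = (p+q)/2 + 1 is the standard scale-normalization and is assumed here; the function names are illustrative.

```python
import numpy as np

def central_moment(img, p, q):
    """mu_pq of a binary region: sum of (x - xc)^p (y - yc)^q over region pixels."""
    ys, xs = np.nonzero(img)                  # pixels where the region is 1
    xc, yc = xs.mean(), ys.mean()             # center (m10/m00, m01/m00)
    return np.sum((xs - xc) ** p * (ys - yc) ** q)

def normalized_moment(img, p, q):
    """eta_pq = mu_pq / mu_00^gamma, gamma = (p+q)/2 + 1 (scale-normalized)."""
    gamma = (p + q) / 2 + 1
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** gamma
```

For a binary region, mu_00 equals the pixel count and the first-order central moments vanish by construction; the normalized moments change only slightly with region size, which is the invariance the feature set relies on.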
The target identification feature parameters comprise: the region eccentricity — the eccentricity of the ellipse having the same second moments as the region, i.e. the ratio of the distance between the foci of that ellipse to the length of its major axis; the ratio of the minor axis to the major axis of the region; the region area S; the region perimeter L; and the region compactness coefficient 4πS/L².
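The five region descriptors just listed can be computed roughly as follows. The moment-based axis lengths and the boundary-pixel perimeter count are common approximations, not the patent's exact procedure, and the function name is illustrative.

```python
import numpy as np

def region_shape_features(img):
    """Eccentricity, minor/major axis ratio, area S, perimeter L, and
    compactness 4*pi*S/L^2 of a binary region."""
    ys, xs = np.nonzero(img)
    S = len(xs)
    xc, yc = xs.mean(), ys.mean()
    # second central moments of the region
    mu20 = np.mean((xs - xc) ** 2)
    mu02 = np.mean((ys - yc) ** 2)
    mu11 = np.mean((xs - xc) * (ys - yc))
    # eigenvalues of the covariance give the axes of the equivalent ellipse
    common = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam1 = (mu20 + mu02 + common) / 2
    lam2 = (mu20 + mu02 - common) / 2
    major, minor = 2 * np.sqrt(lam1), 2 * np.sqrt(lam2)
    ecc = np.sqrt(1 - lam2 / lam1)
    # perimeter: region pixels with at least one background 4-neighbour
    padded = np.pad(img.astype(bool), 1)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    L = S - interior.sum()
    return {"eccentricity": ecc, "axis_ratio": minor / major,
            "area": S, "perimeter": L,
            "compactness": 4 * np.pi * S / L ** 2}
```

A square region yields eccentricity 0 and axis ratio 1, while an elongated region (like a vehicle side profile) drives the eccentricity toward 1 — which is why these descriptors help separate target classes.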
The radial basis function neural network is a neural network learning method that expands or preprocesses input vectors into a high-dimensional space. It generalizes well and avoids the tedious iterative computation of methods such as the BP algorithm, thereby realizing rapid learning of the neural network.
As shown in fig. 3, the invention adopts an RBF neural network with self-organizing centers, taking as the network's input vector the edge feature parameters — composed of the 8 extracted discrete cosine transform descriptor coefficients and 6 independent invariant moment parameters — together with the 5 region description feature parameters. The network outputs a discrimination of which vehicle class is recognized. The designed RBF neural network therefore has a 19-dimensional input and 2 output nodes.
The radial basis function neural network model is a three-layer neural network model. The first layer is the input layer, which feeds the feature vectors into the network. The second layer is the hidden layer, fully connected to the input layer (with weight 1); it is equivalent to applying one transformation to the input pattern, mapping the low-dimensional input data into a high-dimensional space to help the output layer perform classification and identification. The hidden-layer nodes use a Gaussian radial basis function as the transfer function. The third layer is the output layer; 2 output quantities are obtained through the weights between the hidden layer and the output layer, and the vehicle target is identified.
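The three-layer structure just described amounts to a Gaussian hidden layer followed by a linear read-out. A minimal forward pass might look like this (shapes and names are illustrative assumptions):

```python
import numpy as np

def rbf_forward(x, centers, sigma, weights):
    """Forward pass of a three-layer RBF network: Gaussian hidden units
    R_i(x) = exp(-||x - c_i||^2 / (2 sigma^2)) followed by a linear output
    layer.  Shapes: x (d,), centers (h, d), weights (h, n_out)."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared Euclidean distances
    hidden = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian transfer function
    return hidden @ weights                   # weighted sum at output layer
```

For the recognizer in the text, `x` would be the 19-dimensional feature vector and `weights` would have 2 output columns, one per target class.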
The RBF neural network learning method with self-organizing center selection comprises two stages. Stage one is self-organizing learning — an unsupervised process that solves for the hidden-layer basis functions. Stage two is supervised learning, which solves for the weights between the hidden layer and the output layer.
The first layer is the input layer, which feeds the feature vectors into the network.
The second layer is the hidden layer, fully connected to the input layer; the hidden-layer nodes use a Gaussian radial basis function as the transfer function, calculated as:
R_i(x_p) = exp(−‖x_p − c_i‖² / (2σ²));
wherein ‖x_p − c_i‖ is the Euclidean norm, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function.
The third layer is the output layer; 2 output quantities are obtained through the weights between the hidden layer and the output layer, and the vehicle target is identified. From the structure of the radial basis function neural network, the output of the network is:
y_j = Σ_{i=1}^{h} ω_ij · exp(−‖x_p − c_i‖² / (2σ²)), j = 1,2,…,n;
wherein x_p is the p-th input sample, p = 1,2,…,P, with P the total number of samples; c_i is the center of a hidden-layer node of the network; ω_ij is the connection weight from the hidden layer to the output layer, i = 1,2,…,h, with h the number of hidden-layer nodes; y_j is the actual output of the j-th output node of the network for the input sample, j = 1,2,…,n; and the training error is
E = (1/2) Σ_{j=1}^{n} (d_j − y_j)²;
wherein d_j is the expected output value of the sample.
The centers of the radial basis function neural network are obtained by a K-means clustering algorithm, as follows:
step one, randomly select h training samples as the clustering centers c_i, i = 1,2,…,h;
step two, calculate the Euclidean distance between each training sample and each clustering center, and assign each training sample to a cluster set ψ_p (p = 1,2,…,P) of the input samples according to that distance;
step three, recalculate the average value of the training samples in each cluster set ψ_p to obtain new clustering centers c_i′;
step four, repeat steps two and three until the change in the new clustering centers c_i′ is less than a given threshold; the resulting c_i′ are the final basis function centers of the radial basis function neural network.
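Steps one through four above can be sketched as follows; the convergence threshold, iteration cap, and random seeding are illustrative choices, not values from the patent.

```python
import numpy as np

def kmeans_centers(samples, h, iters=100, tol=1e-6, seed=0):
    """Self-organizing center selection: pick h samples at random, assign
    every sample to its nearest center by Euclidean distance, move each
    center to the mean of its cluster, and repeat until the shift is small."""
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), h, replace=False)]
    for _ in range(iters):
        # distances of every sample to every center, shape (P, h)
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([samples[labels == i].mean(axis=0)
                        if np.any(labels == i) else centers[i]
                        for i in range(h)])
        shifted = np.linalg.norm(new - centers)
        centers = new
        if shifted < tol:          # step four: stop when centers settle
            break
    return centers
```

The returned centers are then used directly as the Gaussian basis-function centers c_i of the hidden layer.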
The basis function variance is solved as:
σ_i = c_max / √(2h), i = 1,2,…,h;
wherein σ_i is the variance of the basis function and c_max is the maximum distance between the selected centers.
The connection weights between the hidden layer and the output layer are calculated by the least square method:
ω = exp( (h / c_max²) · ‖x_p − c_i‖² ), i = 1,2,…,h; p = 1,2,…,P.
as shown in fig. 4, in order to verify the effectiveness of the ADAS system front target identification method based on the RBF neural network provided by the present invention, 850 vehicle samples and 850 electric bicycle samples are respectively established, the RBF neural network is trained, 60% positive and negative sample images are randomly selected to complete the test, and the RBF neural network identification accuracy can reach more than 94%. The error performance curve of the RBF neural network shows that the designed network meets the requirement of training errors.
The invention designs and develops a method for identifying the front target of an automobile safety auxiliary system based on a radial basis function neural network, which is used for extracting the edge of a preprocessed road condition image, searching the edge, acquiring a sensitive interest area as a possible vehicle area, and constructing a radial basis function neural network vehicle identifier by counting the edge characteristic and the area characteristic parameter of a vehicle, thereby realizing the classification of the vehicle and an electric bicycle in a detection area, greatly improving the accuracy of vehicle target identification, and reducing the false detection rate to increase the reliability of system identification.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable to various fields suited to it, and further modifications may readily be effected by those skilled in the art without departing from the general concept defined by the claims and their equivalents. The invention is therefore not limited to the specific details and the examples shown and described herein.

Claims (9)

1. A method for identifying a front target of an automobile safety auxiliary system based on a radial basis function neural network is characterized by comprising the following steps:
acquiring a road condition image in front of an automobile, and performing segmentation preprocessing on the road condition image;
performing edge extraction on the preprocessed road condition image, and searching to obtain an interested area;
extracting the characteristics of the region of interest to obtain edge characteristics and area characteristics corresponding to the region of interest;
constructing a radial basis function neural network model by taking the edge characteristics and the region characteristics as input layer vectors, and analyzing the input layer vector characteristics in the neural network to obtain output quantity related to the target;
obtaining a corresponding vehicle target according to the output quantity, and outputting the vehicle target as a recognition result;
the region of interest includes electric bicycles and motor vehicles.
2. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 1, wherein the vehicle edge features and region features corresponding to the region of interest comprise: edge feature parameters composed of discrete cosine transform descriptor coefficients and independent invariant moment parameters, and region description feature parameters.
3. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 2, wherein the coefficients of the discrete cosine transform are calculated as:
C(k) = |F(k)| / |F(1)|;
wherein C(k) is the k-th normalized discrete cosine transform coefficient,
|F(k)| = √(x(k)² + y(k)²);
k is the index of the discrete coefficients, k = 1, 2, …, 8; F(k) = x(k) + jy(k);
x(k) = √(2/N) Σ_{m=1}^{N} x(m) cos[(2m - 1)(k - 1)π / (2N)];
y(k) = √(2/N) Σ_{m=1}^{N} y(m) cos[(2m - 1)(k - 1)π / (2N)];
j is the imaginary unit of the complex plane; m = 1, 2, …, N indexes the feature points of the closed edge curve obtained by edge extraction after image segmentation; N is the number of feature points of the closed edge curve; and f(m) = x(m) + jy(m), 1 ≤ m ≤ N, is the one-dimensional complex sequence formed from the boundary point coordinates.
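A sketch of the descriptor computation in claim 3, assuming a standard DCT-II applied separately to the x and y boundary coordinates (the claim's original formula images are not reproduced in the text); normalizing by the first coefficient makes the descriptors scale-invariant:

```python
import numpy as np

def dct2(s):
    # DCT-II of a 1-D sequence (assumed form of the claimed transform).
    N = len(s)
    k = np.arange(N)[:, None]
    m = np.arange(N)[None, :]
    return np.sqrt(2.0 / N) * (np.cos(np.pi * (2 * m + 1) * k / (2 * N)) @ s)

def dct_descriptors(x, y, n_coeffs=8):
    # F(k) = x(k) + j*y(k); C(k) = |F(k)| / |F(1)| (1-indexed in the claim).
    F = dct2(np.asarray(x, float)) + 1j * dct2(np.asarray(y, float))
    mag = np.abs(F)
    return mag[:n_coeffs] / mag[0]

# Boundary of the same shape at two scales: the descriptors coincide.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
x = 3 + np.cos(t) + 2 * np.cos(2 * t)
y = 2 + np.sin(t)
c_small = dct_descriptors(x, y)
c_big = dct_descriptors(5 * x, 5 * y)
```

Because every coefficient scales linearly with the contour, the ratio C(k) is unchanged when the target appears larger or smaller in the image.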
4. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 2, wherein the independent invariant moment parameters are calculated as:
η_pq = μ_pq / μ_00^((p + q)/2 + 1);
wherein
x̄ = m_10 / m_00, ȳ = m_01 / m_00;
(x̄, ȳ) are the coordinates of the center point of the region, and μ_pq is the central moment of the region where the binary image is located:
μ_pq = Σ_x Σ_y (x - x̄)^p (y - ȳ)^q f(x, y);
m_00 is the zero-order geometric moment of the region where the binary image is located, m_01 and m_10 are its first-order geometric moments, m_pq is its geometric moment of order p + q, p is the row order of the central moment of the binary image, and q is the column order of the central moment of the binary image.
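The geometric and central moments of claim 4 can be sketched directly from their definitions; `f` is a binary region image, and the normalization shown last is one common way to make the moments scale-invariant (an assumption, since the claim's original formula image is not reproduced):

```python
import numpy as np

def geometric_moment(f, p, q):
    # m_pq = sum_x sum_y x^p * y^q * f(x, y); x indexes columns, y rows.
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return float(np.sum((x ** p) * (y ** q) * f))

def central_moment(f, p, q):
    m00 = geometric_moment(f, 0, 0)
    xb = geometric_moment(f, 1, 0) / m00  # x-coordinate of region center
    yb = geometric_moment(f, 0, 1) / m00  # y-coordinate of region center
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return float(np.sum(((x - xb) ** p) * ((y - yb) ** q) * f))

def normalized_moment(f, p, q):
    # eta_pq = mu_pq / mu_00^((p+q)/2 + 1), scale-invariant for p+q >= 2.
    return central_moment(f, p, q) / central_moment(f, 0, 0) ** ((p + q) / 2 + 1)

f = np.zeros((40, 40))
f[10:30, 5:25] = 1  # a 20x20 square region
```

By construction the first-order central moments of any region vanish, which is a quick sanity check on the implementation.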
5. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 2, wherein the target identification characteristic parameters comprise: the region eccentricity, the ratio of the region's short axis to its long axis, the region area, the region perimeter, and the region compactness factor.
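The region descriptors of claim 5 can be sketched for a polygonal boundary; the compactness convention used here (perimeter² / (4π · area), equal to 1 for a circle) and the covariance-based axis estimates are assumed definitions, since the claim does not define them:

```python
import numpy as np

def region_features(x, y):
    # x, y: closed boundary polygon vertices (last connects back to first).
    x2, y2 = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * y2 - x2 * y))          # shoelace formula
    perimeter = np.sum(np.hypot(x2 - x, y2 - y))
    compactness = perimeter ** 2 / (4 * np.pi * area)  # 1 for a perfect circle
    # Axis ratio and eccentricity from the covariance of the boundary points.
    cov = np.cov(np.vstack([x, y]))
    lmin, lmax = np.sort(np.linalg.eigvalsh(cov))
    axis_ratio = np.sqrt(lmin / lmax)                  # short axis / long axis
    eccentricity = np.sqrt(1 - lmin / lmax)
    return area, perimeter, compactness, axis_ratio, eccentricity

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
feats_circle = region_features(np.cos(t), np.sin(t))
```

On a unit circle the features take their extreme values (compactness near 1, eccentricity near 0), which separates round targets from elongated ones such as vehicles seen from behind.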
6. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 5, wherein the radial basis function neural network model is a three-layer neural network model:
the first layer is the input layer, which passes the feature vectors into the network;
the second layer is the hidden layer, fully connected to the input layer; its nodes use a Gaussian radial basis function as the transfer function, calculated as:
R_i(x_p) = exp(-||x_p - c_i||² / (2σ²));
wherein ||x_p - c_i|| is the Euclidean norm, c_i is the center of the Gaussian function, and σ is the variance of the Gaussian function;
the third layer is the output layer; the 2 output quantities are obtained by calculating the weights between the hidden layer and the output layer, and the vehicle target is identified from them; from the structure of the radial basis function neural network, the output of the network is:
y_j = Σ_{i=1}^{h} ω_ij exp(-||x_p - c_i||² / (2σ²)), j = 1, 2, …, n;
wherein x_p is the p-th input sample, p = 1, 2, …, P, and P is the total number of samples; c_i is the center of the i-th hidden layer node of the network, i = 1, 2, …, h, and h is the number of hidden layer nodes; ω_ij is the connection weight from the hidden layer to the output layer; y_j is the actual output of the j-th output node of the network for the input sample pair, j = 1, 2, …, n;
the error between the network output and the expected output is:
E = (1/2) Σ_{j=1}^{n} (d_j - y_j)²;
wherein d_j is the expected output value of the sample.
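The three-layer model of claim 6 reduces to a short forward pass; the centers, width, and weights below are placeholders (in the patent they come from claims 7 to 9):

```python
import numpy as np

def rbf_forward(X, centers, sigma, W):
    """X: (P, d) samples; centers: (h, d); W: (h, n) hidden-to-output weights."""
    # Hidden layer: Gaussian of the Euclidean distance to each center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))
    # Output layer: weighted sum over hidden nodes (2 outputs for 2 targets).
    return Phi @ W

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.eye(2)  # illustrative weights, one output per center
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = rbf_forward(X, centers, 1.0, W)
```

A sample sitting exactly on a center activates that hidden node fully (value 1) and the others by exp of the squared distance, which is the behavior the transfer function in the claim describes.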
7. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 6, wherein the centers of the radial basis function neural network are obtained by a K-means clustering algorithm, as follows:
Step one: randomly select h training samples as the clustering centers c_i, i = 1, 2, …, h;
Step two: calculate the Euclidean distance between each training sample and every clustering center, and assign each training sample to a cluster set ψ_p (p = 1, 2, …, P) of the input samples according to these distances;
Step three: recompute the mean of the training samples in each cluster set ψ_p to obtain the new clustering centers c_i′;
Step four: repeat Step two and Step three until the change in the new clustering centers c_i′ is less than a given threshold; the resulting c_i′ are the final basis function centers of the radial basis function neural network.
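The four steps of claim 7 are Lloyd-style K-means; a minimal sketch, in which the convergence threshold and seeding are illustrative choices:

```python
import numpy as np

def kmeans_centers(X, h, tol=1e-6, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=h, replace=False)]  # Step one
    for _ in range(max_iter):
        # Step two: assign each sample to its nearest center (Euclidean distance).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Step three: recompute each center as the mean of its cluster.
        new = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                        else centers[i] for i in range(h)])
        # Step four: stop when the centers move less than the threshold.
        if np.max(np.abs(new - centers)) < tol:
            return new
        centers = new
    return centers

# Two well-separated 1-D clusters; the centers land on the cluster means.
X = np.concatenate([np.linspace(0, 1, 50), np.linspace(10, 11, 50)])[:, None]
c = kmeans_centers(X, 2)
```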
8. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 6, wherein the variance of the basis functions is solved as:
σ_i = c_max / √(2h);
wherein σ_i is the variance of the basis function and c_max is the maximum distance between the selected centers.
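Given the centers from claim 7, the width formula of claim 8 is a one-liner; a small sketch:

```python
import numpy as np

def basis_variance(centers, h=None):
    # sigma_i = c_max / sqrt(2h), with c_max the largest inter-center distance.
    c = np.asarray(centers, float)
    h = len(c) if h is None else h
    d = np.sqrt(((c[:, None] - c[None, :]) ** 2).sum(-1))
    return d.max() / np.sqrt(2 * h)

sigma = basis_variance([[0.0], [3.0], [4.0]])  # c_max = 4, h = 3
```

Tying the width to the center spread keeps each Gaussian neither so narrow that it ignores its neighbors nor so wide that all hidden nodes respond identically.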
9. The method for identifying the front target of the automobile safety auxiliary system based on the radial basis function neural network as claimed in claim 6, wherein the connection weights between the hidden layer and the output layer are calculated by the least square method:
ω = exp((h / c_max²) ||x_p - c_i||²), p = 1, 2, …, P; i = 1, 2, …, h.
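Whatever closed form the claim's original image contains, the least-squares step can be sketched as solving Φω ≈ d for the hidden-to-output weights; `np.linalg.lstsq` returns the minimum-norm solution:

```python
import numpy as np

def train_weights(Phi, D):
    """Phi: (P, h) hidden-layer activations; D: (P, n) expected outputs."""
    # Least-squares solution of Phi @ W = D.
    W, *_ = np.linalg.lstsq(Phi, D, rcond=None)
    return W

# With activations generated from known weights, least squares recovers them.
rng = np.random.default_rng(1)
Phi = rng.random((20, 4))    # activations for 20 samples, 4 hidden nodes
W_true = rng.random((4, 2))  # "unknown" weights for 2 output nodes
W_est = train_weights(Phi, Phi @ W_true)
```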
CN201911263299.9A 2019-12-11 2019-12-11 Front target identification method of automobile safety auxiliary system based on monocular vision neural network Active CN110991377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911263299.9A CN110991377B (en) 2019-12-11 2019-12-11 Front target identification method of automobile safety auxiliary system based on monocular vision neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911263299.9A CN110991377B (en) 2019-12-11 2019-12-11 Front target identification method of automobile safety auxiliary system based on monocular vision neural network

Publications (2)

Publication Number Publication Date
CN110991377A true CN110991377A (en) 2020-04-10
CN110991377B CN110991377B (en) 2023-09-19

Family

ID=70092204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911263299.9A Active CN110991377B (en) Front target identification method of automobile safety auxiliary system based on monocular vision neural network

Country Status (1)

Country Link
CN (1) CN110991377B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814160A (en) * 2010-03-08 2010-08-25 清华大学 RBF neural network modeling method based on feature clustering
CN103020582A (en) * 2012-09-20 2013-04-03 苏州两江科技有限公司 Method for computer to identify vehicle type by video image
CN106056147A (en) * 2016-05-27 2016-10-26 大连楼兰科技股份有限公司 System and method for establishing target division remote damage assessment of different vehicle types based artificial intelligence radial basis function neural network method
CN106960075A (en) * 2017-02-27 2017-07-18 浙江工业大学 The Forecasting Methodology of the injector performance of RBF artificial neural network based on linear direct-connected method
WO2018187953A1 (en) * 2017-04-12 2018-10-18 邹霞 Facial recognition method based on neural network
CN110261436A (en) * 2019-06-13 2019-09-20 暨南大学 Rail deformation detection method and system based on infrared thermal imaging and computer vision


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang Xiaojun et al.: "Vehicle type recognition technology based on radial basis function neural network" *
Shen Fenglong; Bi Juan: "Research on a vehicle type recognition method based on multi-neural-network classifiers" *
Yuan Yan; Ye Junhao; Su Lijuan: "Target recognition based on an improved particle swarm radial basis function neural network" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862608A (en) * 2020-07-22 2020-10-30 湖北文理学院 Vehicle driving road condition identification method, device, equipment and storage medium
CN111862608B (en) * 2020-07-22 2022-06-24 湖北文理学院 Vehicle driving road condition identification method, device, equipment and storage medium
CN112558510A (en) * 2020-10-20 2021-03-26 山东亦贝数据技术有限公司 Intelligent networking automobile safety early warning system and early warning method
CN114092701A (en) * 2021-12-04 2022-02-25 特斯联科技集团有限公司 Intelligent symbol identification method based on neural network

Also Published As

Publication number Publication date
CN110991377B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
Zhou et al. Split depth-wise separable graph-convolution network for road extraction in complex environments from high-resolution remote-sensing images
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN107292291B (en) Vehicle identification method and system
CN110263786B (en) Road multi-target identification system and method based on feature dimension fusion
CN110991377B (en) Front target identification method of automobile safety auxiliary system based on monocular vision neural network
CN112749616B (en) Multi-domain neighborhood embedding and weighting of point cloud data
JP2016062610A (en) Feature model creation method and feature model creation device
Wang et al. Probabilistic inference for occluded and multiview on-road vehicle detection
Kuang et al. Feature selection based on tensor decomposition and object proposal for night-time multiclass vehicle detection
Guindel et al. Fast joint object detection and viewpoint estimation for traffic scene understanding
CN112990065B (en) Vehicle classification detection method based on optimized YOLOv5 model
CN108960074B (en) Small-size pedestrian target detection method based on deep learning
CN115937655B (en) Multi-order feature interaction target detection model, construction method, device and application thereof
CN105989334A (en) Monocular vision-based road detection method
CN111860269A (en) Multi-feature fusion tandem RNN structure and pedestrian prediction method
Lin et al. Application research of neural network in vehicle target recognition and classification
Seeger et al. Towards road type classification with occupancy grids
Thubsaeng et al. Vehicle logo detection using convolutional neural network and pyramid of histogram of oriented gradients
Al Mamun et al. Lane marking detection using simple encode decode deep learning technique: SegNet
CN114913498A (en) Parallel multi-scale feature aggregation lane line detection method based on key point estimation
Barodi et al. An enhanced artificial intelligence-based approach applied to vehicular traffic signs detection and road safety enhancement
Liang et al. Car detection and classification using cascade model
CN114821519A (en) Traffic sign identification method and system based on coordinate attention
CN112559968A (en) Driving style representation learning method based on multi-situation data
CN101996315A (en) System, method and program product for camera-based object analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant