CN103279759B - Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network - Google Patents

Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network

Info

Publication number
CN103279759B
Authority
CN
China
Prior art keywords
image
gray
pixel
layer
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310234126.0A
Other languages
Chinese (zh)
Other versions
CN103279759A (en)
Inventor
李琳辉
连静
王蒙蒙
丁新立
宗云鹏
化玉伟
王宏旭
常静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN201310234126.0A
Publication of CN103279759A
Application granted
Publication of CN103279759B
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The present invention discloses a method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network, comprising the following steps: first, a large number of images of the real driving environment are captured by a camera mounted at the front of the vehicle; the images are then preprocessed with a Gamma correction function; finally, a convolutional neural network is trained. The invention preprocesses images with a Gamma correction method built by superposing nonlinear functions, which avoids the influence of strongly varying illumination on target recognition and improves the discriminability of the image. The invention also adopts a geometric normalization method that reduces the resolution differences caused by the varying distance between targets and the camera. The convolutional neural network LeNet-5 adopted by the invention extracts implicit features with classification power through a simple extraction process; by combining local receptive fields, weight sharing and subsampling, LeNet-5 is robust to simple geometric distortion while reducing the number of training parameters and simplifying the network structure.

Description

Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network
Technical field
The invention belongs to the technical fields of driver-assistance safety and intelligent transportation, and relates to a method for analyzing the trafficability ahead of a vehicle; more specifically, it relates to a method that captures video images ahead of the vehicle with a camera and analyzes their trafficability with a convolutional neural network.
Background art
Analyzing the trafficability ahead of a vehicle belongs to the environment-perception category of the intelligent transportation field. Based on advanced means such as sensor technology, computer technology and communication technology, the captured environment is analyzed for driving safety, potential hazards are identified, and prompts and early warnings are issued to the driver, or a foundation is laid for the navigation of driverless vehicles. At present, research that captures video images ahead of the vehicle with a camera and analyzes trafficability through visual image understanding mainly covers obstacle detection, pedestrian detection, vehicle detection, road detection, traffic-sign detection and terrain classification.
The visual image understanding methods involved in trafficability analysis can be divided into reconstruction-based methods and recognition-based methods. Reconstruction-based methods rely on three-dimensional or 2.5-dimensional reconstruction and judge passability from the geometry of the space; they can hardly avoid the severe ambiguity inherent in three-dimensional reconstruction, the limited reconstruction range, and poor real-time performance. Recognition-based image understanding methods are mainly algorithms built on modeling and template matching, typically neural networks, support vector machines, self-supervised learning and statistical learning. These methods must extract explicit features of the target: the extraction process is complicated, important information is easily lost, and the ability to adapt to the environment is poor.
In a structured road environment with strong illumination variation, recognizing targets directly from the original image suffers from heavy interference, the extraction of explicit features is complicated, and the varying distance between targets and the camera causes differences in resolution. In addition, illumination changes degrade image quality and reduce the discriminability of the image.
Summary of the invention
To solve the above problems of the prior art, the present invention proposes a method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network. The method extracts implicit features of the target through a simple extraction process, avoids reducing the resolution of the image, and reduces the influence of illumination, making it suitable for structured road environments with strong illumination variation.
The technical scheme of the present invention is a method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network, comprising the following steps:
A. Image acquisition
First, a large number of images of the real driving environment are captured by a camera mounted at the front of the vehicle; the bottom three-fifths of each image is then cut out as the region of interest; finally, the cropped image is converted to grayscale.
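As a minimal sketch of this step, the following Python/OpenCV snippet crops the lower three-fifths of a 640 × 480 frame and converts it to grayscale; the function name and the assumption of BGR input are illustrative, not from the patent.

```python
import cv2

def acquire_roi(frame):
    """Keep the bottom 3/5 of the frame as the region of interest and
    convert it to grayscale (a 640x480 frame yields a 640x288 ROI)."""
    h = frame.shape[0]              # e.g. 480 rows
    roi = frame[(h * 2) // 5:, :]   # drop the top 2/5 (192 rows)
    return cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
```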
B. Image preprocessing
B1. A Gamma correction function is constructed by superposing nonlinear functions, and the grayscale image obtained in step A is corrected with it. The function is:
G(x) = 1 + f1(x) + f2(x) + f3(x)    (1)
f1(x) = a·cos(πx/255)    (2)
f2(x) = (K(x) + b)·cos θ + x·sin θ    (3)
K(x) = λ·sin(4πx/255)    (4)
θ = arctan(−2b/255)    (5)
f3(x) = R(x)·cos(3πx/255)    (6)
R(x) = c·|2x/255 − 1|    (7)
In the formulas, x is the gray value of a pixel, G(x) is the Gamma value corresponding to that gray value, a ∈ (0, 1) is a weighting coefficient, b is the maximum variation range of f2(x), λ is the amplitude of K(x), θ is the deflection angle of K(x), and c is the amplitude of R(x); the coefficients satisfy a + b + c < 1.
The gray value after Gamma correction is computed as:
g(x) = 255·(x/255)^(1/G(x))    (8)
where g(x) is the gray value of the pixel after Gamma correction.
After Gamma correction, the grayscale image P is obtained.
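A minimal sketch of formulas (1)-(8) in Python/NumPy follows; the parameter `amp` stands in for the amplitude of K(x) (its original symbol is not recoverable from the source text), and the default values a = 0.2, b = 0.3, c = 0.3 are the ones suggested later in the embodiment.

```python
import numpy as np

def gamma_correct(gray, a=0.2, b=0.3, c=0.3, amp=0.3):
    """Apply the superposed-nonlinear-function Gamma correction,
    formulas (1)-(8), to a uint8 grayscale image."""
    x = gray.astype(np.float64)
    theta = np.arctan(-2.0 * b / 255.0)               # (5) deflection angle
    K = amp * np.sin(4.0 * np.pi * x / 255.0)         # (4)
    f1 = a * np.cos(np.pi * x / 255.0)                # (2)
    f2 = (K + b) * np.cos(theta) + x * np.sin(theta)  # (3)
    R = c * np.abs(2.0 * x / 255.0 - 1.0)             # (7)
    f3 = R * np.cos(3.0 * np.pi * x / 255.0)          # (6)
    G = 1.0 + f1 + f2 + f3                            # (1) per-pixel Gamma
    g = 255.0 * (x / 255.0) ** (1.0 / G)              # (8) corrected value
    return np.clip(g, 0, 255).astype(np.uint8)
```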
B2. For the grayscale image P, the gray values of certain pixels are changed as follows:
In the image regions other than vehicles and road boundaries, pixels with gray value 0 are changed to 1 and pixels with gray value 255 are changed to 254; the gray values of pixels in vehicle regions are then set to 0 and those in road-boundary regions to 255. The image obtained after these changes is the grayscale image Q. The pixels of Q thus fall into three classes: the first class, with gray value 0, represents vehicles; the second class, with gray value 255, represents road boundaries; the third class, with gray values other than 0 and 255, represents the road surface. Each class is given a label: label "0" is assigned to first-class pixels, meaning "vehicle"; label "1" to second-class pixels, meaning "road boundary"; and label "2" to third-class pixels, meaning "road surface". Finally, the label of each pixel of Q is assigned to the corresponding pixel of P.
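The following sketch shows one way to build Q and the label map, assuming hand-annotated boolean masks for the vehicle and road-boundary regions (the patent does not state how these regions are marked):

```python
import numpy as np

def label_image(P, vehicle_mask, boundary_mask):
    """Build the label image Q from the corrected grayscale image P.
    vehicle_mask / boundary_mask are assumed boolean arrays of P's shape.
    Returns Q and the per-pixel label map (0 vehicle, 1 boundary, 2 road)."""
    Q = P.copy()
    bg = ~(vehicle_mask | boundary_mask)
    Q[bg & (Q == 0)] = 1        # free gray value 0 for the vehicle class
    Q[bg & (Q == 255)] = 254    # free gray value 255 for the boundary class
    Q[vehicle_mask] = 0         # class "vehicle"
    Q[boundary_mask] = 255      # class "road boundary"
    labels = np.full(Q.shape, 2, dtype=np.uint8)   # default: road surface
    labels[Q == 0] = 0
    labels[Q == 255] = 1
    return Q, labels
```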
B3. The size of the grayscale image P is normalized:
B31. Along the height of the image, pixel rows are sampled at intervals and denoted by n; by actual measurement, the pixel width and height of the targets appearing at each sampled row n are obtained.
B32. The image region spanning pixel heights 0-32 is taken as the reference region, and the pixel width W and height H of the targets identified in it serve as the benchmark, i.e. the horizontal and vertical cropping scale factors of the reference region are set to 1. The widths and heights of the targets on the remaining sampled rows are divided by W and H respectively, yielding two groups of ratios, denoted Y and Z.
B33. Finally, the row indices n are fitted against the two groups of ratios Y and Z, giving two fitted lines:
Y = k1·n + b1    (9)
Z = k2·n + b2    (10)
where Y is the horizontal cropping scale factor of the image, Z the vertical cropping scale factor, n a pixel row of the image, k1 and k2 the slopes of the two fitted lines, and b1 and b2 their intercepts.
B34. With the horizontal and vertical cropping ratios of the reference region both set to 1, the reference region is cut into image samples of 32 × 32 pixels. As n increases, the horizontal and vertical cropping sizes obtained from formulas (9) and (10) increase correspondingly. Cropping thus yields a series of image samples of different sizes, which are finally all rescaled to 32 × 32 pixels. The resulting 32 × 32 pixel images serve as the training samples of the convolutional neural network.
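A sketch of this normalization in Python follows: lines (9) and (10) are fitted from the measured samples, and a row-dependent patch is cut and rescaled to 32 × 32. The function names and the placement of the patch relative to (row, col) are illustrative assumptions.

```python
import numpy as np
import cv2

def fit_scale_lines(n, widths, heights, W, H):
    """Fit the crop-scale lines (9) and (10) from sampled rows n and the
    measured target widths/heights; W, H are the reference-region sizes."""
    Y = np.asarray(widths, dtype=float) / W
    Z = np.asarray(heights, dtype=float) / H
    k1, b1 = np.polyfit(n, Y, 1)
    k2, b2 = np.polyfit(n, Z, 1)
    return (k1, b1), (k2, b2)

def crop_sample(P, row, col, k1, b1, k2, b2):
    """Cut a patch whose size grows with the row index per (9)/(10),
    then rescale it to the 32x32 network input size."""
    w = max(1, int(round(32 * (k1 * row + b1))))
    h = max(1, int(round(32 * (k2 * row + b2))))
    patch = P[row:row + h, col:col + w]
    return cv2.resize(patch, (32, 32))
```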
C. Training of the convolutional neural network
The typical convolutional neural network LeNet-5 consists of 8 layers; the input layer is a 32 × 32 pixel image. Network layers C1, C3 and C5 are convolutional layers, layers S2 and S4 are subsampling layers, and layer F6 is a fully connected layer; the number of output-layer neurons equals the number of target classes to be recognized and changes with the actual application environment. Each plane of a layer is a feature map, the set of neurons in that layer sharing the same weights; each neuron of a layer is connected only to a local receptive field of the previous layer.
The j-th neuron N_j^l of the l-th layer (a convolutional layer) can be expressed as:
N_j^l = f( Σ_{i∈M_j} N_i^{l−1} * k_ij^l + O_j^l )    (11)
where f(·) is the activation function, the superscript l ∈ {1, 3, 5} is the layer index, the subscripts i, j = 1, 2, 3, … are natural numbers indexing the neurons of layers l−1 and l, N denotes a neuron, k is a convolution kernel, M_j is a selection of input feature maps, and O is a bias.
The j-th neuron N_j^l of the l-th layer (a subsampling layer) can be expressed as:
N_j^l = f( β_j^l · down(N_j^{l−1}) + O_j^l )    (12)
where f(·) is the activation function, l ∈ {2, 4} is the layer index, j = 1, 2, 3, … indexes the neurons of layers l−1 and l, N denotes a neuron, down(·) is the subsampling function, generally a summation over each n × n region of the previous layer's map, β is the weight of the subsampling layer, and O is a bias.
The number of output neurons of LeNet-5 is adjusted according to the actual application environment, and the network is then trained with the 32 × 32 pixel image samples obtained in step B. Once the error between the network's output and the expected value lies within an acceptable range, the training yields a convolutional neural network that can be used for analyzing the trafficability ahead of the vehicle.
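The layer arrangement above can be sketched in PyTorch as follows. Average pooling stands in for the subsampling layers and a plain linear layer replaces LeNet-5's original RBF output; both are common modern substitutions, not the patent's exact formulation.

```python
import torch.nn as nn

class LeNet5(nn.Module):
    """Sketch of the C1-S2-C3-S4-C5-F6 layout with 3 output neurons
    (vehicle / road boundary / road surface)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5),     # C1: 1x32x32 -> 6x28x28
            nn.Tanh(),
            nn.AvgPool2d(2),        # S2: -> 6x14x14
            nn.Conv2d(6, 16, 5),    # C3: -> 16x10x10
            nn.Tanh(),
            nn.AvgPool2d(2),        # S4: -> 16x5x5
            nn.Conv2d(16, 120, 5),  # C5: -> 120x1x1
            nn.Tanh(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(120, 84),     # F6
            nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```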
Compared with the prior art, the effects and benefits of the present invention are:
1. The invention preprocesses images with a Gamma correction method built by superposing nonlinear functions, which avoids the influence of strongly varying illumination on target recognition and improves the discriminability of the image.
2. The invention adopts a geometric normalization method that reduces the resolution differences caused by the varying distance between recognized targets and the camera.
3. The convolutional neural network LeNet-5 adopted by the invention extracts implicit features with classification power through a simple extraction process. By combining local receptive fields, weight sharing and subsampling, LeNet-5 is robust to simple geometric distortion while reducing the number of training parameters and simplifying the network structure. The number of neurons in the LeNet-5 output layer can be adjusted to the actual application environment, giving strong environmental adaptability.
Description of the drawings
The present invention has 3 accompanying drawings:
Fig. 1 is the flow chart of the method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network.
Fig. 2 is the structure diagram of the convolutional neural network LeNet-5.
Fig. 3 shows the sample set used to train the convolutional neural network.
Embodiment
The present invention is further described below with reference to the accompanying drawings. Fig. 1 shows the flow chart of the method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network. Taking the structured environment of a motorway, the invention divides the environment ahead of the vehicle into vehicles, road boundaries and road surface.
The analysis process of the invention comprises image acquisition, image preprocessing, and training of the convolutional neural network.
A. Image acquisition
A large number of images of the real motorway driving environment (640 × 480 pixels) are captured by the camera mounted at the front of the vehicle; the bottom three-fifths of each image (640 × 288 pixels) is then taken as the region of interest to reduce the subsequent workload; finally, the cropped image is converted to grayscale.
B. Image preprocessing
First step: Gamma correction has a clear advantage in reducing the effect of illumination. Generally, when the Gamma value is greater than 1, the highlights of the image are compressed and the shadows expanded; when the Gamma value is less than 1, the highlights are expanded and the shadows compressed. A Gamma correction function is constructed by superposing nonlinear functions, and the grayscale image obtained in step A is corrected with it:
G(x) = 1 + f1(x) + f2(x) + f3(x)    (1)
f1(x) = a·cos(πx/255)    (2)
f2(x) = (K(x) + b)·cos θ + x·sin θ    (3)
K(x) = λ·sin(4πx/255)    (4)
θ = arctan(−2b/255)    (5)
f3(x) = R(x)·cos(3πx/255)    (6)
R(x) = c·|2x/255 − 1|    (7)
In the formulas, x is the gray value of a pixel, G(x) is the Gamma value corresponding to that gray value, a ∈ (0, 1) is a weighting coefficient, b is the maximum variation range of f2(x), λ is the amplitude of K(x), θ is the deflection angle of K(x), and c is the amplitude of R(x); the coefficients satisfy a + b + c < 1. Generally a < b, c; one may take a = 0.2, b = 0.3, c = 0.3.
The gray value after Gamma correction is computed as:
g(x) = 255·(x/255)^(1/G(x))    (8)
where g(x) is the gray value of the pixel after Gamma correction.
After Gamma correction, the grayscale image P is obtained.
Second step: for the Gamma-corrected grayscale image P, pixels with gray value 0 outside the vehicle and road-boundary regions are selected and their gray value is changed programmatically to 1, and pixels with gray value 255 outside those regions are changed to 254. The gray values of the pixels in vehicle regions are then set to 0 and those in road-boundary regions to 255, yielding the grayscale image Q. The pixels of Q thus fall into three classes: the first class, with gray value 0, represents vehicles; the second class, with gray value 255, represents road boundaries; the third class, with gray values other than 0 and 255, represents the road surface. Each class is then given a label programmatically: label "0" is assigned to first-class pixels, meaning "vehicle"; label "1" to second-class pixels, meaning "road boundary"; and label "2" to third-class pixels, meaning "road surface". Finally, the label of each pixel of Q is assigned to the corresponding pixel of P.
Third step: the size of the grayscale image P is normalized.
In an image, the number of pixels occupied by a given target is strongly affected by its distance from the camera: the pixel count of any target is inversely related to that distance. The invention proposes a new geometric normalization method that normalizes the size of the grayscale image without reducing its resolution, thereby reducing the resolution differences caused by the varying distance between recognized targets and the camera. First, pixel rows are sampled at intervals along the image height and denoted by n, with n = 15, 32, 60, 65, 68, 72, 75, 82, 85, 87, 92, 96, 100, 108, 111, 113, 124, 130, 138, 143, 150, 160; by actual measurement, the pixel width and height of the targets at each sampled row are obtained. Secondly, the image region spanning pixel heights 0-32 is taken as the reference, with the pixel width W and height H of the targets in it as the benchmark, i.e. the horizontal and vertical cropping scale factors of the reference region are 1. Finally, the widths and heights of the targets on the remaining sampled rows are divided by W and H respectively, giving two groups of ratios Y and Z; the row data n are then fitted programmatically against the Y and Z data, giving two fitted lines:
Y = 0.0312·n − 0.8339    (9)
Z = 0.0360·n − 1.0590    (10)
where Y is the horizontal cropping scale factor of the image, Z the vertical cropping scale factor, and n a pixel row of the image.
The horizontal and vertical cropping ratios of the reference region are 1, so the reference region is cut into image samples of 32 × 32 pixels. As n increases, the horizontal and vertical cropping sizes obtained from formulas (9) and (10) increase correspondingly; at row n = 100, for example, (9) and (10) give Y ≈ 2.29 and Z ≈ 2.54, so a patch of roughly 73 × 81 pixels is cut. Cropping thus yields a series of image samples of different sizes, which are finally all rescaled programmatically to 32 × 32 pixels. The resulting 32 × 32 pixel images serve as the training samples of the convolutional neural network.
C. Training of the convolutional neural network
The invention adopts the typical convolutional neural network LeNet-5, whose structure is shown in Fig. 2. LeNet-5 consists of 8 layers; the input image is 32 × 32 pixels. Network layers C1, C3 and C5 are convolutional layers, layers S2 and S4 are subsampling layers, and layer F6 is a fully connected layer; the number of output-layer neurons equals the number of target classes to be recognized and can be changed with the actual application environment. Each plane of a layer is a feature map, the set of neurons in that layer sharing the same weights; each neuron of a layer is connected only to a local receptive field of the previous layer (counting from the input layer).
Convolutional layer C1 consists of 6 feature maps of size 28 × 28; each neuron of a feature map is connected to a 5 × 5 neighborhood of the input image, and C1 contains 156 trainable parameters and 122,304 trainable connections. Subsampling layer S2 consists of 6 feature maps of size 14 × 14; each neuron is connected to a 2 × 2 neighborhood in C1, and S2 has 12 trainable parameters and 5,880 trainable connections. Convolutional layer C3 consists of 16 feature maps of size 10 × 10; each neuron is connected to a 5 × 5 neighborhood of S2, and C3 contains 1,516 trainable parameters and 151,600 trainable connections. Subsampling layer S4 consists of 16 feature maps of size 5 × 5; each neuron is connected to a 2 × 2 neighborhood in C3, and S4 contains 32 trainable parameters and 2,000 trainable connections. Convolutional layer C5 consists of 120 feature maps; each neuron is connected to a 5 × 5 neighborhood of all the feature maps of S4, and C5 contains 48,120 trainable parameters and 48,120 trainable connections. Layer F6 is fully connected to C5 and contains 10,164 trainable parameters. The output layer is composed of RBF units.
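As a quick sanity check on the C1 figures above (not part of the patent): each of the 6 feature maps has one 5 × 5 kernel plus a bias, applied at every position of a 28 × 28 map.

```python
params_c1 = 6 * (5 * 5 + 1)      # 6 maps x (25 weights + 1 bias) = 156
conns_c1 = params_c1 * 28 * 28   # applied at 784 positions = 122304
print(params_c1, conns_c1)       # -> 156 122304
```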
The j-th neuron N_j^l of the l-th layer (a convolutional layer) can be expressed as:
N_j^l = f( Σ_{i∈M_j} N_i^{l−1} * k_ij^l + O_j^l )    (11)
where f(·) is the activation function, l ∈ {1, 3, 5} is the layer index, i, j = 1, 2, 3, … are natural numbers indexing the neurons of layers l−1 and l, N denotes a neuron, k is a convolution kernel, M_j is a selection of input feature maps, and O is a bias.
The j-th neuron N_j^l of the l-th layer (a subsampling layer) can be expressed as:
N_j^l = f( β_j^l · down(N_j^{l−1}) + O_j^l )    (12)
where f(·) is the activation function, l ∈ {2, 4} is the layer index, j = 1, 2, 3, … indexes the neurons of layers l−1 and l, N denotes a neuron, down(·) is the subsampling function, a summation over each square region of the previous layer's map, β is the weight of the subsampling layer, and O is a bias.
The invention targets the motorway environment and divides the environment ahead of the vehicle into road surface, vehicles and road boundaries, so the number of output neurons of LeNet-5 is set to 3: output 0 indicates that the recognized target is a vehicle, output 1 a road boundary, and output 2 the road surface. The training sample set contains 5,000 samples, part of which is shown in Fig. 3. The initial network weights are generated randomly. Trained on samples of known class in this way, the network acquires the mapping between inputs and outputs. Once the error between the network's output and the expected value lies within an acceptable range, the training yields a convolutional neural network that can be used for analyzing the trafficability ahead of the vehicle.
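A hedged sketch of such supervised training, continuing the PyTorch model above: the patent only states that training stops once the output error is acceptable, so the optimizer, learning rate, epoch count and tolerance below are illustrative choices, and `loader` is assumed to yield (1 × 32 × 32 float tensor, class index) pairs.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=20, lr=1e-3, tol=0.05):
    """Train until the mean loss falls below the accepted tolerance."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        total, count = 0.0, 0
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
            total += loss.item() * images.size(0)
            count += images.size(0)
        if total / count < tol:   # error within the accepted range
            break
    return model
```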

Claims (1)

1. A method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network, characterized in that it comprises the following steps:
A. Image acquisition
First, a large number of images of the real driving environment are captured by a camera mounted at the front of the vehicle; the bottom three-fifths of each image is then cut out as the region of interest; finally, the cropped image is converted to grayscale;
B. Image preprocessing
B1. A Gamma correction function is constructed by superposing nonlinear functions, and the grayscale image obtained in step A is corrected with it. The function is:
G(x) = 1 + f1(x) + f2(x) + f3(x)    (1)
f1(x) = a·cos(πx/255)    (2)
f2(x) = (K(x) + b)·cos θ + x·sin θ    (3)
K(x) = λ·sin(4πx/255)    (4)
θ = arctan(−2b/255)    (5)
f3(x) = R(x)·cos(3πx/255)    (6)
R(x) = c·|2x/255 − 1|    (7)
In the formulas, x is the gray value of a pixel, G(x) is the Gamma value corresponding to that gray value, a ∈ (0, 1) is a weighting coefficient, b is the maximum variation range of f2(x), λ is the amplitude of K(x), θ is the deflection angle of K(x), and c is the amplitude of R(x); the coefficients satisfy a + b + c < 1;
The gray value after Gamma correction is computed as:
g(x) = 255·(x/255)^(1/G(x))    (8)
where g(x) is the gray value of the pixel after Gamma correction;
After Gamma correction, the grayscale image P is obtained;
B2. For the grayscale image P, the gray values of certain pixels are changed as follows:
In the image regions other than vehicles and road boundaries, pixels with gray value 0 are changed to 1 and pixels with gray value 255 are changed to 254; the gray values of pixels in vehicle regions are then set to 0 and those in road-boundary regions to 255, and the image obtained after these changes is the grayscale image Q; the pixels of Q thus fall into three classes: the first class, with gray value 0, represents vehicles; the second class, with gray value 255, represents road boundaries; the third class, with gray values other than 0 and 255, represents the road surface; each class is given a label: label "0" is assigned to first-class pixels, meaning "vehicle"; label "1" to second-class pixels, meaning "road boundary"; and label "2" to third-class pixels, meaning "road surface"; finally, the label of each pixel of Q is assigned to the corresponding pixel of P;
B3. The size of the grayscale image P is normalized:
B31. Along the height of the image, pixel rows are sampled at intervals and denoted by n; by actual measurement, the pixel width and height of the targets appearing at each sampled row n are obtained;
B32. The image region spanning pixel heights 0-32 is taken as the reference region, and the pixel width W and height H of the targets identified in it serve as the benchmark, i.e. the horizontal and vertical cropping scale factors of the reference region are set to 1; the widths and heights of the targets on the remaining sampled rows are divided by W and H respectively, yielding two groups of ratios, denoted Y and Z;
B33. Finally, the row indices n are fitted against the two groups of ratios Y and Z, giving two fitted lines:
Y = k1·n + b1    (9)
Z = k2·n + b2    (10)
where Y is the horizontal cropping scale factor of the image, Z the vertical cropping scale factor, n a pixel row of the image, k1 and k2 the slopes of the two fitted lines, and b1 and b2 their intercepts;
B34. With the horizontal and vertical cropping ratios of the reference region both set to 1, the reference region is cut into image samples of 32 × 32 pixels; as n increases, the horizontal and vertical cropping sizes obtained from formulas (9) and (10) increase correspondingly; cropping thus yields a series of image samples of different sizes, which are finally all rescaled to 32 × 32 pixels; the resulting 32 × 32 pixel images serve as the training samples of the convolutional neural network;
C. Training of the convolutional neural network
The typical convolutional neural network LeNet-5 consists of 8 layers; the input layer is a 32 × 32 pixel image; network layers C1, C3 and C5 are convolutional layers, layers S2 and S4 are subsampling layers, and layer F6 is a fully connected layer; the number of output-layer neurons equals the number of target classes to be recognized and changes with the actual application environment; each plane of a layer is a feature map, the set of neurons in that layer sharing the same weights; each neuron of a layer is connected only to a local receptive field of the previous layer;
The general form of a convolutional layer is as follows: the j-th neuron N_j^l of the l-th layer (a convolutional layer) can be expressed as:
N_j^l = f( Σ_{i∈M_j} N_i^{l−1} * k_ij^l + O_j^l )    (11)
where l ∈ {1, 3, 5} is the layer index, i, j = 1, 2, 3, … are natural numbers indexing the neurons of layers l−1 and l, N denotes a neuron, k is a convolution kernel, M_j is a selection of input feature maps, and O is a bias;
The j-th neuron N_j^l of the l-th layer (a subsampling layer) can be expressed as:
N_j^l = f( β_j^l · down(N_j^{l−1}) + O_j^l )    (12)
where l ∈ {2, 4} is the layer index, j = 1, 2, 3, … indexes the neurons of layers l−1 and l, N denotes a neuron, down(·) is the subsampling function, a summation over each n × n region of the previous layer's map, β is the weight of the subsampling layer, and O is a bias;
The number of output neurons of LeNet-5 is adjusted according to the actual application environment, and the network is then trained with the 32 × 32 pixel image samples obtained in step B; once the error between the network's output and the expected value lies within an acceptable range, the training yields a convolutional neural network that can be used for analyzing the trafficability ahead of the vehicle.
CN201310234126.0A 2013-06-09 2013-06-09 Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network Expired - Fee Related CN103279759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310234126.0A CN103279759B (en) 2013-06-09 2013-06-09 Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310234126.0A CN103279759B (en) 2013-06-09 2013-06-09 Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network

Publications (2)

Publication Number Publication Date
CN103279759A CN103279759A (en) 2013-09-04
CN103279759B true CN103279759B (en) 2016-06-01

Family

ID=49062274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310234126.0A Expired - Fee Related CN103279759B (en) 2013-06-09 2013-06-09 Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network

Country Status (1)

Country Link
CN (1) CN103279759B (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544506B (en) * 2013-10-12 2017-08-08 Tcl集团股份有限公司 A kind of image classification method and device based on convolutional neural networks
CN104680508B (en) * 2013-11-29 2018-07-03 华为技术有限公司 Convolutional neural networks and the target object detection method based on convolutional neural networks
CN104809426B (en) * 2014-01-27 2019-04-05 日本电气株式会社 Training method, target identification method and the device of convolutional neural networks
CN105224963B (en) * 2014-06-04 2019-06-07 华为技术有限公司 The method and terminal of changeable deep learning network structure
CN104063719B (en) * 2014-06-27 2018-01-26 深圳市赛为智能股份有限公司 Pedestrian detection method and device based on depth convolutional network
CN104077577A (en) * 2014-07-03 2014-10-01 浙江大学 Trademark detection method based on convolutional neural network
US20160026912A1 (en) * 2014-07-22 2016-01-28 Intel Corporation Weight-shifting mechanism for convolutional neural networks
CN104182756B (en) * 2014-09-05 2017-04-12 大连理工大学 Method for detecting barriers in front of vehicles on basis of monocular vision
CN107004138A (en) * 2014-12-17 2017-08-01 诺基亚技术有限公司 Utilize the object detection of neutral net
CN107430677B (en) 2015-03-20 2022-04-12 英特尔公司 Target identification based on improving binary convolution neural network characteristics
CN104850864A (en) * 2015-06-01 2015-08-19 深圳英智源智能系统有限公司 Unsupervised image recognition method based on convolutional neural network
CN104992179A (en) * 2015-06-23 2015-10-21 浙江大学 Fine-grained convolutional neural network-based clothes recommendation method
CN106874296B (en) * 2015-12-14 2021-06-04 阿里巴巴集团控股有限公司 Method and device for identifying style of commodity
CN114612877A (en) * 2016-01-05 2022-06-10 御眼视觉技术有限公司 System and method for estimating future path
CN105691381B (en) * 2016-03-10 2018-04-27 大连理工大学 A kind of four motorized wheels electric automobile stability control method and system
CN105975915B (en) * 2016-04-28 2019-05-21 大连理工大学 A kind of front vehicles parameter identification method based on multitask convolutional neural networks
US11631005B2 (en) 2016-05-31 2023-04-18 Nokia Technologies Oy Method and apparatus for detecting small objects with an enhanced deep neural network
CN106205126B (en) * 2016-08-12 2019-01-15 北京航空航天大学 Large-scale Traffic Network congestion prediction technique and device based on convolutional neural networks
CN106407931B (en) * 2016-09-19 2019-11-22 杭州电子科技大学 A kind of depth convolutional neural networks moving vehicle detection method
TWI638332B (en) * 2016-11-29 2018-10-11 財團法人車輛研究測試中心 Hierarchical object detection system with parallel architecture and method thereof
CN108122234B (en) * 2016-11-29 2021-05-04 北京市商汤科技开发有限公司 Convolutional neural network training and video processing method and device and electronic equipment
CN106599832A (en) * 2016-12-09 2017-04-26 重庆邮电大学 Method for detecting and recognizing various types of obstacles based on convolution neural network
US20180373992A1 (en) * 2017-06-26 2018-12-27 Futurewei Technologies, Inc. System and methods for object filtering and uniform representation for autonomous systems
CN107392189B (en) * 2017-09-05 2021-04-30 百度在线网络技术(北京)有限公司 Method and device for determining driving behavior of unmanned vehicle
CN107563332A (en) * 2017-09-05 2018-01-09 百度在线网络技术(北京)有限公司 For the method and apparatus for the driving behavior for determining unmanned vehicle
CN109726615A (en) * 2017-10-30 2019-05-07 北京京东尚科信息技术有限公司 A kind of recognition methods of road boundary and device
CN109753978B (en) * 2017-11-01 2023-02-17 腾讯科技(深圳)有限公司 Image classification method, device and computer readable storage medium
CN108921003A (en) * 2018-04-26 2018-11-30 东华大学 Unmanned plane obstacle detection method based on convolutional neural networks and morphological image
CN108596115A (en) * 2018-04-27 2018-09-28 济南浪潮高新科技投资发展有限公司 A kind of vehicle checking method, apparatus and system based on convolutional neural networks
CN109657643A (en) * 2018-12-29 2019-04-19 百度在线网络技术(北京)有限公司 A kind of image processing method and device
CN109961509B (en) * 2019-03-01 2020-05-05 北京三快在线科技有限公司 Three-dimensional map generation and model training method and device and electronic equipment
CN110084190B (en) * 2019-04-25 2024-02-06 南开大学 Real-time unstructured road detection method under severe illumination environment based on ANN
CN111222476B (en) * 2020-01-10 2023-06-06 北京百度网讯科技有限公司 Video time sequence action detection method and device, electronic equipment and storage medium
CN113325948B (en) * 2020-02-28 2023-02-07 华为技术有限公司 Air-isolated gesture adjusting method and terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007027452A1 (en) * 2005-08-31 2007-03-08 Microsoft Corporation Training convolutional neural networks on graphics processing units
CN101742341A (en) * 2010-01-14 2010-06-16 中山大学 Method and device for image processing
CN102750544A (en) * 2012-06-01 2012-10-24 浙江捷尚视觉科技有限公司 Detection system and detection method of rule-breaking driving that safety belt is not fastened and based on plate number recognition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007027452A1 (en) * 2005-08-31 2007-03-08 Microsoft Corporation Training convolutional neural networks on graphics processing units
CN101742341A (en) * 2010-01-14 2010-06-16 中山大学 Method and device for image processing
CN102750544A (en) * 2012-06-01 2012-10-24 浙江捷尚视觉科技有限公司 Detection system and detection method of rule-breaking driving that safety belt is not fastened and based on plate number recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Gamma Correction in Image Processing; 彭国福; Electronic Engineer (电子工程师); 2006-04-06; Vol. 32, No. 2; full text *

Also Published As

Publication number Publication date
CN103279759A (en) 2013-09-04

Similar Documents

Publication Publication Date Title
CN103279759B (en) Method for analyzing the trafficability ahead of a vehicle based on a convolutional neural network
Serna et al. Classification of traffic signs: The European dataset
CN103902976B (en) A kind of pedestrian detection method based on infrared image
CN105160309B (en) Three lanes detection method based on morphological image segmentation and region growing
CN103034863B (en) The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN103729853B (en) High score remote sensing image building under three-dimension GIS auxiliary damages detection method
CN105528595A (en) Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
CN105069472A (en) Vehicle detection method based on convolutional neural network self-adaption
CN103413151A (en) Hyperspectral image classification method based on image regular low-rank expression dimensionality reduction
CN103971123A (en) Hyperspectral image classification method based on linear regression Fisher discrimination dictionary learning (LRFDDL)
CN110532961B (en) Semantic traffic light detection method based on multi-scale attention mechanism network model
CN109389136A (en) Classifier training method
CN103500329B (en) Street lamp automatic extraction method based on vehicle-mounted mobile laser scanning point cloud
CN103714343A (en) Method for splicing and homogenizing road face images collected by double-linear-array cameras under linear laser illumination condition
CN104599292A (en) Noise-resistant moving target detection algorithm based on low rank matrix
CN105005989A (en) Vehicle target segmentation method under weak contrast
CN106372666A (en) Target identification method and device
CN104182985A (en) Remote sensing image change detection method
Yuan et al. Learning to count buildings in diverse aerial scenes
CN112287983B (en) Remote sensing image target extraction system and method based on deep learning
CN104361351A (en) Synthetic aperture radar (SAR) image classification method on basis of range statistics similarity
CN104809724A (en) Automatic precise registration method for multiband remote sensing images
CN103679205A (en) Preceding car detection method based on shadow hypothesis and layered HOG (histogram of oriented gradient) symmetric characteristic verification
CN111666909A (en) Suspected contaminated site space identification method based on object-oriented and deep learning
CN109492700A (en) A kind of Target under Complicated Background recognition methods based on multidimensional information fusion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160601

Termination date: 20190609
