CN104616032A - Multi-camera system target matching method based on deep convolutional neural network - Google Patents


Info

Publication number
CN104616032A
Authority
CN
China
Prior art keywords
width
image
target
represent
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510047118.4A
Other languages
Chinese (zh)
Other versions
CN104616032B (en)
Inventor
王慧燕
王勋
何肖爽
陈卫刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University
Priority to CN201510047118.4A
Publication of CN104616032A
Application granted
Publication of CN104616032B
Expired - Fee Related
Anticipated expiration


Abstract

Disclosed is a target matching method for multi-camera systems based on a deep convolutional neural network. The method initializes multiple convolution kernels with the locality preserving projection method, downsamples images by max pooling, and, through layer-by-layer feature transformation, extracts histogram features of greater robustness and representativeness. Classification and identification are performed with a multi-class support vector machine (SVM) classifier. When a target enters one camera's field of view from another's, features are extracted from the target and the corresponding target label is assigned. The method achieves accurate identification of targets in a multi-camera cooperative monitoring area and can be used for target handoff, tracking, and the like.

Description

Multi-camera system target matching method based on deep convolutional neural networks
Technical field
The invention belongs to the field of intelligent video surveillance in computer vision and relates to a target matching method based on deep convolutional neural networks for multi-camera cooperative video surveillance systems.
Background art
In large-scale video surveillance sites such as airports, subway stations, and squares, target matching across cameras is a key step in tracking targets with a multi-camera cooperative surveillance system. In large-scale scenes, camera calibration is difficult and complex, and the spatial relationships, temporal relationships, and time offsets between cameras are hard to reason about; therefore the multi-camera target matching methods in wide use today are mostly feature-based, and the effectiveness of feature selection directly affects the accuracy of the matching result. Extracting robust features that effectively characterize the target, however, remains a difficult problem. Commonly used features include color and texture, but such features rarely stay robust across all surveillance scenes. We therefore propose a target matching method based on deep learning, which adaptively learns features from video frame sequences to achieve accurate target matching. Compared with a traditional neural network, a deep neural network overcomes the limitation of a small number of layers: by transforming features layer by layer it obtains more abstract feature representations, target classification is realized at the final output layer of the network, and the speed and efficiency of target matching are substantially improved.
Summary of the invention
The present invention overcomes the above deficiencies of the prior art and provides a target matching method between multiple cameras based on deep learning. The concrete steps of the method comprise:
(1) Preprocessing of target images: extract n target images from the multi-camera domain and divide them among m labels; resize all images to a uniform h × w with the bicubic interpolation algorithm, where h is the image height and w the image width; apply simple scaling to the pixel values of the image samples so that the final pixel values all fall within [0, 1]; store the labels of the n images as n × 1 data, each label taking a value in [1, …, m];
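The preprocessing of step (1) can be sketched as follows. The bicubic resize itself would be done beforehand with an image library (e.g. PIL's `Image.resize(..., Image.BICUBIC)`); this sketch assumes the images are already resized to h × w and shows only the pixel scaling and the n × 1 label storage:

```python
import numpy as np

def scale_to_unit(images):
    """Scale 8-bit pixel values into [0, 1], as in step (1).
    Resizing to h x w is assumed done beforehand, e.g. with
    PIL's Image.resize(..., Image.BICUBIC)."""
    return np.asarray(images, dtype=np.float64) / 255.0

def store_labels(raw_labels, m):
    """Store the labels of the n images as an n x 1 array with
    values in [1, ..., m]."""
    y = np.asarray(raw_labels, dtype=np.int64).reshape(-1, 1)
    assert 1 <= y.min() and y.max() <= m
    return y
```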
(2) Feature extraction based on the deep convolutional neural network:
(a) From the target images obtained in step (1), select n_t training samples as the perception nodes of the first-layer input layer of the convolutional neural network, where X_i, i = 1, 2, …, n_t denotes the i-th image;
(b) The filter applied for target image feature extraction is a convolution kernel constructed with the locality preserving projection method; the concrete construction is as follows:
Divide each image X_i into blocks of size p_1 × p_2; the blocks of X_i are x_{i,j}, j = 1, …, hw, where x_{i,j} denotes the j-th block vector of X_i. Subtract the block mean from each block to obtain the mean-removed blocks x̄_{i,j}, j = 1, …, hw. Apply the same processing to all input images X to obtain the mean-removed sample matrix X̄.
Compute eigenvectors from the generalized eigenproblem X L X^T a = λ X D X^T a, where a is an eigenvector, λ the corresponding eigenvalue, and D a diagonal matrix whose entries are the row (or column) sums of the weight matrix W. W is an n_t × n_t sparse matrix in which W_{ij} denotes the connection weight between samples i and j: compute the Euclidean distance between all samples and, for each sample, find its k_nearest nearest samples; if sample j is among the k_nearest nearest neighbors of sample i, or sample i is among the k_nearest nearest neighbors of sample j, set W_{ij} to the corresponding connection weight; otherwise W_{ij} = 0. With D_{ii} = Σ_j W_{ji}, L = D − W is the Laplacian matrix. Sort the computed eigenvectors by eigenvalue and take the first k_1 of them; letting V_i^1 = a_{i−1}, i = 1, 2, …, k_1, the set V^1 is the extracted bank of convolution kernels.
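A minimal numpy sketch of this kernel construction: each mean-removed, vectorized sample is one column of X (the patent builds X from p_1 × p_2 patches), the neighbor weights are binary, and the eigenvectors of smallest eigenvalue are kept, as in standard locality preserving projection; these concrete choices are assumptions where the patent is silent:

```python
import numpy as np

def lpp_kernels(X, k_nearest, k1):
    """Locality-preserving-projection initialization of k1 convolution
    kernels (step b). X: one mean-removed, vectorized sample per
    column, shape (d, n). Binary kNN weights and smallest-eigenvalue
    ordering are assumed."""
    d, n = X.shape
    # pairwise Euclidean distances between samples
    dist = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k_nearest + 1]   # skip self
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)            # symmetric: i in kNN(j) or j in kNN(i)
    D = np.diag(W.sum(axis=1))        # degree matrix, D_ii = sum_j W_ji
    L = D - W                         # graph Laplacian
    # generalized eigenproblem  X L X^T a = lambda X D X^T a
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-8 * np.eye(d)   # small ridge for invertibility
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)        # smallest eigenvalues first (LPP)
    return vecs[:, order[:k1]].real      # d x k1 kernel bank
```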
Convolve each image X̄_i with the kernels V^1, i.e. Y_{i,j} = X̄_i * V_j^1, i = 1, …, n_t, j = 1, …, k_1; this convolutional layer then produces n_t·k_1 output feature maps, denoted Y.
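The convolution of every image with every kernel can be sketched as below; 'valid' borders and correlation (no kernel flip) are assumptions, since the patent does not specify border handling:

```python
import numpy as np

def conv_layer(images, kernels):
    """First convolutional layer: n_t images x k1 kernels ->
    n_t*k1 feature maps. kernels has shape (p1, p2, k1).
    'Valid' correlation is assumed (kernel flip omitted)."""
    p1, p2, k1 = kernels.shape
    maps = []
    for img in images:                               # img: (h, w)
        h, w = img.shape
        oh, ow = h - p1 + 1, w - p2 + 1
        # im2col: every p1 x p2 patch flattened into a row
        cols = np.stack([img[i:i + p1, j:j + p2].ravel()
                         for i in range(oh) for j in range(ow)])
        for k in range(k1):
            maps.append((cols @ kernels[:, :, k].ravel()).reshape(oh, ow))
    return maps                                      # list of n_t*k1 maps
```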
(c) Downsample the feature maps Y obtained above by max pooling. With a sampling window of size s_1 × s_1, this yields n_t·k_1 output feature maps Z,
where the i-th output feature map has entries Z^i_{j,k} = max_{0 ≤ u,v < s_1} { Y^i_{j·s_1+u, k·s_1+v} }, Z^i_{j,k} denoting row j, column k of the i-th output feature map, i = 1, …, n_t·k_1, j = 1, …, (h − s_1)/u + 1, k = 1, …, (w − s_1)/v + 1; u, v denote the sampling strides, Y^i denotes the i-th input map, and max{·} is the maximum function. In addition, the algorithm uses non-overlapping sampling, i.e. u = v = s_1.
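With the non-overlapping stride u = v = s_1, the pooling above reduces to a reshape, as in this sketch (dropping rows or columns that do not fill a whole window is an assumption):

```python
import numpy as np

def max_pool(fmap, s):
    """Non-overlapping s x s max pooling (step c):
    Z[j, k] = max over the s x s window of Y starting at (j*s, k*s).
    Rows/columns that do not fill a whole window are dropped (assumed)."""
    h, w = fmap.shape
    hs, ws = h // s, w // s
    trimmed = fmap[:hs * s, :ws * s]
    return trimmed.reshape(hs, s, ws, s).max(axis=(1, 3))
```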
(d) Following steps similar to (b), take the feature maps Z obtained in step (c) as the input of this convolutional layer: divide each map into blocks of size p_1 × p_2 and remove the mean from the block data of each map to obtain the input,
where the i-th mean-removed input map Z̄_i, i = 1, …, n_t·k_1, has block vectors z̄_{i,j}, j = 1, …, hw, the j-th mean-removed block vector of the i-th map. Construct the weight matrix W and compute eigenvectors from Z L Z^T a = λ Z D Z^T a; after sorting by eigenvalue, take the first k_2 eigenvectors as the selected kernels V^2, where V_i^2, i = 1, …, k_2, denotes the i-th kernel in V^2. Then convolve each map with the kernels V^2, producing at this convolutional layer n_t·k_1·k_2 output feature maps:
U_{i,j}, i = 1, …, n_t·k_1, j = 1, …, k_2.
(e) For the feature maps U obtained above, follow steps similar to (c) and downsample by max pooling. With a sampling window of size s_2 × s_2, this yields n_t·k_1·k_2 output feature maps O,
where the i-th output feature map has entries O^i_{j,k} = max_{0 ≤ u,v < s_2} { U^i_{j·s_2+u, k·s_2+v} }, O^i_{j,k} denoting row j, column k of the i-th output feature map, i = 1, …, n_t·k_1·k_2, j = 1, …, (h/s_1 − s_2)/u + 1, k = 1, …, (w/s_1 − s_2)/v + 1;
u, v denote the sampling strides, U^i denotes the i-th input map, and max{·} is the maximum function. In addition, the algorithm uses non-overlapping sampling, i.e. u = v = s_2.
(f) Let P_i = (O_{(i−1)k_2+1}, …, O_{(i−1)k_2+k_2}), i = 1, …, n_t·k_1, i.e. take every k_2 maps in O as one group. Post-process each group by Heaviside binary quantization into a decimal value, so that every k_2 maps are converted into one image T_i, i = 1, …, n_t·k_1, where H(·) denotes the Heaviside function, P_i^j the j-th map in P_i, and T_i the decimal-valued result, taking values in [0, 2^{k_2} − 1]. Then take every k_1 images T_i as one group: divide each image into B blocks, compute the histogram feature of each block region, and concatenate the B block histograms into a row vector, defined as f_{l,s}, l = 1, …, n_t, s = 1, …, k_1. For each image X_l in (a), the feature vector finally extracted by the convolutional neural network is f_l, l = 1, …, n_t.
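Step (f) can be sketched as follows; the 2^{j−1} bit weights for combining the k_2 binarized maps follow the PCANet convention, and the equal-size block split is an assumption, since the patent spells out neither:

```python
import numpy as np

def hash_and_histogram(maps, k2, B):
    """Step (f): binarize each group of k2 pooled maps with the
    Heaviside function, pack them into one decimal-valued image
    T_i in [0, 2**k2 - 1], split T_i into B blocks, and concatenate
    per-block histograms into the feature vector. Bit weights follow
    the PCANet convention (assumed); blocks are an equal split of
    the flattened image (assumed)."""
    bins = 2 ** k2
    features = []
    for g in range(0, len(maps), k2):
        group = maps[g:g + k2]
        T = np.zeros_like(group[0])
        for j, M in enumerate(group):
            T += (2 ** j) * (M > 0)            # Heaviside + bit weight
        blocks = np.array_split(T.ravel(), B)  # B blocks per image
        hist = np.concatenate(
            [np.histogram(b, bins=bins, range=(0, bins))[0] for b in blocks])
        features.append(hist)
    return np.concatenate(features)            # feature vector f_l
```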
(3) Classification and identification: take the features extracted above as input and the target label corresponding to each feature vector as output, and build the target classifier model with a multi-class support vector machine (SVM). Based on this classifier model, targets in the fields of view of different cameras can be labeled and classified, for target handoff, tracking, and the like.
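As an illustration of step (3), the sketch below trains a one-vs-rest linear SVM by subgradient descent on the hinge loss; the patent only states that a multi-class SVM is used, so the linear kernel and the solver here are stand-ins:

```python
import numpy as np

def train_ovr_linear_svm(X, y, m, epochs=200, lr=0.01, C=1.0):
    """One-vs-rest linear SVM trained by full-batch subgradient
    descent on the regularized hinge loss. Labels y take values
    1..m, matching step (1). Solver and kernel are illustrative
    stand-ins for the multi-class SVM of step (3)."""
    n, d = X.shape
    Wgt = np.zeros((m, d))
    b = np.zeros(m)
    for c in range(m):
        t = np.where(y == c + 1, 1.0, -1.0)       # one-vs-rest targets
        for _ in range(epochs):
            margins = t * (X @ Wgt[c] + b[c])
            mask = margins < 1                    # margin-violating samples
            grad_w = Wgt[c] - C * (t[mask, None] * X[mask]).sum(axis=0)
            grad_b = -C * t[mask].sum()
            Wgt[c] -= lr * grad_w
            b[c] -= lr * grad_b
    return Wgt, b

def predict(Wgt, b, X):
    """Assign each row of X the label (1..m) of the highest score."""
    return np.argmax(X @ Wgt.T + b, axis=1) + 1
```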
The beneficial effects of the invention are:
Because the invention initializes the convolution kernels with the locality preserving projection method rather than at random, the salient features of the target image are accurately retained; the extracted histogram features are thus invariant to scale changes and rotation of the target, adapt well to illumination changes in the scene, and greatly increase the target recognition rate. The invention downsamples the images convolved with the locality-preserving-projection kernels, which effectively reduces the feature dimensionality, avoids the curse of dimensionality, and substantially shortens target recognition time; and it convolves with multiple banks of kernels and superimposes the features, effectively eliminating the drop in recognition rate caused by dimensionality reduction.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Embodiment
The method of the invention comprises two parts: extraction of target features, and classification and identification of targets. Feature extraction uses deep learning: by building a neural network with many hidden layers, sample features are transformed layer by layer, mapping the sample's representation in the original space to a new feature space and learning more useful features; these features then serve as the input of the multi-class SVM classifier for target classification and identification, ultimately improving the accuracy of classification or prediction. Figure 1 shows the implementation block diagram of the algorithm; the concrete steps are as follows:
(1) Preprocessing of target images: extract n target images from the multi-camera domain and divide them among m labels; resize all images to a uniform h × w with the bicubic interpolation algorithm, where h is the image height and w the image width; apply simple scaling to the pixel values of the image samples so that the final pixel values all fall within [0, 1]; store the labels of the n images as n × 1 data, each label taking a value in [1, …, m]; 001 denotes the preprocessed images;
(2) Feature extraction based on the deep convolutional neural network:
(a) From 001, select n_t training samples as the perception nodes of the first-layer input layer of the convolutional neural network, shown as 002, where X_i, i = 1, 2, …, n_t denotes the i-th image;
(b) The filter applied for target image feature extraction is a convolution kernel constructed with the locality preserving projection method; the concrete construction is as follows:
Divide each image X_i in 002 into blocks of size p_1 × p_2; the blocks of X_i are x_{i,j}, j = 1, …, hw, where x_{i,j} denotes the j-th block vector of X_i. Subtract the block mean from each block to obtain the mean-removed blocks x̄_{i,j}, j = 1, …, hw. Apply the same processing to all input images X to obtain the mean-removed sample matrix X̄, shown as 003.
Compute eigenvectors from the generalized eigenproblem X L X^T a = λ X D X^T a, where a is an eigenvector, λ the corresponding eigenvalue, and D a diagonal matrix whose entries are the row (or column) sums of the weight matrix W. W is an n_t × n_t sparse matrix in which W_{ij} denotes the connection weight between samples i and j: compute the Euclidean distance between all samples and, for each sample, find its k_nearest nearest samples; if sample j is among the k_nearest nearest neighbors of sample i, or sample i is among the k_nearest nearest neighbors of sample j, set W_{ij} to the corresponding connection weight; otherwise W_{ij} = 0. With D_{ii} = Σ_j W_{ji}, L = D − W is the Laplacian matrix. Sort the computed eigenvectors by eigenvalue and take the first k_1 of them; letting V_i^1 = a_{i−1}, i = 1, 2, …, k_1, the set V^1 is the extracted bank of convolution kernels.
Convolve each image in 003 with the kernels V^1, Y_{i,j} = X̄_i * V_j^1, i = 1, …, n_t, j = 1, …, k_1; this convolutional layer then produces n_t·k_1 output feature maps, shown as 004.
(c) For the feature maps Y shown as 004, downsample by max pooling. With a sampling window of size s_1 × s_1, this yields n_t·k_1 output feature maps,
shown as 005, where the i-th output feature map has entries Z^i_{j,k} = max_{0 ≤ u,v < s_1} { Y^i_{j·s_1+u, k·s_1+v} }, Z^i_{j,k} denoting row j, column k of the i-th output feature map, i = 1, …, n_t·k_1, j = 1, …, (h − s_1)/u + 1, k = 1, …, (w − s_1)/v + 1; u, v denote the sampling strides, Y^i denotes the i-th input map, and max{·} is the maximum function. In addition, the algorithm uses non-overlapping sampling, i.e. u = v = s_1.
(d) Following steps similar to (b), take the feature maps Z obtained in step (c), shown as 005, as the input of this convolutional layer: divide each map into blocks of size p_1 × p_2 and remove the mean from the block data of each map to obtain the input,
shown as 006, where the i-th mean-removed input map Z̄_i, i = 1, …, n_t·k_1, has block vectors z̄_{i,j}, j = 1, …, hw, the j-th mean-removed block vector of the i-th map. Construct the weight matrix W and compute eigenvectors from Z L Z^T a = λ Z D Z^T a; after sorting by eigenvalue, take the first k_2 eigenvectors as the selected kernels V^2, where V_i^2, i = 1, …, k_2, denotes the i-th kernel in V^2. Then convolve each map with the kernels V^2, producing at this convolutional layer n_t·k_1·k_2 output feature maps:
shown as 007, U_{i,j}, i = 1, …, n_t·k_1, j = 1, …, k_2.
(e) For the feature maps U shown as 007, follow steps similar to (c) and downsample by max pooling. With a sampling window of size s_2 × s_2, this yields n_t·k_1·k_2 output feature maps,
shown as 008, where the i-th output feature map has entries O^i_{j,k} = max_{0 ≤ u,v < s_2} { U^i_{j·s_2+u, k·s_2+v} }, O^i_{j,k} denoting row j, column k of the i-th output feature map, i = 1, …, n_t·k_1·k_2, j = 1, …, (h/s_1 − s_2)/u + 1, k = 1, …, (w/s_1 − s_2)/v + 1;
u, v denote the sampling strides, U^i denotes the i-th input map, and max{·} is the maximum function. In addition, the algorithm uses non-overlapping sampling, i.e. u = v = s_2.
(f) Let P_i = (O_{(i−1)k_2+1}, …, O_{(i−1)k_2+k_2}), i = 1, …, n_t·k_1, i.e. take every k_2 maps in the O shown as 008 as one group. Post-process each group by Heaviside binary quantization into a decimal value, so that every k_2 maps are converted into one image T_i, i = 1, …, n_t·k_1, where H(·) denotes the Heaviside function, P_i^j the j-th map in P_i, and T_i the decimal-valued result, taking values in [0, 2^{k_2} − 1]. Then take every k_1 images T_i as one group: divide each image into B blocks, compute the histogram feature of each block region, and concatenate the B block histograms into a row vector, defined as f_{l,s}, l = 1, …, n_t, s = 1, …, k_1. For each image X_l in 002, the feature vector finally extracted by the convolutional neural network is f_l, l = 1, …, n_t.
(3) Classification and identification: take the features extracted above as input and the target label corresponding to each feature vector as output, and build the target classifier model with a multi-class support vector machine (SVM). Based on this classifier model, targets in the fields of view of different cameras can be labeled and classified, for target handoff, tracking, and the like.
The content described in the embodiments of this specification merely enumerates realization forms of the inventive concept; the protection scope of the present invention should not be regarded as limited to the concrete forms stated in the embodiments, and also covers equivalent technical means that those skilled in the art can conceive according to the inventive concept.

Claims (1)

1. A multi-camera system target matching method based on deep convolutional neural networks, characterized in that it comprises:
(1) Preprocessing of target images: extract n target images from the multi-camera domain and divide them among m labels; resize all images to a uniform h × w with the bicubic interpolation algorithm, where h is the image height and w the image width; apply simple scaling to the pixel values of the image samples so that the final pixel values all fall within [0, 1]; store the labels of the n images as n × 1 data, each label taking a value in [1, …, m];
(2) Feature extraction based on the deep convolutional neural network:
(a) From the target images obtained in step (1), select n_t training samples as the perception nodes of the first-layer input layer of the convolutional neural network, where X_i, i = 1, 2, …, n_t denotes the i-th image;
(b) The filter applied for target image feature extraction is a convolution kernel constructed with the locality preserving projection method; the concrete construction is as follows:
divide each image X_i into blocks of size p_1 × p_2; the blocks of X_i are x_{i,j}, j = 1, …, hw, where x_{i,j} denotes the j-th block vector of X_i; subtract the block mean from each block to obtain the mean-removed blocks x̄_{i,j}, j = 1, …, hw; apply the same processing to all input images X to obtain the mean-removed sample matrix X̄;
compute eigenvectors from the generalized eigenproblem X L X^T a = λ X D X^T a, where a is an eigenvector, λ the corresponding eigenvalue, and D a diagonal matrix whose entries are the row (or column) sums of the weight matrix W; W is an n_t × n_t sparse matrix in which W_{ij} denotes the connection weight between samples i and j: compute the Euclidean distance between all samples and, for each sample, find its k_nearest nearest samples; if sample j is among the k_nearest nearest neighbors of sample i, or sample i is among the k_nearest nearest neighbors of sample j, set W_{ij} to the corresponding connection weight; otherwise W_{ij} = 0; with D_{ii} = Σ_j W_{ji}, L = D − W is the Laplacian matrix; sort the computed eigenvectors by eigenvalue and take the first k_1 of them; letting V_i^1 = a_{i−1}, i = 1, 2, …, k_1, the set V^1 is the extracted bank of convolution kernels;
convolve each image X̄_i with the kernels V^1, i.e. Y_{i,j} = X̄_i * V_j^1, i = 1, …, n_t, j = 1, …, k_1; this convolutional layer then produces n_t·k_1 output feature maps, denoted Y;
(c) Downsample the feature maps Y obtained above by max pooling; with a sampling window of size s_1 × s_1, this yields n_t·k_1 output feature maps Z, where the i-th output feature map has entries Z^i_{j,k} = max_{0 ≤ u,v < s_1} { Y^i_{j·s_1+u, k·s_1+v} }, Z^i_{j,k} denoting row j, column k of the i-th output feature map, i = 1, …, n_t·k_1, j = 1, …, (h − s_1)/u + 1, k = 1, …, (w − s_1)/v + 1; u, v denote the sampling strides, Y^i denotes the i-th input map, and max{·} is the maximum function; the algorithm uses non-overlapping sampling, i.e. u = v = s_1;
(d) Following steps similar to (b), take the feature maps Z obtained in step (c) as the input of this convolutional layer: divide each map into blocks of size p_1 × p_2 and remove the mean from the block data of each map to obtain the input, where the i-th mean-removed input map Z̄_i, i = 1, …, n_t·k_1, has block vectors z̄_{i,j}, j = 1, …, hw, the j-th mean-removed block vector of the i-th map; construct the weight matrix W and compute eigenvectors from Z L Z^T a = λ Z D Z^T a; after sorting by eigenvalue, take the first k_2 eigenvectors as the selected kernels V^2, where V_i^2, i = 1, …, k_2, denotes the i-th kernel in V^2; then convolve each map with the kernels V^2, producing at this convolutional layer n_t·k_1·k_2 output feature maps U_{i,j}, i = 1, …, n_t·k_1, j = 1, …, k_2;
(e) For the feature maps U obtained above, follow steps similar to (c) and downsample by max pooling; with a sampling window of size s_2 × s_2, this yields n_t·k_1·k_2 output feature maps O, where the i-th output feature map has entries O^i_{j,k} = max_{0 ≤ u,v < s_2} { U^i_{j·s_2+u, k·s_2+v} }, O^i_{j,k} denoting row j, column k of the i-th output feature map, i = 1, …, n_t·k_1·k_2, j = 1, …, (h/s_1 − s_2)/u + 1, k = 1, …, (w/s_1 − s_2)/v + 1; u, v denote the sampling strides, U^i denotes the i-th input map, and max{·} is the maximum function; the algorithm uses non-overlapping sampling, i.e. u = v = s_2;
(f) Let P_i = (O_{(i−1)k_2+1}, …, O_{(i−1)k_2+k_2}), i = 1, …, n_t·k_1, i.e. take every k_2 maps in O as one group; post-process each group by Heaviside binary quantization into a decimal value, so that every k_2 maps are converted into one image T_i, i = 1, …, n_t·k_1, where H(·) denotes the Heaviside function, P_i^j the j-th map in P_i, and T_i the decimal-valued result, taking values in [0, 2^{k_2} − 1]; then take every k_1 images T_i as one group: divide each image into B blocks, compute the histogram feature of each block region, and concatenate the B block histograms into a row vector, defined as f_{l,s}, l = 1, …, n_t, s = 1, …, k_1; for each image X_l in (a), the feature vector finally extracted by the convolutional neural network is f_l, l = 1, …, n_t;
(3) Classification and identification: take the features extracted above as input and the target label corresponding to each feature vector as output, and build the target classifier model with a multi-class support vector machine (SVM). Based on this classifier model, targets in the fields of view of different cameras can be labeled and classified, for target handoff, tracking, and the like.
CN201510047118.4A 2015-01-30 2015-01-30 Multi-camera system target matching method based on deep convolutional neural networks Expired - Fee Related CN104616032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510047118.4A CN104616032B (en) 2015-01-30 2015-01-30 Multi-camera system target matching method based on deep convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510047118.4A CN104616032B (en) 2015-01-30 2015-01-30 Multi-camera system target matching method based on deep convolutional neural networks

Publications (2)

Publication Number Publication Date
CN104616032A true CN104616032A (en) 2015-05-13
CN104616032B CN104616032B (en) 2018-02-09

Family

ID=53150469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510047118.4A Expired - Fee Related CN104616032B (en) 2015-01-30 2015-01-30 Multi-camera system target matching method based on deep convolutional neural networks

Country Status (1)

Country Link
CN (1) CN104616032B (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850836A (en) * 2015-05-15 2015-08-19 浙江大学 Automatic insect image identification method based on depth convolutional neural network
CN105138973A (en) * 2015-08-11 2015-12-09 北京天诚盛业科技有限公司 Face authentication method and device
CN105320961A (en) * 2015-10-16 2016-02-10 重庆邮电大学 Handwriting numeral recognition method based on convolutional neural network and support vector machine
CN105354560A (en) * 2015-11-25 2016-02-24 小米科技有限责任公司 Fingerprint identification method and device
CN105373796A (en) * 2015-10-23 2016-03-02 北京天诚盛业科技有限公司 Operating method and device for activating image and application thereof
CN105719313A (en) * 2016-01-18 2016-06-29 中国石油大学(华东) Moving object tracking method based on intelligent real-time video cloud
CN106203318A (en) * 2016-06-29 2016-12-07 浙江工商大学 The camera network pedestrian recognition method merged based on multi-level depth characteristic
CN106227851A (en) * 2016-07-29 2016-12-14 汤平 Based on the image search method searched for by depth of seam division that degree of depth convolutional neural networks is end-to-end
WO2017004803A1 (en) * 2015-07-08 2017-01-12 Xiaoou Tang An apparatus and a method for semantic image labeling
CN106407891A (en) * 2016-08-26 2017-02-15 东方网力科技股份有限公司 Target matching method based on convolutional neural network and device
CN106504190A (en) * 2016-12-29 2017-03-15 浙江工商大学 A kind of three-dimensional video-frequency generation method based on 3D convolutional neural networks
CN106611177A (en) * 2015-10-27 2017-05-03 北京航天长峰科技工业集团有限公司 Big data-based image classification method
CN106709441A (en) * 2016-12-16 2017-05-24 北京工业大学 Convolution theorem based face verification accelerating method
CN106980880A (en) * 2017-03-06 2017-07-25 北京小米移动软件有限公司 The method and device of images match
CN106991428A (en) * 2017-02-24 2017-07-28 中国科学院合肥物质科学研究院 Insect image-recognizing method based on adaptive pool model
CN106991396A (en) * 2017-04-01 2017-07-28 南京云创大数据科技股份有限公司 A kind of target relay track algorithm based on wisdom street lamp companion
CN107092935A (en) * 2017-04-26 2017-08-25 国家电网公司 A kind of assets alteration detection method
CN107393523A (en) * 2017-07-28 2017-11-24 深圳市盛路物联通讯技术有限公司 A kind of noise monitoring method and system
WO2017206156A1 (en) * 2016-06-03 2017-12-07 Intel Corporation Look-up convolutional layer in convolutional neural network
CN107844795A (en) * 2017-11-18 2018-03-27 中国人民解放军陆军工程大学 Convolutional neural networks feature extracting method based on principal component analysis
WO2018076122A1 (en) * 2016-10-31 2018-05-03 Twenty Billion Neurons GmbH System and method for improving the prediction accuracy of a neural network
WO2018145308A1 (en) * 2017-02-13 2018-08-16 Nokia Technologies Oy Filter reusing mechanism for constructing robust deep convolutional neural network
CN108572183A (en) * 2017-03-08 2018-09-25 清华大学 The method for checking equipment and dividing vehicle image
CN109146921A (en) * 2018-07-02 2019-01-04 华中科技大学 A kind of pedestrian target tracking based on deep learning
CN110320452A (en) * 2019-06-21 2019-10-11 河南理工大学 A kind of series fault arc detection method
CN110892693A (en) * 2017-05-11 2020-03-17 维尔蒂姆知识产权有限公司 System and method for biometric identification

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104036323A (en) * 2014-06-26 2014-09-10 叶茂 Vehicle detection method based on convolutional neural network
CN104077613A (en) * 2014-07-16 2014-10-01 电子科技大学 Crowd density estimation method based on cascaded multilevel convolution neural network

Non-Patent Citations (6)

Title
HONG LI等: "Infrared moving target detection and tracking based on tensor locality preserving projection", 《INFRARED PHYSICS & TECHNOLOGY》 *
JIWEN LU等: "Palmprint Recognition via Locality Preserving Projections and Extreme Learning Machine Neural Network", 《INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING》 *
KAI KANG等: "Fully Convolutional Neural Networks for Crowd Segmentation", 《COMPUTER SCIENCE》 *
MANDAR CHAUDHARY等: "Similar looking Gujarati printed character recognition using Locality Preserving Projection and Artificial Neural Networks", 《2012 THIRD INTERNATIONAL CONFERENCE ON EMERGING APPLICATIONS OF INFORMATION TECHNOLOGY》 *
SONG SHAOYUN et al.: "RBF Neural Network Learning Algorithm Based on Error Projection and Local Projection", 《JOURNAL OF YUXI NORMAL UNIVERSITY》 *
WANG RONGXIU et al.: "DOA Estimation Based on Locality Preserving Projection and RBF Neural Network", 《SCIENCE TECHNOLOGY AND ENGINEERING》 *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850836A (en) * 2015-05-15 2015-08-19 浙江大学 Automatic insect image identification method based on deep convolutional neural network
CN104850836B (en) * 2015-05-15 2018-04-10 浙江大学 Automatic insect image identification method based on deep convolutional neural networks
CN107851174B (en) * 2015-07-08 2021-06-01 北京市商汤科技开发有限公司 Image semantic annotation equipment and method, and generation method and system of image semantic annotation model
US10699170B2 (en) 2015-07-08 2020-06-30 Beijing Sensetime Technology Development Co., Ltd. Apparatuses and methods for semantic image labeling
WO2017004803A1 (en) * 2015-07-08 2017-01-12 Xiaoou Tang An apparatus and a method for semantic image labeling
CN107851174A (en) * 2015-07-08 2018-03-27 北京市商汤科技开发有限公司 Apparatus and method for semantic image labeling
CN105138973A (en) * 2015-08-11 2015-12-09 北京天诚盛业科技有限公司 Face authentication method and device
CN105138973B (en) * 2015-08-11 2018-11-09 北京天诚盛业科技有限公司 Face authentication method and apparatus
CN105320961A (en) * 2015-10-16 2016-02-10 重庆邮电大学 Handwriting numeral recognition method based on convolutional neural network and support vector machine
CN105373796A (en) * 2015-10-23 2016-03-02 北京天诚盛业科技有限公司 Operating method and device for activating image and application thereof
CN105373796B (en) * 2015-10-23 2019-01-25 河南眼神科技有限公司 Image activation operation method and apparatus, and application thereof
CN106611177A (en) * 2015-10-27 2017-05-03 北京航天长峰科技工业集团有限公司 Big data-based image classification method
CN105354560A (en) * 2015-11-25 2016-02-24 小米科技有限责任公司 Fingerprint identification method and device
CN105719313A (en) * 2016-01-18 2016-06-29 中国石油大学(华东) Moving object tracking method based on intelligent real-time video cloud
CN105719313B (en) * 2016-01-18 2018-10-23 青岛邃智信息科技有限公司 A moving target tracking method based on intelligent real-time video cloud
US11048970B2 (en) 2016-06-03 2021-06-29 Intel Corporation Look-up convolutional layer in convolutional neural network
WO2017206156A1 (en) * 2016-06-03 2017-12-07 Intel Corporation Look-up convolutional layer in convolutional neural network
CN106203318B (en) * 2016-06-29 2019-06-11 浙江工商大学 Camera network pedestrian recognition method based on multi-level deep feature fusion
CN106203318A (en) * 2016-06-29 2016-12-07 浙江工商大学 Camera network pedestrian recognition method based on multi-level deep feature fusion
CN106227851B (en) * 2016-07-29 2019-10-01 汤一平 Image retrieval method based on layer-wise deep search with deep convolutional neural networks
CN106227851A (en) * 2016-07-29 2016-12-14 汤平 Image retrieval method based on end-to-end layer-wise deep search with deep convolutional neural networks
CN106407891B (en) * 2016-08-26 2019-06-28 东方网力科技股份有限公司 Target matching method and device based on convolutional neural networks
CN106407891A (en) * 2016-08-26 2017-02-15 东方网力科技股份有限公司 Target matching method and device based on convolutional neural network
WO2018036146A1 (en) * 2016-08-26 2018-03-01 东方网力科技股份有限公司 Convolutional neural network-based target matching method, device and storage medium
WO2018076122A1 (en) * 2016-10-31 2018-05-03 Twenty Billion Neurons GmbH System and method for improving the prediction accuracy of a neural network
CN106709441A (en) * 2016-12-16 2017-05-24 北京工业大学 Convolution-theorem-based face verification acceleration method
CN106709441B (en) * 2016-12-16 2019-01-29 北京工业大学 A face verification acceleration method based on the convolution theorem
CN106504190B (en) * 2016-12-29 2019-09-13 浙江工商大学 A three-dimensional video generation method based on 3D convolutional neural networks
CN106504190A (en) * 2016-12-29 2017-03-15 浙江工商大学 A three-dimensional video generation method based on 3D convolutional neural networks
WO2018145308A1 (en) * 2017-02-13 2018-08-16 Nokia Technologies Oy Filter reusing mechanism for constructing robust deep convolutional neural network
CN106991428A (en) * 2017-02-24 2017-07-28 中国科学院合肥物质科学研究院 Insect image recognition method based on adaptive pooling model
CN106980880A (en) * 2017-03-06 2017-07-25 北京小米移动软件有限公司 Image matching method and device
CN108572183A (en) * 2017-03-08 2018-09-25 清华大学 Inspection apparatus and method for segmenting an image of a vehicle
US10796436B2 (en) 2017-03-08 2020-10-06 Nuctech Company Limited Inspection apparatuses and methods for segmenting an image of a vehicle
CN106991396A (en) * 2017-04-01 2017-07-28 南京云创大数据科技股份有限公司 A target relay tracking algorithm based on intelligent street lamp partner
CN106991396B (en) * 2017-04-01 2020-07-14 南京云创大数据科技股份有限公司 Target relay tracking algorithm based on intelligent street lamp partner
CN107092935A (en) * 2017-04-26 2017-08-25 国家电网公司 An asset change detection method
CN110892693A (en) * 2017-05-11 2020-03-17 维尔蒂姆知识产权有限公司 System and method for biometric identification
CN107393523B (en) * 2017-07-28 2020-11-13 深圳市盛路物联通讯技术有限公司 Noise monitoring method and system
CN107393523A (en) * 2017-07-28 2017-11-24 深圳市盛路物联通讯技术有限公司 A noise monitoring method and system
CN107844795A (en) * 2017-11-18 2018-03-27 中国人民解放军陆军工程大学 Convolutional neural networks feature extracting method based on principal component analysis
CN107844795B (en) * 2017-11-18 2018-09-04 中国人民解放军陆军工程大学 Convolutional neural networks feature extracting method based on principal component analysis
CN109146921A (en) * 2018-07-02 2019-01-04 华中科技大学 A pedestrian target tracking method based on deep learning
CN109146921B (en) * 2018-07-02 2021-07-27 华中科技大学 Pedestrian target tracking method based on deep learning
CN110320452A (en) * 2019-06-21 2019-10-11 河南理工大学 A series arc fault detection method

Also Published As

Publication number Publication date
CN104616032B (en) 2018-02-09

Similar Documents

Publication Publication Date Title
CN104616032A (en) Multi-camera system target matching method based on deep-convolution neural network
Yuan et al. Large-scale solar panel mapping from aerial images using deep convolutional networks
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN108665481B (en) Adaptive anti-occlusion infrared target tracking method based on multi-layer deep feature fusion
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN111915592B (en) Remote sensing image cloud detection method based on deep learning
Jiao et al. A configurable method for multi-style license plate recognition
CN107239759B (en) High-spatial-resolution remote sensing image transfer learning method based on depth features
Jiang et al. Deep neural networks-based vehicle detection in satellite images
CN103218621B (en) Multi-scale vehicle recognition method in real-life outdoor video surveillance
CN107092884B (en) Rapid coarse-fine cascade pedestrian detection method
CN107767416B (en) Method for identifying pedestrian orientation in low-resolution image
Zakir et al. Road sign detection and recognition by using local energy based shape histogram (LESH)
CN107944354B (en) Vehicle detection method based on deep learning
CN107480585A (en) Object detection method based on DPM algorithms
Bhagya et al. An overview of deep learning based object detection techniques
Lv et al. An adaptive multifeature sparsity-based model for semiautomatic road extraction from high-resolution satellite images in urban areas
Yu et al. A cascaded deep convolutional network for vehicle logo recognition from frontal and rear images of vehicles
Zamberletti et al. Augmented text character proposals and convolutional neural networks for text spotting from scene images
Bai et al. The generalized detection method for the dim small targets by faster R-CNN integrated with GAN
Chen et al. Vehicles detection on expressway via deep learning: Single shot multibox object detector
CN108734200A (en) Human body target visible detection method and device based on BING features
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN111667465A (en) Metal hand basin defect detection method based on far infrared image
Xiang et al. An effective and robust multi-view vehicle classification method based on local and structural features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Huiyan
Inventor after: Hua Jing
Inventor before: Wang Huiyan
Inventor before: Wang Xun
Inventor before: He Xiaoshuang
Inventor before: Chen Weigang

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180209

Termination date: 20200130