CN108345866A - Pedestrian re-identification method based on deep feature learning - Google Patents
Pedestrian re-identification method based on deep feature learning
- Publication number
- CN108345866A CN108345866A CN201810191123.6A CN201810191123A CN108345866A CN 108345866 A CN108345866 A CN 108345866A CN 201810191123 A CN201810191123 A CN 201810191123A CN 108345866 A CN108345866 A CN 108345866A
- Authority
- CN
- China
- Prior art keywords
- horizontal component
- image
- neural network
- deep neural
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
An embodiment of the invention discloses a pedestrian re-identification method based on deep feature learning. The method includes: building a deep neural network model; dividing each training image in a training image set into N horizontal parts and assigning each part a class label; training the deep neural network model; dividing each test image in a test image set into N horizontal parts; extracting the features of the test images with the deep neural network model; and computing similarity scores between the features of an image to be identified and those of the test images, thereby obtaining the re-identification result for the image to be identified. By assigning a class label to each horizontal part of an image, the invention fully exploits local features when training the network, thereby improving the matching accuracy of pedestrian re-identification.
Description
Technical field
The invention belongs to the fields of pattern recognition and artificial intelligence, and in particular relates to a pedestrian re-identification method based on deep feature learning.
Background technology
Person re-identification is a research direction of immense practical value, with applications in fields such as criminal investigation and image retrieval. It aims to retrieve a specific pedestrian from a large-scale database.

Existing pedestrian re-identification methods concentrate mainly on two aspects: feature representation and metric learning. On the feature-representation side, many methods focus on improving the discriminative power of features. For example, Zeng et al. proposed an effective re-identification descriptor, the Hybrid Spatiogram and Covariance Descriptor (HSCD). Matsukawa et al. proposed representing local regions of a pedestrian image with a hierarchical Gaussian descriptor. Variord et al. used a data-driven framework that encodes pixel values by jointly learning a linear transformation and a dictionary. On the metric-learning side, Xiong et al. proposed the Kernel Local Fisher Discriminant Classifier (KLFDA), which handles high-dimensional features with the kernel trick while maximizing the Fisher criterion. To learn a discriminative metric in a low-dimensional subspace, Liao et al. introduced an effective method called Cross-view Quadratic Discriminant Analysis (XQDA).

In recent years, convolutional neural networks have been widely used for feature representation in pedestrian re-identification. For example, Yi et al. proposed dividing the image into three overlapping parts and training three networks to capture the different statistical properties of a pedestrian image. Zhao et al. argued that different body parts have different importance, and therefore proposed extracting local features according to the spatial structure of the parts. Zheng et al. proposed extracting features with an Identification Embedding (IDE) model. Although these methods have achieved considerable success, each considers only one aspect: either local features alone, or a more discriminative network alone. They therefore cannot fully mine local features with strong discriminative power.
Summary of the invention
The object of the present invention is to provide a pedestrian re-identification method based on deep feature learning, so as to solve the prior-art problem that class labels cannot be assigned to the horizontal parts of a pedestrian image.
To achieve this object, the pedestrian re-identification method based on deep feature learning proposed by the present invention includes the following steps:
Step S1: build a deep neural network model.
Step S2: obtain a training image set, divide each training image in the set into N horizontal parts, and assign a class label to each horizontal part.
Step S3: train the deep neural network model with the labeled horizontal parts.
Step S4: obtain a test image set and divide each test image in the set into N horizontal parts.
Step S5: extract the features of the horizontal parts of each test image with the trained deep neural network model, and concatenate the per-part features to form the final feature of each image.
Step S6: obtain an image to be identified and its features, and compute the similarity score between the image to be identified and each test image from their features.
Step S7: rank the similarity scores; the highest-scoring test image is regarded as the same pedestrian as the image to be identified, yielding the re-identification result for the image to be identified.
Optionally, step S1 includes the following steps:
Step S11: initialize the parameters of the deep neural network with a default residual network structure.
Step S12: replace the number of neurons in the last fully connected layer of the deep neural network with K, where K is a positive integer equal to the number of pedestrian classes.
Step S13: place a softmax unit after the fully connected layer.
Step S14: build a loss function after the softmax unit to obtain the deep neural network model.
Optionally, step S2 includes the following steps:
Step S21: divide each training image into N non-overlapping horizontal parts from top to bottom.
Step S22: assign a class label to each horizontal part.
Optionally, the class label q_pgr(k, n) of each horizontal part is expressed as:
where n ∈ {1, 2, ..., N} denotes the n-th horizontal part of a training image, N is the number of horizontal parts per training image, k denotes the predicted class label of the whole image, y denotes the true class label of the whole image, K denotes the number of pedestrian classes, ε ∈ [0, 1] is a smoothing parameter, and α > 1 is a hyperparameter.
Optionally, step S3 includes the following steps:
Step S31: normalize the labeled horizontal parts to a pixel size of p × p, where p is a positive integer.
Step S32: train the deep neural network model with the normalized horizontal parts and their corresponding class labels.
Optionally, step S32 includes the following steps:
Step S321: feed the normalized horizontal parts and their class labels into the deep neural network model for forward propagation.
Step S322: perform backpropagation after the forward propagation.
Optionally, step S321 includes the following steps:
Step S3211: set the training parameters of the deep neural network model.
Step S3212: set the loss function of the deep neural network model.
Step S3213: compute the forward-propagation loss value from the loss function and the deep neural network model parameters.
Optionally, the loss function L_pgr(n) may be set as:
where p(k) denotes the probability that a given horizontal part is predicted to be of class k.
Optionally, a step of simplifying the loss function is further included after step S3212.
Optionally, step S5 includes the following steps:
Step S51: normalize the horizontal parts of the test images to size p × p.
Step S52: extract the features of the horizontal parts of each test image in the test image set with the trained deep neural network model.
Step S53: concatenate the per-part features to form the final feature.
The beneficial effects of the present invention are as follows: by making full use of both a discriminative network and local features, the present invention uses a gradation function to assign a class label to each part of a pedestrian image, obtaining better supervised learning and fully mining the deep network's representation of pedestrians, thereby improving the matching accuracy of pedestrian re-identification.
It should be noted that the present invention was supported by National Natural Science Foundation of China projects No. 61501327 and No. 61711530240, Natural Science Foundation of Tianjin key project No. 17JCZDJC30600, Tianjin Applied Basic Research and Frontier Technology Research Plan youth fund project No. 15JCQNJC01700, Tianjin Normal University "Young Top-Notch Research Talent Cultivation Plan" No. 135202RC1703, Open Projects of the National Laboratory of Pattern Recognition No. 201700001 and No. 201800002, and China Scholarship Council grants No. 201708120039 and No. 201708120040.
Description of the drawings
Fig. 1 is a flowchart of a pedestrian re-identification method based on deep feature learning according to an embodiment of the invention.
Fig. 2 is a network model based on deep feature learning according to an embodiment of the invention.
Specific implementation mode
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in more detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these descriptions are merely illustrative and are not intended to limit the scope of the invention. In addition, descriptions of well-known structures and technologies are omitted below to avoid unnecessarily obscuring the concepts of the invention.

Fig. 1 is a flowchart of a pedestrian re-identification method based on deep feature learning according to an embodiment of the invention; some specific implementation flows of the invention are illustrated below taking Fig. 1 as an example. As shown in Fig. 1, the pedestrian re-identification method based on deep feature learning of the present invention includes the following steps:
Step S1: build a deep neural network model.
Step S1 includes the following steps:
Step S11: initialize the parameters of the deep neural network with a default residual network structure.
An existing residual network may be used here, for example ResNet-50.
Step S12: replace the number of neurons in the last fully connected layer of the deep neural network (i.e., the network whose parameters were initialized in step S11) with K, where K is a positive integer equal to the number of pedestrian classes.
In an embodiment of the present invention, the number of pedestrian classes is 751, so K = 751.
Step S13: place a softmax unit after the fully connected layer to normalize its output.
Step S14: build a loss function after the softmax unit to obtain the deep neural network model.
Step S2: obtain a training image set, divide each training image in the set into N horizontal parts, and assign a class label to each horizontal part.
Step S2 includes the following steps:
Step S21: divide each training image into N non-overlapping horizontal parts from top to bottom.
In an embodiment of the present invention, for convenience of computation, each training image is divided into N non-overlapping horizontal parts from top to bottom.
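The non-overlapping top-to-bottom division described above can be sketched as follows (a minimal NumPy illustration; the function name `split_horizontal` is ours, not the patent's):

```python
import numpy as np

def split_horizontal(image: np.ndarray, n_parts: int) -> list:
    """Split an H x W x C image into n_parts non-overlapping horizontal
    stripes, ordered top to bottom (rows that do not divide evenly are
    dropped from the bottom)."""
    h = image.shape[0] // n_parts  # stripe height
    return [image[i * h:(i + 1) * h] for i in range(n_parts)]

# Example: a 128 x 64 x 3 pedestrian image split into N = 3 stripes
img = np.zeros((128, 64, 3), dtype=np.uint8)
parts = split_horizontal(img, 3)
```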
Step S22: assign a class label to each horizontal part; the class label q_pgr(k, n) of each horizontal part is expressed as:
where n ∈ {1, 2, ..., N} denotes the n-th horizontal part of a training image, N is the number of horizontal parts per training image, k denotes the predicted class label of the whole image, y denotes the true class label of the whole image, K denotes the number of pedestrian classes, ε ∈ [0, 1] is a smoothing parameter whose role is to adjust the probability of the non-true class labels, and α > 1 is a hyperparameter whose role is to determine the probabilities of the different horizontal parts. For convenience of expressing the label distribution, the value of q_pgr for a non-true class may be denoted q_k, and the value for the true class q_y.
In an embodiment of the present invention, N = 3, α = 1.1, and ε = 0.1 may be used.
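The patent's exact piecewise formula for q_pgr(k, n) is not reproduced in this text. From the symbol definitions (smoothing parameter ε adjusting the non-true classes, hyperparameter α > 1 weighting the parts), it resembles a label-smoothing distribution whose smoothing strength depends on the part index n. The following is a speculative sketch under that assumption only; the function and the exact scaling `eps * alpha**(n-1)` are our reconstruction, not the patent's formula:

```python
import numpy as np

def gradation_label(y: int, n: int, K: int,
                    eps: float = 0.1, alpha: float = 1.1) -> np.ndarray:
    """Hypothetical part-graded label distribution: the true class y keeps
    most of the probability mass, while the smoothing term eps is scaled by
    alpha**(n-1), so lower parts (larger n) receive softer labels. This
    reconstructs only the general shape of the distribution."""
    e_n = min(eps * alpha ** (n - 1), 1.0)  # part-dependent smoothing
    q = np.full(K, e_n / (K - 1))           # non-true classes share e_n
    q[y] = 1.0 - e_n                        # true class keeps the rest
    return q

# Example with the embodiment's values: K = 751, n = 3, alpha = 1.1, eps = 0.1
q = gradation_label(y=5, n=3, K=751)
```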
Step S3: train the deep neural network model with the labeled horizontal parts.
Step S3 includes the following steps:
Step S31: normalize the labeled horizontal parts to a pixel size of p × p, where p is a positive integer.
In an embodiment of the present invention, p may be set to 224.
Step S32: train the deep neural network model with the normalized horizontal parts and their corresponding class labels.
Step S32 includes the following steps:
Step S321: feed the normalized horizontal parts and their class labels into the deep neural network model for forward propagation.
Step S322: perform backpropagation after the forward propagation.
Step S321 includes the following steps:
Step S3211: set the training parameters of the deep neural network model.
In an embodiment of the present invention, the training parameters may include the training batch size, the number of iterations, and the learning rate. For example, the training batch size may be set to 64 pairs and the number of iterations to 50, with the learning rate set to 0.1 for the first 40 iterations and 0.01 for the last 10.
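The stated embodiment hyperparameters can be collected into a small configuration sketch (the dict layout and the `step_lr` helper are our illustrative choices, not part of the patent):

```python
# Training hyperparameters as stated in this embodiment of the patent.
train_cfg = {
    "batch_size": 64,   # pairs per training batch
    "iterations": 50,
    "lr_schedule": {"first_40": 0.1, "last_10": 0.01},
}

def step_lr(iteration: int) -> float:
    """Learning rate for a given 1-indexed training iteration:
    0.1 for the first 40 iterations, 0.01 afterwards."""
    return 0.1 if iteration <= 40 else 0.01
```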
Step S3212: set the loss function of the deep neural network model.
In an embodiment of the present invention, the loss function L_pgr(n) may be set as:
where p(k) denotes the probability that a given horizontal part is predicted to be of class k.
Step S3213: compute the forward-propagation loss value from the loss function and the deep neural network model parameters.
To reduce the amount of computation, a step of simplifying the loss function is further included after step S3212. Specifically:
Combining the label distribution of the horizontal parts with the loss function, the loss function is simplified:
where p(y) and p(k) denote the probabilities that a given horizontal part is predicted to be of class y and of class k, respectively:
where x_k denotes the output value of class k in the fully connected layer, x_y denotes the output value of class y, and x_i denotes the output value of the i-th class.
Substituting p(y) and p(k) into the loss function gives:
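The probabilities p(y) and p(k) above are the standard softmax over the fully connected outputs, and the loss is the cross-entropy between the graded label distribution q and the prediction p. A minimal NumPy sketch of that combination (the patent's simplified closed form is not reproduced in this text, so the generic cross-entropy is shown):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """p(k) = exp(x_k) / sum_i exp(x_i), shifted for numerical stability."""
    z = np.exp(x - x.max())
    return z / z.sum()

def cross_entropy(q: np.ndarray, p: np.ndarray) -> float:
    """L = -sum_k q(k) * log p(k): loss between the graded label
    distribution q and the predicted distribution p."""
    return float(-(q * np.log(p + 1e-12)).sum())

x = np.array([2.0, 0.5, 0.1])    # toy fully connected outputs, K = 3
q = np.array([0.9, 0.05, 0.05])  # toy graded label, true class 0
loss = cross_entropy(q, softmax(x))
```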
Step S322 includes the following steps:
Step S3221: during backpropagation, compute the derivative of the forward-propagation loss function with respect to x_y:
Step S3222: update the parameters of the deep neural network model according to the obtained derivative, training with the stochastic gradient descent (SGD) algorithm.
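The patent's derivative expression is likewise not reproduced in this text. For softmax cross-entropy with a soft label distribution q, the standard result is ∂L/∂x_j = p(j) - q(j); the following sketch verifies that identity numerically with a central-difference check:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    z = np.exp(x - x.max())
    return z / z.sum()

def loss(x: np.ndarray, q: np.ndarray) -> float:
    return float(-(q * np.log(softmax(x))).sum())

x = np.array([1.0, -0.5, 0.3])  # toy fully connected outputs
q = np.array([0.8, 0.1, 0.1])   # toy soft label (sums to 1)

analytic = softmax(x) - q        # standard result: dL/dx_j = p(j) - q(j)

# Central-difference numerical gradient as a sanity check
eps = 1e-6
numeric = np.array([
    (loss(x + eps * np.eye(3)[j], q) - loss(x - eps * np.eye(3)[j], q)) / (2 * eps)
    for j in range(3)
])
```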
In an embodiment of the present invention, whether the deep neural network has converged may be judged by checking the loss value: if the loss value changes very little over a number of iterations, the network may be judged to have converged.
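The loss-plateau convergence check just described might be implemented as follows (the window size and tolerance are our illustrative choices, not values from the patent):

```python
def has_converged(losses: list, window: int = 5, tol: float = 1e-3) -> bool:
    """Declare convergence when the recorded loss has changed by less
    than tol over the last `window` iterations."""
    if len(losses) < window + 1:
        return False
    return abs(losses[-1] - losses[-1 - window]) < tol

# A loss curve that descends and then plateaus
history = [1.0, 0.6, 0.4, 0.30, 0.25, 0.22, 0.2201, 0.2200, 0.2202, 0.2201, 0.2200]
done = has_converged(history)
```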
Step S4: in the test phase, obtain a test image set and divide each test image in the set into N horizontal parts.
In an embodiment of the present invention, for convenience of computation, each test image may be divided into N non-overlapping horizontal parts from top to bottom, following the division applied to the training set.
Step S5: extract the features of the horizontal parts of each test image with the trained deep neural network model, and concatenate the per-part features to form the final feature of each image.
Step S5 includes the following steps:
Step S51: normalize the horizontal parts of the test images to size p × p.
Step S52: extract the features of the horizontal parts of each test image in the test image set with the trained deep neural network model.
In step S52, the features may be extracted at the last convolutional layer of the deep neural network model and may be 2048-dimensional.
Step S53: concatenate the per-part features to form the final feature of each image.
In an embodiment of the present invention, the features of the horizontal parts should be extracted in top-to-bottom image order, finally giving a feature vector of 2048 × N dimensions; when N = 3, the feature vector has 2048 × 3 = 6144 dimensions.
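Concatenating the per-part features in top-to-bottom order, as described, can be sketched as follows (random vectors stand in for the network's 2048-dimensional per-part outputs):

```python
import numpy as np

# Toy per-part features standing in for the 2048-d outputs of the
# network's last convolutional layer, ordered top to bottom (N = 3).
part_features = [np.random.rand(2048) for _ in range(3)]

# Final image feature: 2048 * 3 = 6144 dimensions
final_feature = np.concatenate(part_features)
```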
Step S6: in the evaluation phase, obtain an image to be identified and its features, and compute the similarity score between the image to be identified and each test image from their features.
Step S6 includes the following steps:
Step S61: compute the feature vectors of the image to be identified and of each test image.
Step S62: compute the cosine distance between the two feature vectors and take the resulting value as the similarity score between the images.
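The cosine score of steps S61 and S62 is the normalized dot product of the two feature vectors; a minimal sketch with a toy query and gallery:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; higher means more alike."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([1.0, 2.0, 3.0])
gallery = [np.array([2.0, 4.0, 6.0]),    # parallel to the query
           np.array([3.0, -1.0, 0.2])]

scores = [cosine_similarity(query, g) for g in gallery]
best = int(np.argmax(scores))            # index of the best-matching image
```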
Step S7: rank the similarity scores; the highest-scoring test image is regarded as the same pedestrian as the image to be identified, yielding the re-identification result for the image to be identified.
Taking publicly available pedestrian re-identification databases as test objects, the matching accuracy of pedestrian re-identification is, for example, 83.31% on the Market-1501 database, which shows the validity of the method of the present invention.
It should be understood that the above specific implementation modes of the present invention are used only to exemplarily illustrate or explain the principles of the present invention, not to limit it. Therefore, any modification, equivalent replacement, improvement, and the like made without departing from the spirit and scope of the present invention shall fall within the protection scope of the present invention. Furthermore, the appended claims are intended to cover all variations falling within the scope and boundaries of the claims, or the equivalents of such scope and boundaries.
Claims (10)
1. A pedestrian re-identification method based on deep feature learning, characterized in that the method includes:
Step S1: building a deep neural network model;
Step S2: obtaining a training image set, dividing each training image in the set into N horizontal parts, and assigning a class label to each horizontal part;
Step S3: training the deep neural network model with the labeled horizontal parts;
Step S4: obtaining a test image set and dividing each test image in the set into N horizontal parts;
Step S5: extracting the features of the horizontal parts of each test image with the trained deep neural network model, and concatenating the per-part features to form the final feature of each image;
Step S6: obtaining an image to be identified and its features, and computing the similarity score between the image to be identified and each test image from their features;
Step S7: ranking the similarity scores, the highest-scoring test image being regarded as the same pedestrian as the image to be identified, thereby obtaining the re-identification result for the image to be identified.
2. The method according to claim 1, characterized in that step S1 includes the following steps:
Step S11: initializing the parameters of the deep neural network with a default residual network structure;
Step S12: replacing the number of neurons in the last fully connected layer of the deep neural network with K, where K is a positive integer equal to the number of pedestrian classes;
Step S13: placing a softmax unit after the fully connected layer;
Step S14: building a loss function after the softmax unit to obtain the deep neural network model.
3. The method according to claim 1, characterized in that step S2 includes the following steps:
Step S21: dividing each training image into N non-overlapping horizontal parts from top to bottom;
Step S22: assigning a class label to each horizontal part.
4. The method according to claim 3, characterized in that the class label q_pgr(k, n) of each horizontal part is expressed as:
where n ∈ {1, 2, ..., N} denotes the n-th horizontal part of a training image, N is the number of horizontal parts per training image, k denotes the predicted class label of the whole image, y denotes the true class label of the whole image, K denotes the number of pedestrian classes, ε ∈ [0, 1] is a smoothing parameter, and α > 1 is a hyperparameter.
5. The method according to claim 4, characterized in that step S3 includes the following steps:
Step S31: normalizing the labeled horizontal parts to a pixel size of p × p, where p is a positive integer;
Step S32: training the deep neural network model with the normalized horizontal parts and their corresponding class labels.
6. The method according to claim 5, characterized in that step S32 includes the following steps:
Step S321: feeding the normalized horizontal parts and their class labels into the deep neural network model for forward propagation;
Step S322: performing backpropagation after the forward propagation.
7. The method according to claim 6, characterized in that step S321 includes the following steps:
Step S3211: setting the training parameters of the deep neural network model;
Step S3212: setting the loss function of the deep neural network model;
Step S3213: computing the forward-propagation loss value from the loss function and the deep neural network model parameters.
8. The method according to claim 7, characterized in that the loss function L_pgr(n) may be set as:
where p(k) denotes the probability that a given horizontal part is predicted to be of class k.
9. The method according to claim 7, characterized in that a step of simplifying the loss function is further included after step S3212.
10. The method according to claim 1, characterized in that step S5 includes the following steps:
Step S51: normalizing the horizontal parts of the test images to size p × p;
Step S52: extracting the features of the horizontal parts of each test image in the test image set with the trained deep neural network model;
Step S53: concatenating the per-part features to form the final feature.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810191123.6A CN108345866B (en) | 2018-03-08 | 2018-03-08 | Pedestrian re-identification method based on deep feature learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810191123.6A CN108345866B (en) | 2018-03-08 | 2018-03-08 | Pedestrian re-identification method based on deep feature learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108345866A true CN108345866A (en) | 2018-07-31 |
CN108345866B CN108345866B (en) | 2021-08-24 |
Family
ID=62956679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810191123.6A Active CN108345866B (en) | 2018-03-08 | 2018-03-08 | Pedestrian re-identification method based on deep feature learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108345866B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583502A (en) * | 2018-11-30 | 2019-04-05 | 天津师范大学 | Pedestrian re-identification method based on anti-erasure attention mechanism
CN110070075A (en) * | 2019-05-07 | 2019-07-30 | 中国科学院宁波材料技术与工程研究所 | Pedestrian re-identification method based on group symmetry theory
CN110458217A (en) * | 2019-07-31 | 2019-11-15 | 腾讯医疗健康(深圳)有限公司 | Image recognition method and device, fundus image recognition method, and electronic equipment
CN110728263A (en) * | 2019-10-24 | 2020-01-24 | 中国石油大学(华东) | Pedestrian re-identification method based on strong discrimination feature learning of distance selection |
CN112016443A (en) * | 2020-08-26 | 2020-12-01 | 深圳市商汤科技有限公司 | Method and device for identifying same lines, electronic equipment and storage medium |
CN110458217B (en) * | 2019-07-31 | 2024-04-19 | 腾讯医疗健康(深圳)有限公司 | Image recognition method and device, fundus image recognition method and electronic equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268556A (en) * | 2014-09-12 | 2015-01-07 | 西安电子科技大学 | Hyperspectral image classification method based on kernel low-rank representation graph and spatial constraint
US20160180195A1 (en) * | 2013-09-06 | 2016-06-23 | Toyota Jidosha Kabushiki Kaisha | Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks |
CN106355171A (en) * | 2016-11-24 | 2017-01-25 | 深圳凯达通光电科技有限公司 | Video monitoring internetworking system |
CN106778527A (en) * | 2016-11-28 | 2017-05-31 | 中通服公众信息产业股份有限公司 | Improved neural network pedestrian re-identification method based on triplet loss
CN107019525A (en) * | 2015-12-15 | 2017-08-08 | 柯尼卡美能达株式会社 | Ultrasonic image diagnostic apparatus |
CN107506700A (en) * | 2017-08-07 | 2017-12-22 | 苏州经贸职业技术学院 | Pedestrian re-identification method based on generalized similarity metric learning
CN107563344A (en) * | 2017-09-18 | 2018-01-09 | 天津师范大学 | Pedestrian re-identification method based on semantic region metric learning
CN108229444A (en) * | 2018-02-09 | 2018-06-29 | 天津师范大学 | Pedestrian re-identification method based on fusion of global and local deep features
- 2018
- 2018-03-08: CN application CN201810191123.6A granted as CN108345866B (status: Active)
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160180195A1 (en) * | 2013-09-06 | 2016-06-23 | Toyota Jidosha Kabushiki Kaisha | Augmenting Layer-Based Object Detection With Deep Convolutional Neural Networks |
CN104268556A (en) * | 2014-09-12 | 2015-01-07 | 西安电子科技大学 | Hyperspectral image classification method based on kernel low-rank representation graph and spatial constraint
CN107019525A (en) * | 2015-12-15 | 2017-08-08 | 柯尼卡美能达株式会社 | Ultrasonic image diagnostic apparatus |
CN106355171A (en) * | 2016-11-24 | 2017-01-25 | 深圳凯达通光电科技有限公司 | Video monitoring internetworking system |
CN106778527A (en) * | 2016-11-28 | 2017-05-31 | 中通服公众信息产业股份有限公司 | Improved neural network pedestrian re-identification method based on triplet loss
CN107506700A (en) * | 2017-08-07 | 2017-12-22 | 苏州经贸职业技术学院 | Pedestrian re-identification method based on generalized similarity metric learning
CN107563344A (en) * | 2017-09-18 | 2018-01-09 | 天津师范大学 | Pedestrian re-identification method based on semantic region metric learning
CN108229444A (en) * | 2018-02-09 | 2018-06-29 | 天津师范大学 | Pedestrian re-identification method based on fusion of global and local deep features
Non-Patent Citations (6)
Title |
---|
CHRISTIAN SZEGEDY et al.: "Rethinking the Inception Architecture for Computer Vision", 2016 IEEE Conference on Computer Vision and Pattern Recognition *
GUO-SEN XIE et al.: "LG-CNN: From local parts to global discrimination for fine-grained recognition", Pattern Recognition *
SHUANG LIU et al.: "Pedestrian Retrieval via Part-Based Gradation Regularization in Sensor Networks", IEEE Access *
YIFAN SUN et al.: "Beyond Part Models: Person Retrieval with Refined Part Pooling (and a Strong Convolutional Baseline)", online: https://arxiv.org/abs/1711.09349 *
ZHEDONG ZHENG et al.: "A Discriminatively Learned CNN Embedding for Person Re-identification", ACM Transactions on Multimedia Computing Communications and Applications *
蒋桧慧 et al.: "Pedestrian re-identification based on feature fusion and improved neural networks" *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109583502A (en) * | 2018-11-30 | 2019-04-05 | 天津师范大学 | Pedestrian re-identification method based on anti-erasure attention mechanism
CN109583502B (en) * | 2018-11-30 | 2022-11-18 | 天津师范大学 | Pedestrian re-identification method based on anti-erasure attention mechanism
CN110070075A (en) * | 2019-05-07 | 2019-07-30 | 中国科学院宁波材料技术与工程研究所 | Pedestrian re-identification method based on group symmetry theory
CN110458217A (en) * | 2019-07-31 | 2019-11-15 | 腾讯医疗健康(深圳)有限公司 | Image recognition method and device, fundus image recognition method, and electronic equipment
CN110458217B (en) * | 2019-07-31 | 2024-04-19 | 腾讯医疗健康(深圳)有限公司 | Image recognition method and device, fundus image recognition method and electronic equipment |
CN110728263A (en) * | 2019-10-24 | 2020-01-24 | 中国石油大学(华东) | Pedestrian re-identification method based on strong discrimination feature learning of distance selection |
CN110728263B (en) * | 2019-10-24 | 2023-10-24 | 中国石油大学(华东) | Pedestrian re-recognition method based on strong discrimination feature learning of distance selection |
CN112016443A (en) * | 2020-08-26 | 2020-12-01 | 深圳市商汤科技有限公司 | Method and device for identifying same lines, electronic equipment and storage medium |
CN112016443B (en) * | 2020-08-26 | 2022-04-26 | 深圳市商汤科技有限公司 | Method and device for identifying same lines, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108345866B (en) | 2021-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109034044B (en) | Pedestrian re-identification method based on fused convolutional neural network | |
CN107145842B (en) | Face recognition method combining LBP feature maps and convolutional neural network | |
CN108345866A (en) | Pedestrian re-identification method based on deep feature learning | |
CN103605972B (en) | Face verification method for unconstrained environments based on block deep neural network | |
CN105975931B (en) | Convolutional neural network face recognition method based on multi-scale pooling | |
CN106599797B (en) | Infrared face recognition method based on local parallel neural network | |
CN103258204B (en) | Automatic micro-expression recognition method based on Gabor and EOH features | |
CN109165566A (en) | Face recognition convolutional neural network training method based on a novel loss function | |
CN108345860 (en) | Person re-identification method based on deep learning and distance metric learning | |
CN101236608B (en) | Face detection method based on image geometry | |
CN108229444 (en) | Pedestrian re-identification method based on global and local deep feature fusion | |
CN107239446 (en) | Intelligent relation extraction method based on neural network and attention mechanism | |
CN104866829 (en) | Cross-age face verification method based on feature learning | |
CN106778921 (en) | Person re-identification method based on deep learning encoding model | |
CN108491797 (en) | Precise vehicle image retrieval method based on big data | |
CN106446895 (en) | License plate recognition method based on deep convolutional neural network | |
CN103886305B (en) | Specific face search method for grassroots policing, stability maintenance, and counter-terrorism | |
CN107092894 (en) | Motion behavior recognition method based on LSTM model | |
CN107341463 (en) | Face feature recognition method combining image quality analysis and metric learning | |
CN110321862B (en) | Pedestrian re-identification method based on compact triplet loss | |
CN110097053 (en) | Appearance defect detection method for power equipment based on improved Faster-RCNN | |
CN108960184 (en) | Pedestrian re-identification method based on heterogeneous-part deep neural network | |
CN106156765 (en) | Security detection method based on computer vision | |
CN102201236 (en) | Speaker recognition method combining Gaussian mixture model and quantum neural network | |
CN108509939 (en) | Bird recognition method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 20221130
Address after: 300392 Room 501, Building 1, No. 1, Huixue Road, Xuefu Industrial Zone, Xiqing District, Tianjin
Patentee after: Development Anchor (Tianjin) Technology Co.,Ltd.
Address before: 300387 No. 393, West Binshui Road, Xiqing District, Tianjin
Patentee before: TIANJIN NORMAL University |