CN108229492A - Method, apparatus and system for extracting features - Google Patents
- Publication number: CN108229492A
- Application number: CN201710195256.6A
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- region
- fusion
- multiple regions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Abstract
This application discloses a method, apparatus and system for extracting features. The method for extracting features includes: generating an initial image feature corresponding to an object in an image; generating multiple region features corresponding to multiple regions of the image; and fusing the initial image feature with the multiple region features to obtain a target image feature of the object. In the technical solution for extracting features provided by the embodiments of the application, feature extraction is performed not only on the image containing the object as a whole but also on multiple regions within the image, so that at least part of the fine details of those regions is retained during per-region extraction and the finally obtained object feature is more discriminative.
Description
Technical field
This application relates to the fields of computer vision and image processing, and in particular to a method, apparatus and system for extracting features.
Background art
With the development of computer vision technology and the growth of image category information, image recognition is being applied in ever more fields, such as pedestrian retrieval, video surveillance and video classification, and feature extraction is central to image recognition.

In traditional image recognition, feature extraction typically produces a single holistic feature describing a whole picture, in which every object and the background are represented at the same level of detail. Such a holistic feature can be used, for example, for picture recognition, by comparing it with the feature of a target category to judge whether an object in the picture belongs to that category.
Summary of the invention
Embodiments of the present application provide a technical solution for extracting features.
One aspect of the embodiments of the present application discloses a method for extracting features, the method including: generating an initial image feature corresponding to an object in an image; generating multiple region features corresponding to multiple regions of the image; and fusing the initial image feature with the multiple region features to obtain a target image feature of the object.
In one embodiment, generating the initial image feature corresponding to the object in the image includes: performing convolution and pooling on the image to generate an intermediate image feature; and generating the initial image feature corresponding to the object in the image based on the intermediate image feature.
In one embodiment, generating the multiple region features corresponding to the multiple regions of the image includes: extracting multiple regions from the image; pooling the intermediate image feature based on the multiple regions to generate multiple intermediate region features respectively corresponding to the multiple regions; and generating the multiple region features respectively corresponding to the multiple regions based on the multiple intermediate region features.
In one embodiment, fusing the initial image feature with the multiple region features to obtain the target image feature of the object includes: fusing the multiple region features into a fused feature; and fusing the fused feature with the initial image feature into the target image feature of the object.
In one embodiment, fusing the multiple region features into the fused feature includes: in a first overlapping region among the multiple region features, selecting the feature with the highest discriminability as the feature corresponding to the first overlapping region; and generating the fused feature based on the feature corresponding to the first overlapping region.
In one embodiment, fusing the fused feature with the initial image feature into the target image feature of the object includes: in a second overlapping region where the fused feature and the initial image feature overlap each other, selecting the feature with the highest discriminability as the feature corresponding to the second overlapping region; and generating the target image feature of the object based on the feature corresponding to the second overlapping region.
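The overlap-handling rule in the two embodiments above (wherever features overlap, keep the one with the highest discriminability) can be sketched in miniature. The sketch below assumes, purely for illustration, that "highest discriminability" is decided element-wise by magnitude over aligned 2D feature maps; the patent does not fix this criterion:

```python
def fuse_by_max(features):
    """Fuse aligned 2D feature maps: where maps overlap, keep the value
    with the largest magnitude as a stand-in for 'highest discriminability'.
    `None` marks cells a map does not cover; uncovered cells default to 0."""
    h, w = len(features[0]), len(features[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for fmap in features:
        for i in range(h):
            for j in range(w):
                v = fmap[i][j]
                if v is not None and abs(v) > abs(fused[i][j]):
                    fused[i][j] = v
    return fused

# Two partially overlapping region features on a shared 2x2 grid:
a = [[1, None], [3, 2]]
b = [[2, 5], [None, 1]]
print(fuse_by_max([a, b]))  # → [[2, 5], [3, 2]]
```

The same routine covers both fusion stages, since each is described as the same select-the-most-discriminative-feature rule over an overlap.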
In one embodiment, extracting multiple regions from the image includes: dividing the image into the multiple regions according to extraction reference points.
In one embodiment, the object includes a pedestrian, a face or a vehicle.
In one embodiment, when the object is a pedestrian, the extraction reference points include human-body keypoints of the pedestrian.
In one embodiment, when the object is a pedestrian, extracting multiple regions from the image includes: extracting from the image regions respectively corresponding to the head-and-shoulder part, the upper body and the lower body of the pedestrian; and pooling the intermediate image feature based on the multiple regions to generate multiple intermediate region features respectively corresponding to the multiple regions includes: pooling the intermediate image feature based on the regions respectively corresponding to the head-and-shoulder part, the upper body and the lower body, to generate multiple intermediate region features respectively corresponding to the head-and-shoulder part, the upper body and the lower body.
In one embodiment, when the object is a pedestrian, generating the multiple region features respectively corresponding to the multiple regions based on the multiple intermediate region features includes: generating region features corresponding to the head-and-shoulder part, the upper body and the lower body based respectively on the intermediate region features corresponding to the head-and-shoulder part, the upper body and the lower body.
In one embodiment, when the object is a pedestrian, fusing the multiple region features into the fused feature includes: fusing the region feature corresponding to the upper body, the region feature corresponding to the lower body and the region feature corresponding to the head-and-shoulder part into the fused feature.
In one embodiment, when the object is a pedestrian, generating the initial image feature corresponding to the object in the image based on the intermediate image feature includes: performing convolution and pooling on the intermediate image feature to generate a convolved intermediate image feature; and performing convolution and pooling on the convolved intermediate image feature to generate the initial image feature.
In one embodiment, when the object is a pedestrian, extracting multiple regions from the image further includes: extracting from the image multiple regions respectively corresponding to the left arm, right arm, left leg and right leg of the pedestrian.
In one embodiment, when the object includes a pedestrian, generating the region features corresponding to the head-and-shoulder part, the upper body and the lower body based respectively on the intermediate region features corresponding to the head-and-shoulder part, the upper body and the lower body includes: pooling the convolved intermediate image feature based on the regions respectively corresponding to the left arm, right arm, left leg and right leg, to generate multiple intermediate region features respectively corresponding to the left arm, right arm, left leg and right leg; performing convolution and pooling on the intermediate region features respectively corresponding to the head-and-shoulder part, the upper body, the lower body, the left arm, the right arm, the left leg and the right leg, to generate convolved intermediate region features corresponding to the head-and-shoulder part, the upper body, the lower body, the left arm, the right arm, the left leg and the right leg; fusing the convolved intermediate region features respectively corresponding to the left leg and the right leg into a leg fused feature; fusing the convolved intermediate region features respectively corresponding to the left arm and the right arm into an arm fused feature; fusing the arm fused feature with the convolved intermediate region feature corresponding to the upper body into the region feature corresponding to the upper body; fusing the leg fused feature with the convolved intermediate region feature corresponding to the lower body into the region feature corresponding to the lower body; and using the convolved intermediate region feature corresponding to the head-and-shoulder part as the region feature corresponding to the head-and-shoulder part.
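The part-fusion hierarchy of this embodiment (legs fused into the lower-body feature, arms into the upper-body feature, head-and-shoulder passed through unchanged) can be sketched with a placeholder fusion operator. Element-wise averaging is used below purely as an assumption, since the embodiment does not fix the operator at this level:

```python
def fuse(*vecs):
    """Placeholder fusion of equal-length feature vectors (element-wise mean)."""
    return [sum(vals) / len(vals) for vals in zip(*vecs)]

def part_region_features(parts):
    """parts maps part names to convolved intermediate region features
    (equal-length vectors): 'head_shoulder', 'upper_body', 'lower_body',
    'left_arm', 'right_arm', 'left_leg', 'right_leg'."""
    leg = fuse(parts["left_leg"], parts["right_leg"])   # leg fused feature
    arm = fuse(parts["left_arm"], parts["right_arm"])   # arm fused feature
    return {
        "upper_body": fuse(arm, parts["upper_body"]),   # arms -> upper body
        "lower_body": fuse(leg, parts["lower_body"]),   # legs -> lower body
        "head_shoulder": parts["head_shoulder"],        # used as-is
    }
```

With real feature maps, `fuse` would be replaced by whichever fusion the preceding embodiments describe (for example, selecting the most discriminative feature in the overlap), but the tree of part combinations stays the same.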
In one embodiment, the feature includes a feature map or a feature vector.
In the technical solution for extracting features provided by the embodiments of the present application, during the extraction of the object feature map, feature extraction is performed not only on the image containing the object as a whole but also on multiple regions within the image. The per-region extraction retains at least part of the fine details of those regions, so that the finally obtained object feature is more discriminative.
Another aspect of the embodiments of the present application discloses an apparatus for extracting object features, the apparatus including: an image feature generation module that generates an initial image feature corresponding to an object in an image; a region feature generation module that generates multiple region features corresponding to multiple regions of the image; and a fusion module that fuses the initial image feature with the multiple region features to obtain a target image feature of the object.
In one embodiment, the image feature generation module includes: an intermediate image feature generation submodule that performs convolution and pooling on the image to generate an intermediate image feature; and an initial image feature generation submodule that generates the initial image feature corresponding to the object in the image based on the intermediate image feature.
In one embodiment, the region feature generation module includes: a region extraction submodule that extracts multiple regions from the image; an intermediate region feature generation submodule that pools the intermediate image feature based on the multiple regions to generate multiple intermediate region features respectively corresponding to the multiple regions; and a region feature generation submodule that generates the multiple region features respectively corresponding to the multiple regions based on the multiple intermediate region features.
In one embodiment, the fusion module includes: a first fusion submodule that fuses the multiple region features into a fused feature; and a second fusion submodule that fuses the fused feature with the initial image feature into the target image feature of the object.
In one embodiment, the first fusion submodule is further configured to: in a first overlapping region among the multiple region features, select the feature with the highest discriminability as the feature corresponding to the first overlapping region; and generate the fused feature based on the feature corresponding to the first overlapping region.
In one embodiment, the second fusion submodule is further configured to: in a second overlapping region where the fused feature and the initial image feature overlap each other, select the feature with the highest discriminability as the feature corresponding to the second overlapping region; and generate the target image feature of the object based on the feature corresponding to the second overlapping region.
In one embodiment, the region extraction submodule is configured to divide the image into the multiple regions according to extraction reference points.
In one embodiment, the object includes a pedestrian, a face or a vehicle.
In one embodiment, when the object is a pedestrian, the extraction reference points include human-body keypoints of the pedestrian.
In one embodiment, when the object is a pedestrian, the region extraction submodule is configured to extract from the image regions respectively corresponding to the head-and-shoulder part, the upper body and the lower body of the pedestrian; and the intermediate region feature generation submodule is configured to pool the intermediate image feature based on the regions respectively corresponding to the head-and-shoulder part, the upper body and the lower body, to generate multiple intermediate region features respectively corresponding to the head-and-shoulder part, the upper body and the lower body.
In one embodiment, when the object is a pedestrian, the region feature generation submodule is configured to generate region features corresponding to the head-and-shoulder part, the upper body and the lower body based respectively on the intermediate region features corresponding to the head-and-shoulder part, the upper body and the lower body.
In one embodiment, when the object is a pedestrian, the first fusion submodule is configured to fuse the region feature corresponding to the upper body, the region feature corresponding to the lower body and the region feature corresponding to the head-and-shoulder part into the fused feature.
In one embodiment, when the object is a pedestrian, the initial image feature generation submodule is configured to: perform convolution and pooling on the intermediate image feature to generate a convolved intermediate image feature; and perform convolution and pooling on the convolved intermediate image feature to generate the initial image feature.
In one embodiment, when the object is a pedestrian, the region extraction submodule is further configured to extract from the image multiple regions respectively corresponding to the left arm, right arm, left leg and right leg of the pedestrian.
In one embodiment, when the object is a pedestrian, the region feature generation submodule is configured to: pool the convolved intermediate image feature based on the regions respectively corresponding to the left arm, right arm, left leg and right leg, to generate multiple intermediate region features respectively corresponding to the left arm, right arm, left leg and right leg; perform convolution and pooling on the intermediate region features respectively corresponding to the head-and-shoulder part, the upper body, the lower body, the left arm, the right arm, the left leg and the right leg, to generate convolved intermediate region features corresponding to the head-and-shoulder part, the upper body, the lower body, the left arm, the right arm, the left leg and the right leg; fuse the convolved intermediate region features respectively corresponding to the left leg and the right leg into a leg fused feature; fuse the convolved intermediate region features respectively corresponding to the left arm and the right arm into an arm fused feature; fuse the arm fused feature with the convolved intermediate region feature corresponding to the upper body into the region feature corresponding to the upper body; fuse the leg fused feature with the convolved intermediate region feature corresponding to the lower body into the region feature corresponding to the lower body; and use the convolved intermediate region feature corresponding to the head-and-shoulder part as the region feature corresponding to the head-and-shoulder part.
In one embodiment, the feature includes a feature map or a feature vector.
Another aspect of the embodiments of the present application also discloses a system for extracting object features, the system including: a memory storing executable instructions; and one or more processors in communication with the memory to execute the executable instructions so as to perform the following operations: generating an initial image feature corresponding to an object in an image; generating multiple region features corresponding to multiple regions of the image; and fusing the initial image feature with the multiple region features to obtain a target image feature of the object.
Another aspect of the embodiments of the present application discloses a non-transitory computer storage medium storing computer-readable instructions that, when executed, cause a processor to perform the following operations: generating an initial image feature corresponding to an object in an image; generating multiple region features corresponding to multiple regions of the image; and fusing the initial image feature with the multiple region features to obtain a target image feature of the object.
Description of the drawings
Hereinafter, exemplary and non-limiting embodiments of the application are described with reference to the accompanying drawings. These drawings are merely illustrative and are generally not drawn to exact scale. The same or similar elements are denoted by the same reference numerals in different drawings.
Fig. 1 is a flowchart of a method 1000 for extracting features according to an embodiment of the application;
Fig. 2 is a schematic diagram of pedestrian joints for illustrating the method according to an embodiment of the application;
Fig. 3 is a schematic diagram of a feature map extraction process for illustrating the method according to an embodiment of the application;
Fig. 4 is a schematic diagram of a feature map extraction process for illustrating the method according to an embodiment of the application;
Fig. 5 is a schematic diagram for illustrating a fusion process according to an embodiment of the application;
Fig. 6 is a schematic diagram for illustrating a fusion process according to an embodiment of the application;
Fig. 7 is a schematic diagram of an apparatus 700 for extracting features according to an embodiment of the application; and
Fig. 8 is a schematic diagram of a computer system 800 suitable for implementing embodiments of the present application.
Detailed description
Hereinafter, embodiments of the present application will be described in detail with reference to the detailed description and the accompanying drawings.
Fig. 1 is a flowchart of a method 1000 for extracting features according to an embodiment of the application. As shown in Fig. 1, the method 1000 includes: step S1100, generating an initial image feature corresponding to an object in an image; step S1200, generating multiple region features corresponding to multiple regions of the image; and step S1300, fusing the initial image feature with the multiple region features to obtain a target image feature of the object.
In step S1100, the operation of generating the initial image feature corresponding to the object in the image can be implemented by a convolutional neural network (CNN), which may include multiple perception modules. When an image containing the object is input to the CNN, the perception modules perform convolution and pooling on the input image to obtain a holistic feature of the image, and this holistic feature is the above-mentioned initial image feature. In the case where the object is a pedestrian, the initial image feature can be a feature reflecting the characteristics of the pedestrian as a whole. It should be noted that the embodiments of the present invention do not limit the object in the image; any other object, such as a vehicle or a face, can serve as the object.
In step S1200, multiple region features corresponding to the multiple regions of the image can be generated. In this step, region features for different regions of the image can be obtained. For example, region features corresponding to different regions can be obtained by performing operations such as convolution and pooling on the different regions of the image with a CNN, or feature extraction can first be performed on the image as a whole and the region features then obtained from the different regions of the holistic feature (for example, the initial image feature), but the application is not limited thereto. The multiple regions can be extracted based on the structure of the object. As shown in Fig. 2, in the case where the object is a pedestrian, multiple regions can be extracted according to the structure of the human body. For example, three regions can be extracted: a region 201 containing the pedestrian's head and shoulders (hereinafter also referred to as the head-and-shoulder part), a region 202 containing the pedestrian's upper body, and a region 203 containing the pedestrian's lower body, where the three regions may partially overlap each other.
In some embodiments, the image can be divided into the multiple regions according to extraction reference points. An extraction reference point can be a connection point between major parts of the object's structure. When the object is a pedestrian, the extraction reference points can be human-body keypoints of the pedestrian, for example, the joints of the human body. As shown in Fig. 2, 14 joint points 1-14 of the pedestrian can be chosen as extraction reference points, and the multiple regions are then extracted according to the coordinates of the 14 joint points 1-14. For example, the head-and-shoulder part contains joint points 1-4, whose coordinates are, for example, (5, 5), (5, 2), (9, 1) and (1, 1) respectively. The minimum (that is, 1, from coordinates (9, 1) and (1, 1)) and maximum (that is, 5, from coordinate (5, 5)) of the ordinates of joint points 1-4 can be chosen as the minimum and maximum ordinates of the region 201 containing the head-and-shoulder part, and the minimum (that is, 1, from coordinate (1, 1)) and maximum (that is, 9, from coordinate (9, 1)) of the abscissas of joint points 1-4 can be chosen as the minimum and maximum abscissas of the region 201, so that the finally obtained region 201 containing the head-and-shoulder part can be the rectangle whose four corners are at coordinates (1, 1), (9, 1), (9, 5) and (1, 5). Regions 202 and 203 can be obtained in a similar manner. In embodiments of the present invention, as shown in Fig. 2, joint point 1 can be the crown of the head, joint point 2 the neck, joint point 3 the left shoulder, joint point 4 the right shoulder, joint point 5 the left elbow, joint point 6 the left wrist, joint point 7 the right elbow, joint point 8 the right wrist, joint point 9 the left hip, joint point 10 the right hip, joint point 11 the left knee, joint point 12 the left foot, joint point 13 the right knee, and joint point 14 the right foot. The joint points can be chosen manually, or can be extracted, for example, by a convolutional neural network pre-trained by back-propagation: when the image is input to the convolutional neural network, its convolutional layers convolve the image, which makes it possible to include the features of the pedestrian's joint areas in a feature map while removing or zeroing the features of non-joint areas, and the network then outputs the feature map as a representation of the positions of the pedestrian's joint points.
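The rectangle construction described above (taking the minima and maxima of a part's joint-point abscissas and ordinates) amounts to an axis-aligned bounding box, which can be sketched as:

```python
def region_from_keypoints(points):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max)
    spanning a set of (x, y) keypoints, as described for region 201."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Joint points 1-4 of the head-and-shoulder part, coordinates from the text:
head_shoulder = [(5, 5), (5, 2), (9, 1), (1, 1)]
print(region_from_keypoints(head_shoulder))  # → (1, 1, 9, 5)
```

The corners (1, 1), (9, 1), (9, 5) and (1, 5) given in the text are exactly this box; regions 202 and 203 follow from their own joint subsets.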
After the multiple regions have been extracted from the image, the multiple region features respectively corresponding to the multiple regions of the image can be generated based on the initial image feature. For example, as shown in Fig. 3, an initial image feature 310 can be generated from an image 300 using a convolutional neural network, and region features respectively corresponding to the multiple regions, that is, multiple region features 321-323, can then be generated from the initial image feature 310, where the region features 321-323 can be feature maps or feature vectors. This step can be performed with a convolutional neural network through region-of-interest pooling (ROI pooling). For example, if the initial image feature 310 is a 96 × 96 feature map, it can be input to the convolutional neural network, and the ROI pooling layer of the network pools the areas of the initial image feature that correspond to the multiple regions to obtain a 24 × 24 feature for each region; these features serve as the multiple region features 321-323 respectively corresponding to the multiple regions. Note that the feature sizes "96 × 96", "24 × 24" and so on are merely illustrative, and the inventive concept is not limited thereto. In some embodiments, the multiple region features 321-323 respectively corresponding to the multiple regions can be extracted directly from the image 300 by a convolutional neural network based on the multiple regions. Generating a feature for a particular one of the multiple regions enables the generated region feature to be more discriminative, or finer, for that region.
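The ROI pooling step described here (pooling each region's area of a feature map down to a fixed size such as 24 × 24) can be sketched in miniature. The bin-splitting scheme below is one simple choice for illustration, not the patent's exact implementation:

```python
def roi_max_pool(fmap, roi, out_h, out_w):
    """Minimal ROI max pooling: split the ROI of a 2D feature map into an
    out_h x out_w grid of bins and take the max of each bin.
    roi = (x0, y0, x1, y1) in feature-map cell coordinates."""
    x0, y0, x1, y1 = roi
    out = []
    for i in range(out_h):
        # integer bin boundaries along y for output row i
        ry0 = y0 + (y1 - y0) * i // out_h
        ry1 = y0 + (y1 - y0) * (i + 1) // out_h
        row = []
        for j in range(out_w):
            rx0 = x0 + (x1 - x0) * j // out_w
            rx1 = x0 + (x1 - x0) * (j + 1) // out_w
            row.append(max(fmap[y][x]
                           for y in range(ry0, max(ry1, ry0 + 1))
                           for x in range(rx0, max(rx1, rx0 + 1))))
        out.append(row)
    return out

# A 4x4 feature map pooled over its full extent to 2x2:
fmap = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(roi_max_pool(fmap, (0, 0, 4, 4), 2, 2))  # → [[6, 8], [14, 16]]
```

Applied per region, this turns a 96 × 96 map into a fixed 24 × 24 feature regardless of each region's size, which is the property the text relies on.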
In another embodiment, step S1100 may include: performing convolution and pooling on the image to generate an intermediate image feature; and generating the initial image feature corresponding to the object in the image based on the intermediate image feature. The operation of performing convolution and pooling on the image to generate the intermediate image feature can be implemented with a CNN including multiple perception modules. Specifically, the image of the object can be input to the CNN, and the perception modules in the CNN then perform convolution and pooling on the input image to obtain a holistic feature of the image; this holistic feature is the intermediate image feature. The intermediate image feature can be a holistic feature of the image, and can be a feature map or a feature vector. After the intermediate image feature is obtained, convolution and pooling can be performed on it with a CNN to generate the initial image feature corresponding to the object in the image. Specifically, the intermediate image feature can be input to a CNN, and the perception modules in the CNN perform convolution and pooling on it to obtain a holistic feature of the intermediate image feature; this holistic feature is the above-mentioned initial image feature. Since the initial image feature is obtained by convolving and pooling the intermediate image feature with a convolutional neural network, the initial image feature may include features finer than those of the intermediate image feature.
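As a toy illustration of the convolution-and-pooling operations a perception module performs (a sketch only — real perception modules stack many learned, multi-channel layers), a single-channel "valid" convolution followed by 2 × 2 max pooling might look like:

```python
def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2D convolution (cross-correlation,
    as is conventional for CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

def max_pool2(fmap):
    """2x2 max pooling with stride 2."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[1.0] * 5 for _ in range(5)]
feat = max_pool2(conv2d_valid(img, [[1.0, 1.0], [1.0, 1.0]]))
print(feat)  # → [[4.0, 4.0], [4.0, 4.0]]
```

Chaining such stages — image → intermediate image feature → initial image feature — is what lets the later stage encode progressively finer, more abstract structure, as the paragraph above describes.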
In this embodiment, step S1200 may include: extracting multiple regions from the image; pooling the intermediate image feature based on the multiple regions to generate multiple intermediate region features respectively corresponding to the multiple regions; and generating the multiple region features respectively corresponding to the multiple regions based on the multiple intermediate region features.
For the step of extracting multiple regions from the image, the method described above with reference to Fig. 2 may be used. Then, based on the multiple regions, the intermediate image feature may be pooled to generate multiple intermediate region features respectively corresponding to the multiple regions, and multiple region features respectively corresponding to the multiple regions may be generated based on the multiple intermediate region features. As shown in Fig. 4, by pooling the intermediate image feature 340 with the pooling method described with reference to Fig. 3, multiple intermediate region features 331-333 respectively corresponding to the multiple regions can be obtained from the intermediate image feature 340; a perception module in a convolutional neural network then performs convolution and pooling on the intermediate region features 331-333 to obtain region features 321-323 respectively corresponding to the multiple regions. Specifically, the intermediate image feature 340 may be input into a convolutional neural network, in which a region-of-interest pooling layer pools the areas of the intermediate image feature 340 that correspond to the multiple regions; the resulting features serve as the intermediate region features 331-333. Multiple perception modules in the CNN then apply convolution and pooling to the intermediate region features 331-333, and the resulting features are the above-mentioned region features 321-323. Since the region features 321-323 are generated from the intermediate region features 331-333 through convolution and pooling, they are finer, or more discriminative, than the intermediate region features 331-333.
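The region-of-interest pooling step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's actual implementation: it assumes the feature map is a single-channel 2-D array, that each region is given as a hypothetical (top, left, height, width) box on that map, and that each box is pooled down to a fixed output grid by taking the maximum in each cell.

```python
import numpy as np

def roi_max_pool(feature_map, box, out_size=(12, 12)):
    """Max-pool the sub-window of `feature_map` given by `box`
    (top, left, height, width) down to a fixed `out_size` grid."""
    top, left, h, w = box
    region = feature_map[top:top + h, left:left + w]
    oh, ow = out_size
    rows = np.array_split(np.arange(region.shape[0]), oh)
    cols = np.array_split(np.arange(region.shape[1]), ow)
    out = np.empty(out_size, dtype=feature_map.dtype)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[i, j] = region[np.ix_(r, c)].max()
    return out

# 24x24 intermediate feature map; pool a "head-shoulder" box to 12x12
fmap = np.arange(24 * 24, dtype=float).reshape(24, 24)
head_shoulder = roi_max_pool(fmap, box=(0, 0, 12, 24), out_size=(12, 12))
print(head_shoulder.shape)  # (12, 12)
```

In a real network this layer would be followed by the learned convolution and pooling of the perception modules; the sketch only covers the pooling itself.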
After the initial image feature and the multiple region features respectively corresponding to the multiple regions have been obtained, in S1300 the initial image feature and the multiple region features may be fused to obtain the target image feature of the object. The region features may be fused with the initial image feature according to the positions of the corresponding regions in the original image. For example, where the object is a pedestrian, in the target image feature obtained by fusion, the region feature corresponding to the head-shoulder region may be fused with the head-shoulder portion of the initial image feature to form the head-shoulder feature of the target image feature; the region feature corresponding to the upper-body region may be fused with the upper-body portion of the initial image feature to form the upper-body feature of the target image feature; and the region feature corresponding to the lower-body region may be fused with the lower-body portion of the initial image feature to form the lower-body feature of the target image feature. Since the region features at least partly contain finer or more discriminative features, obtaining the target image feature by fusing the region features with the initial image feature can effectively improve accuracy in applications such as image recognition. During fusion, a region feature may be used directly as the feature of the corresponding area in the target image feature; alternatively, only part of the region feature may be used for the corresponding area, with the rest of that area taken from the initial image feature. For example, the head feature within the head-shoulder region feature may serve as the head feature of the target image feature, while the shoulder feature of the target image feature is taken from the initial image feature.
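As a toy illustration of the position-based fusion just described, the following numpy sketch writes a region feature into the window of the initial image feature that corresponds to the region's position. The helper name and the box convention are hypothetical; using only a slice of the region feature, as in the head-versus-shoulder example, would be the same operation applied to a sub-window.

```python
import numpy as np

def paste_region(initial_feat, region_feat, box):
    """Use `region_feat` as the feature of the corresponding area of the
    target image feature; `box` is the (top, left) of that area."""
    top, left = box
    h, w = region_feat.shape
    target = initial_feat.copy()
    target[top:top + h, left:left + w] = region_feat
    return target

initial = np.zeros((12, 12))        # initial image feature (placeholder values)
head_shoulder = np.ones((4, 12))    # finer region feature for the head-shoulder area
target = paste_region(initial, head_shoulder, box=(0, 0))
print(target[0, 0], target[6, 0])  # 1.0 0.0
```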
In one embodiment, fusing the initial image feature and the multiple region features to obtain the target image feature of the object includes: fusing the multiple region features into a fusion feature, and fusing the fusion feature with the initial image feature into the target image feature of the object. For example, as shown in Fig. 5, where the object is a pedestrian, the region feature 321 corresponding to the head-shoulder region, the region feature 322 corresponding to the upper-body region, and the region feature 323 corresponding to the lower-body region may first be fused into a fusion feature 400, and the fusion feature 400 may then be fused with the initial image feature 310 into the target image feature 500 of the object.
In one embodiment, fusing the multiple region features into the fusion feature includes: in a first overlap region of the multiple region features, selecting the feature with the highest discriminability as the feature corresponding to the first overlap region; and generating the fusion feature based on the feature corresponding to the first overlap region. The multiple region features may partially overlap one another, and the overlapping area may be referred to as the first overlap region. For example, as shown in Fig. 6, the region 201 containing the pedestrian's head and shoulders partially overlaps the region 202 containing the pedestrian's upper body, and the region 202 containing the pedestrian's upper body partially overlaps the region 203 containing the pedestrian's lower body. The region features respectively corresponding to the regions 201-203 are schematically illustrated as 321-323 in Fig. 6. In the region features 321-323, the features are represented numerically, and a larger value indicates higher discriminability, that is, a feature that is more distinguishable from other features. When fusing the region features 321-323, the values of two partially overlapping region features within their overlap (that is, the first overlap region) may first be compared, and the larger value taken as the feature of that overlap. Taking the region features 321 and 322 as an example, the values in the dashed box of the region feature 321 are significantly larger than those in the dashed box of the region feature 322, so during fusion the values in the dashed box of the region feature 321 are used as the feature of that area in the fused result, as shown by the dashed box of the fusion feature 400 in Fig. 6. With this method, the fusion feature 400 shown in Fig. 6 can be obtained. Since the fusion retains the features with higher discriminability and discards those with lower discriminability, this competitive strategy causes the finally obtained target image feature of the object to contain more discriminative features. The above fusion may be performed by a fusion unit in a convolutional neural network. Specifically, the multiple region features to be fused, or the fusion feature and the initial image feature, are input into the fusion unit of the convolutional neural network, which outputs, for each overlap region, the larger of the input features as the feature of that overlap region. The convolutional neural network may further include an inner-product layer for converting the output of the fusion layer into a feature map usable for subsequent fusion.
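The competitive selection in the first overlap region can be modelled as an element-wise maximum over region features placed at their image positions. The sketch below is one interpretation of the Fig. 6 description, under the assumption that "highest discriminability" corresponds to the largest feature value; positions covered by no region are filled with zeros purely for illustration.

```python
import numpy as np

def competitive_fuse(feat_a, feat_b, pos_a, pos_b, canvas_shape):
    """Place two region features on a shared canvas at their (top, left)
    positions and keep the element-wise maximum wherever they overlap:
    the competitive strategy, where the more discriminative value wins."""
    canvas = np.full(canvas_shape, -np.inf)
    for feat, (top, left) in ((feat_a, pos_a), (feat_b, pos_b)):
        h, w = feat.shape
        window = canvas[top:top + h, left:left + w]
        canvas[top:top + h, left:left + w] = np.maximum(window, feat)
    canvas[canvas == -np.inf] = 0.0   # areas covered by neither region
    return canvas

hs = np.full((4, 6), 5.0)   # head-shoulder region feature (rows 0-3)
ub = np.full((6, 6), 3.0)   # upper-body region feature (rows 2-7)
fused = competitive_fuse(hs, ub, pos_a=(0, 0), pos_b=(2, 0), canvas_shape=(8, 6))
# overlap rows 2-3 keep the larger value 5.0; rows 4-7 keep 3.0
```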
In one embodiment, fusing the fusion feature with the initial image feature into the target image feature of the object includes: in a second overlap region where the fusion feature and the initial image feature overlap each other, selecting the feature with the highest discriminability as the feature corresponding to the second overlap region; and generating the target image feature of the object based on the feature corresponding to the second overlap region. When the fusion feature and the initial image feature are fused, the fusion feature overlaps the initial image feature because it is obtained based on the region features; the overlapping area may be referred to as the second overlap region. Accordingly, the same method as the fusion method described with reference to Fig. 6 may be used to fuse the fusion feature with the initial image feature: in the second overlap region, the feature with the larger value (that is, the higher discriminability) between the fusion feature and the initial image feature is taken as the feature of the fused result at that area, and the fused result serves as the target image feature of the object.
Compared with conventional methods, the method according to the embodiments of the present application can obtain a finer and more discriminative target image feature. This is because, in the process of obtaining the target image feature, the method not only extracts features from the image containing the object as a whole but also extracts features from multiple regions within the image, so that the detail features of those regions are at least partly retained. In addition, the method employs a competitive strategy during fusion: it retains features of high discriminability and discards features of low discriminability, so that the finally obtained target image feature is more discriminative. This is advantageous for applications such as object recognition and image retrieval. For example, in a pedestrian retrieval application, suppose the pedestrians in two pictures both wear a white shirt and black trousers. A conventional object feature extraction method extracts features from the image as a whole, so the subtle differences between the two pedestrians are very likely to be ignored during feature extraction. With the method according to the embodiments of the present application, because features are extracted from multiple regions of the picture, the detail features of those regions can be extracted: for example, in the region containing a pedestrian's face, the facial detail features of the pedestrian can be extracted, and during fusion the highly discriminative facial features are retained, so the two pedestrians can be distinguished by the differences in their facial details and a correct result obtained.
It should be noted that the features in the present application, such as the intermediate image feature, the initial image feature, the intermediate region features, the region features and the fusion feature, may be expressed in the form of feature maps or feature vectors. In addition, although the description uses a pedestrian as the example object, the present application is not limited thereto; the object may also be, for example, a face or a vehicle.
A concrete application of the feature extraction method according to the embodiments of the present application, with a pedestrian as the object, is described below. When the object is a pedestrian, in step S1100 convolution and pooling may be performed on the pedestrian image by a CNN to generate an intermediate image feature (for example, of size 24×24); an initial image feature is then generated based on the intermediate image feature, for example by performing further convolution and pooling on the intermediate image feature by the CNN to obtain the initial image feature (for example, of size 12×12). Then, in step S1200, multiple regions are first extracted from the pedestrian image, for example regions respectively corresponding to the pedestrian's head-shoulder, upper body and lower body; using the CNN, the intermediate image feature is pooled based on these regions to generate multiple intermediate region features (for example, of size 12×12) respectively corresponding to the head-shoulder, the upper body and the lower body. Region features corresponding to the head-shoulder, the upper body and the lower body are then generated from the respective intermediate region features; specifically, the CNN may perform convolution and pooling on the intermediate region features to generate the region features corresponding to the head-shoulder, the upper body and the lower body. Finally, in step S1300, the region feature corresponding to the upper body, the region feature corresponding to the lower body and the region feature corresponding to the head-shoulder are fused into a fusion feature, and the fusion feature is fused with the initial image feature into the target image feature of the pedestrian.
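To make the quoted sizes concrete, the following sketch mirrors the spatial dimensions only (a 24×24 intermediate feature and a 12×12 initial feature), with a plain 2×2 max pooling standing in for each convolution-plus-pooling stage; an actual network would of course interleave learned convolutions, and the 48×48 input size is an assumption chosen to make the arithmetic work out.

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling with stride 2: halves each spatial dimension."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# 48x48 input -> 24x24 intermediate feature (S1100) -> 12x12 initial feature
image = np.random.rand(48, 48)
intermediate = max_pool2(image)
initial = max_pool2(intermediate)
print(intermediate.shape, initial.shape)  # (24, 24) (12, 12)
```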
In another embodiment, the above method of extracting features of a pedestrian may further include extracting and fusing features of the pedestrian's limbs. Specifically, in step S1100 convolution and pooling may be performed on the pedestrian image by a CNN to generate an intermediate image feature (for example, of size 24×24); convolution and pooling are then applied to the intermediate image feature, for example by the CNN, to generate a convolved intermediate image feature (for example, of size 12×12); after this, convolution and pooling are applied to the convolved intermediate image feature to generate an initial image feature (for example, of size 6×6). Then, in step S1200, regions respectively corresponding to the pedestrian's head-shoulder, upper body and lower body are first extracted from the pedestrian image; using the CNN, the intermediate image feature is pooled based on these regions to generate multiple intermediate region features (for example, of size 12×12) respectively corresponding to the head-shoulder, the upper body and the lower body. This step may further include extracting, from the image, multiple regions respectively corresponding to the pedestrian's left arm, right arm, left leg and right leg, for example the regions 204-207 in Fig. 2; this region extraction may use the same method as that described with reference to Fig. 2. Then, based on the multiple regions respectively corresponding to the left arm, the right arm, the left leg and the right leg, the convolved intermediate image feature is pooled, for example by the CNN, to generate multiple intermediate region features (for example, of size 12×12) respectively corresponding to the left arm, the right arm, the left leg and the right leg. Next, convolution and pooling are applied, for example by the CNN, to the multiple intermediate region features respectively corresponding to the head-shoulder, the upper body, the lower body, the left arm, the right arm, the left leg and the right leg, generating convolved intermediate region features (for example, of size 6×6) corresponding to each of them. After these convolved intermediate region features have been obtained, the fusion method described with reference to Fig. 6 may be used to fuse the convolved intermediate region features respectively corresponding to the left leg and the right leg into a leg fusion feature; to fuse the convolved intermediate region features respectively corresponding to the left arm and the right arm into an arm fusion feature; to fuse the arm fusion feature with the convolved intermediate region feature corresponding to the upper body into the region feature corresponding to the upper body; and to fuse the leg fusion feature with the convolved intermediate region feature corresponding to the lower body into the region feature corresponding to the lower body; the convolved intermediate region feature corresponding to the head-shoulder is used as the region feature corresponding to the head-shoulder. Then, in step S1300, the fusion method described with reference to Fig. 6 may be used to fuse the region feature corresponding to the upper body, the region feature corresponding to the lower body and the region feature corresponding to the head-shoulder into a fusion feature, and the fusion feature is then fused with the initial image feature into the target image feature of the pedestrian.
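The limb-fusion order above can be summarised in a small sketch. For simplicity it assumes that all convolved intermediate region features are 6×6 maps already aligned on a common grid, so that the Fig. 6 competitive fusion reduces to an element-wise maximum; these are illustrative assumptions, not the patent's exact layout.

```python
import numpy as np

def fuse_max(*feats):
    """Competitive fusion of aligned region features: keep, at every
    position, the most discriminative (largest) value."""
    return np.maximum.reduce(feats)

rng = np.random.default_rng(0)
# hypothetical 6x6 convolved intermediate region features
left_leg, right_leg = rng.random((6, 6)), rng.random((6, 6))
left_arm, right_arm = rng.random((6, 6)), rng.random((6, 6))
upper_body, lower_body, head_shoulder = (rng.random((6, 6)) for _ in range(3))

leg_fusion = fuse_max(left_leg, right_leg)        # legs -> leg fusion feature
arm_fusion = fuse_max(left_arm, right_arm)        # arms -> arm fusion feature
upper_region = fuse_max(arm_fusion, upper_body)   # region feature for the upper body
lower_region = fuse_max(leg_fusion, lower_body)   # region feature for the lower body
head_region = head_shoulder                       # used as-is for the head-shoulder
```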
Fig. 7 schematically illustrates a device 700 for extracting object features according to an embodiment of the present application. The device includes: an image feature generation module 710, which generates an initial image feature corresponding to an object in an image; a region feature generation module 720, which generates multiple region features corresponding to multiple regions of the image; and a fusion module 730, which fuses the initial image feature and the multiple region features to obtain a target image feature of the object.
In one embodiment, the image feature generation module 710 includes: an intermediate image feature generation submodule 711, which performs convolution and pooling on the image to generate an intermediate image feature; and an initial image feature generation submodule 712, which generates, based on the intermediate image feature, the initial image feature corresponding to the object in the image.
In one embodiment, the region feature generation module 720 includes: a region extraction submodule 721, which extracts multiple regions from the image; an intermediate region feature generation submodule 722, which pools the intermediate image feature based on the multiple regions to generate multiple intermediate region features respectively corresponding to the multiple regions; and a region feature generation submodule 723, which generates, based on the multiple intermediate region features, multiple region features respectively corresponding to the multiple regions.
In one embodiment, the fusion module 730 includes: a first fusion submodule 731, which fuses the multiple region features into a fusion feature; and a second fusion submodule 732, which fuses the fusion feature with the initial image feature into the target image feature of the object.
In one embodiment, the first fusion submodule 731 is further configured to: in a first overlap region of the multiple region features, select the feature with the highest discriminability as the feature corresponding to the first overlap region; and generate the fusion feature based on the feature corresponding to the first overlap region.
In one embodiment, the second fusion submodule 732 is further configured to: in a second overlap region where the fusion feature and the initial image feature overlap each other, select the feature with the highest discriminability as the feature corresponding to the second overlap region; and generate the target image feature of the object based on the feature corresponding to the second overlap region.
In one embodiment, the region extraction submodule 721 is configured to divide the image into multiple regions according to extraction reference points.
In one embodiment, the object includes a pedestrian, a face or a vehicle.
In one embodiment, when the object is a pedestrian, the extraction reference points include human-body anchor points of the pedestrian.
In one embodiment, when the object is a pedestrian, the region extraction submodule 721 is configured to extract, from the image, regions respectively corresponding to the pedestrian's head-shoulder, upper body and lower body; and the intermediate region feature generation submodule 722 is configured to pool the intermediate image feature based on the multiple regions respectively corresponding to the head-shoulder, the upper body and the lower body, generating multiple intermediate region features respectively corresponding to the head-shoulder, the upper body and the lower body.
In one embodiment, when the object is a pedestrian, the region feature generation submodule 723 is configured to generate the region features corresponding to the head-shoulder, the upper body and the lower body based on the respective multiple intermediate region features.
In one embodiment, when the object is a pedestrian, the first fusion submodule 731 is configured to fuse the region feature corresponding to the upper body, the region feature corresponding to the lower body and the region feature corresponding to the head-shoulder into the fusion feature.
In one embodiment, when the object is a pedestrian, the initial image feature generation submodule 712 is configured to: perform convolution and pooling on the intermediate image feature to generate a convolved intermediate image feature; and perform convolution and pooling on the convolved intermediate image feature to generate the initial image feature.
In one embodiment, when the object is a pedestrian, the region extraction submodule 721 is further configured to extract, from the image, multiple regions respectively corresponding to the pedestrian's left arm, right arm, left leg and right leg.
In one embodiment, when the object is a pedestrian, the region feature generation submodule 723 is configured to: pool the convolved intermediate image feature based on the multiple regions respectively corresponding to the left arm, the right arm, the left leg and the right leg, generating multiple intermediate region features respectively corresponding to the left arm, the right arm, the left leg and the right leg; perform convolution and pooling on the multiple intermediate region features respectively corresponding to the head-shoulder, the upper body, the lower body, the left arm, the right arm, the left leg and the right leg, generating convolved intermediate region features corresponding to each of them; fuse the convolved intermediate region features respectively corresponding to the left leg and the right leg into a leg fusion feature; fuse the convolved intermediate region features respectively corresponding to the left arm and the right arm into an arm fusion feature; fuse the arm fusion feature with the convolved intermediate region feature corresponding to the upper body into the region feature corresponding to the upper body; fuse the leg fusion feature with the convolved intermediate region feature corresponding to the lower body into the region feature corresponding to the lower body; and use the convolved intermediate region feature corresponding to the head-shoulder as the region feature corresponding to the head-shoulder.
In one embodiment, a feature includes a feature map or a feature vector.
As will be understood by those of ordinary skill in the art, the above device 700 for extracting object features may be implemented in the form of an integrated circuit (IC), including but not limited to a digital signal processor, a graphics processing IC, an image processing IC, a digital audio processing IC, and the like. Under the teaching provided herein, those of ordinary skill in the art can decide which hardware or software to use to implement the device 700 for extracting object features. For example, the present application may be implemented as a storage medium storing computer-executable instructions, where the instructions, when run by a computer, implement the above device 700 for extracting object features and realize its functions. The device 700 for extracting object features of the present application may also be implemented as a computer system, where the computer system includes a memory storing computer-executable instructions and a processor in communication with the memory, and the processor runs the executable instructions to realize the functions of the device 700 for extracting object features described above with reference to Fig. 7.
Referring now to Fig. 8, a structural diagram of a computer system 800 suitable for implementing the embodiments of the present application is shown. The computer system 800 may include a processing unit (such as a central processing unit (CPU) 801 or a graphics processing unit (GPU)), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802 and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components can be connected to the I/O interface 805: an input section 806 including a keyboard, a mouse and the like; an output section 807 including a cathode-ray tube (CRT), a liquid-crystal display (LCD), a loudspeaker and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 can perform communication processing via a network such as the Internet. A drive 810 can also be connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disk or a semiconductor memory, can be mounted on the drive 810 so that a computer program read from it can be installed into the storage section 808 as needed.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that shown in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The units or modules involved in the embodiments of the present application may be implemented by software or by hardware. The described units or modules may also be provided in a processor, and the names of these units or modules do not constitute a limitation on the units or modules themselves.
The above description is merely of the exemplary embodiments of the present application and an explanation of the technical principles applied. Those skilled in the art should appreciate that the scope of the present application is not limited to technical solutions formed by the specific combinations of the above technical features; without departing from the inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example a technical solution formed by replacing the above features with technical features having similar functions disclosed herein.
Claims (10)
1. a kind of method for extracting feature, including:
Generation initial pictures feature corresponding with the object in image;
Generation multiple regions feature corresponding with the multiple regions of described image;And
The initial pictures feature and multiple provincial characteristics are merged, obtain the target image characteristics of the object.
2. the method as described in claim 1 generates initial pictures feature corresponding with the object in image and includes:
Convolution and pond are carried out to described image, generate intermediate image feature;And
Initial pictures feature corresponding with the object in described image is generated based on the intermediate image feature.
3. method as claimed in claim 2 generates multiple regions feature packet corresponding with the multiple regions of described image
It includes:
Multiple regions are extracted from described image;
Based on the multiple region, pond is carried out to the intermediate image feature, generation is corresponding with the multiple region respectively
Multiple intermediate region features and
Multiple regions feature corresponding with the multiple region respectively is generated based on the multiple intermediate region feature.
4. the method as described in any in claim 1-3, wherein, by the initial pictures feature and multiple provincial characteristics
It is merged, the target image characteristics for obtaining the object include:
Multiple provincial characteristics are fused to fusion feature;And
By the target image characteristics of the fusion feature and the initial pictures Fusion Features for the object.
5. method as claimed in claim 4, wherein, multiple provincial characteristics are fused to fusion feature and are included:
In the first overlapping region in multiple provincial characteristics, feature of the selection with highest identification, as described the
The corresponding feature in one overlapping region;And
Based on the corresponding feature in first overlapping region, the fusion feature is generated.
6. a kind of device for extracting feature, including:
Characteristics of image generation module generates initial pictures feature corresponding with the object in image;
Provincial characteristics generation module generates multiple regions feature corresponding with the multiple regions of described image;And
The initial pictures feature and multiple provincial characteristics are merged, obtain the target of the object by Fusion Module
Characteristics of image.
7. device as claimed in claim 6, wherein, described image feature generation module includes:
Intermediate image feature generates submodule, carries out convolution and pond to described image, generates intermediate image feature;And
Initial pictures feature generates submodule, corresponding with the object in described image just based on intermediate image feature generation
Beginning characteristics of image.
8. The apparatus of claim 7, wherein the region feature generation module comprises:
a region extraction submodule configured to extract multiple regions from the image;
an intermediate region feature generation submodule configured to pool the intermediate image feature based on the multiple regions, to generate multiple intermediate region features respectively corresponding to the multiple regions; and
a region feature generation submodule configured to generate, based on the multiple intermediate region features, the multiple region features respectively corresponding to the multiple regions.
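Claim 8's per-region pooling of the shared intermediate feature map resembles ROI pooling. A NumPy sketch, under two assumptions the claim does not fix: regions are half-open `(y0, y1, x0, x1)` boxes in feature-map coordinates, and the pooling operation is max pooling.

```python
import numpy as np

def region_pool(feature_map, regions):
    """Max-pool an intermediate feature map inside each region box.

    feature_map: (H, W, C) intermediate image feature (claim 7).
    regions: list of (y0, y1, x0, x1) half-open boxes in feature-map
             coordinates (an assumed encoding for this sketch).
    Returns one pooled C-dim vector per region, i.e. claim 8's
    intermediate region features.
    """
    pooled = []
    for y0, y1, x0, x1 in regions:
        patch = feature_map[y0:y1, x0:x1]        # (h, w, C) sub-window
        pooled.append(patch.max(axis=(0, 1)))    # collapse spatial dims
    return pooled

fm = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
feats = region_pool(fm, [(0, 1, 0, 1), (0, 2, 0, 2)])
```

The first box covers only cell (0, 0); the second covers the whole map, so its pooled vector is the per-channel maximum.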
9. The apparatus of any one of claims 6 to 8, wherein the fusion module comprises:
a first fusion submodule configured to fuse the multiple region features into a fused feature; and
a second fusion submodule configured to fuse the fused feature and the initial image feature into the target image feature of the object.
10. A system for extracting features, comprising:
a memory storing executable instructions; and
one or more processors in communication with the memory, configured to execute the executable instructions so as to perform the following operations:
generating an initial image feature corresponding to an object in an image;
generating multiple region features corresponding to multiple regions of the image; and
fusing the initial image feature and the multiple region features to obtain the target image feature of the object.
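The overall pipeline of claims 9–10 (global feature plus region features, fused into the target feature) can be sketched end to end. The claims do not fix the fusion operations; averaging the region features and concatenating with the global feature are common choices, used here purely as assumptions.

```python
import numpy as np

def extract_target_feature(global_feat, region_feats):
    """Fuse the initial (whole-image) feature with region features.

    Region features are averaged into one fused feature (first fusion
    submodule), then concatenated with the global feature (second
    fusion submodule). Both ops are illustrative assumptions; the
    claims leave the fusion method open.
    """
    fused_regions = np.mean(np.stack(region_feats), axis=0)
    return np.concatenate([global_feat, fused_regions])

g = np.array([1.0, 2.0])                               # initial image feature
r = [np.array([3.0, 4.0]), np.array([5.0, 6.0])]       # region features
target = extract_target_feature(g, r)                  # -> [1., 2., 4., 5.]
```

Concatenation preserves the global context while the averaged region features retain local detail, matching the abstract's stated goal of keeping per-region minutiae in the final object feature.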
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710195256.6A CN108229492B (en) | 2017-03-29 | 2017-03-29 | Method, device and system for extracting features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108229492A true CN108229492A (en) | 2018-06-29 |
CN108229492B CN108229492B (en) | 2020-07-28 |
Family
ID=62657374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710195256.6A Active CN108229492B (en) | 2017-03-29 | 2017-03-29 | Method, device and system for extracting features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229492B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631413A (en) * | 2015-12-23 | 2016-06-01 | 中通服公众信息产业股份有限公司 | Cross-scene pedestrian searching method based on depth learning |
- 2017-03-29: CN CN201710195256.6A filed; granted as CN108229492B (status: Active)
Non-Patent Citations (2)
Title |
---|
JIONG ZHAO et al.: "Fusion of Global and Local Features Using KCCA for Automatic Target Recognition", 2009 Fifth International Conference on Image and Graphics * |
LIANG ZHENG et al.: "Good Practice in CNN Feature Transfer", arXiv:1604.00133v1 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109920016A (en) * | 2019-03-18 | 2019-06-21 | 北京市商汤科技开发有限公司 | Image generating method and device, electronic equipment and storage medium |
WO2020230244A1 (en) * | 2019-05-13 | 2020-11-19 | 日本電信電話株式会社 | Training method, training program, and training device |
JPWO2020230244A1 (en) * | 2019-05-13 | 2020-11-19 | ||
JP7173309B2 (en) | 2019-05-13 | 2022-11-16 | 日本電信電話株式会社 | LEARNING METHOD, LEARNING PROGRAM AND LEARNING APPARATUS |
US12094189B2 (en) | 2019-05-13 | 2024-09-17 | Nippon Telegraph And Telephone Corporation | Learning method, learning program, and learning device to accurately identify sub-objects of an object included in an image |
CN110705345A (en) * | 2019-08-21 | 2020-01-17 | 重庆特斯联智慧科技股份有限公司 | Pedestrian re-identification method and system based on deep learning |
WO2021196718A1 (en) * | 2020-03-30 | 2021-10-07 | 北京市商汤科技开发有限公司 | Key point detection method and apparatus, electronic device, storage medium, and computer program |
Also Published As
Publication number | Publication date |
---|---|
CN108229492B (en) | 2020-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Xie et al. | Shape matching and modeling using skeletal context | |
CN108229492A (en) | 2018-06-29 | Method, apparatus and system for extracting features | |
US6031539A (en) | Facial image method and apparatus for semi-automatically mapping a face on to a wireframe topology | |
CN111325806A (en) | Clothing color recognition method, device and system based on semantic segmentation | |
CN111080670B (en) | Image extraction method, device, equipment and storage medium | |
US20050234323A1 (en) | Gaze guidance degree calculation system, gaze guidance degree calculation program, storage medium, and gaze guidance degree calculation method | |
JP2004534584A5 (en) | ||
CN107993228B (en) | Vulnerable plaque automatic detection method and device based on cardiovascular OCT (optical coherence tomography) image | |
CN110263768A (en) | A kind of face identification method based on depth residual error network | |
CN110263605A (en) | Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation | |
US20150269759A1 (en) | Image processing apparatus, image processing system, and image processing method | |
CN105869217B (en) | A kind of virtual real fit method | |
CN110276408A (en) | Classification method, device, equipment and the storage medium of 3D rendering | |
CN113393546B (en) | Fashion clothing image generation method based on clothing type and texture pattern control | |
JP2018147313A (en) | Object attitude estimating method, program and device | |
Bang et al. | Estimating garment patterns from static scan data | |
Zheng et al. | Image-based clothes changing system | |
CN112700462A (en) | Image segmentation method and device, electronic equipment and storage medium | |
CN114693570A (en) | Human body model image fusion processing method, device and storage medium | |
GB2503331A (en) | Aligning garment image with image of a person, locating an object in an image and searching for an image containing an object | |
Roy et al. | LGVTON: a landmark guided approach for model to person virtual try-on | |
CN114708617A (en) | Pedestrian re-identification method and device and electronic equipment | |
JP2000099741A (en) | Method for estimating personal three-dimensional posture by multi-eye image processing | |
CN110309729A (en) | Tracking and re-detection method based on anomaly peak detection and twin network | |
CN109840951A (en) | The method and device of augmented reality is carried out for plane map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||