CN107506738A - Feature extraction method, image recognition method, device and electronic device - Google Patents
- Publication number
- CN107506738A CN107506738A CN201710766531.5A CN201710766531A CN107506738A CN 107506738 A CN107506738 A CN 107506738A CN 201710766531 A CN201710766531 A CN 201710766531A CN 107506738 A CN107506738 A CN 107506738A
- Authority
- CN
- China
- Prior art keywords
- image
- target image
- color
- feature vector
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
Abstract
A feature extraction method, the method comprising: obtaining a target image; dividing the target image into at least two regions according to the upright direction of a person in the target image; extracting color information of each region in multiple color spaces; and combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image. The present invention also provides a feature extraction device, an image recognition method and device, and an electronic device. The present invention can accurately extract image features and use those features for image recognition and tracking.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a feature extraction method, an image recognition method, a device, and an electronic device.
Background
With the development of computer technology, image processing techniques are used in more and more applications, such as recognizing faces through image recognition technology and tracking people through image tracking technology, both of which require identifying people. In person recognition, the posture and motion of the human body often change greatly, so many features cannot be used. Feature extraction is an important technical link in image processing: it uses a computer to extract image information and provides the basis for further processing such as image matching, image tracking, and person recognition. If the extracted features are not sufficiently complete, rich, and accurate, they cannot effectively convey the image information, which affects subsequent image processing, for example by reducing the accuracy and efficiency of subsequent image recognition and image tracking. Therefore, a method for accurately extracting image features is urgently needed.
Summary of the invention
In view of this, it is necessary to provide a feature extraction method, an image recognition method, a device, and an electronic device that can accurately extract image features and use those features for image recognition and tracking.
One aspect of the present invention provides a feature extraction method, the feature extraction method comprising:
obtaining a target image;
dividing the target image into at least two regions according to the upright direction of a person in the target image;
extracting color information of each region in multiple color spaces; and
combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image.
In one possible implementation, extracting the color information of each region in the multiple color spaces comprises:
extracting, according to a color histogram, the information of each color dimension of each region in the multiple color spaces and representing it with a feature vector.
In one possible implementation, combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image comprises:
combining, for each region, the information of each color dimension of each color space to obtain a combined feature vector of each color space of each region;
splicing the combined feature vectors of the color spaces of each region to obtain a combined feature vector of each region in the multiple color spaces; and
splicing the combined feature vectors of the regions in the multiple color spaces to obtain a combined feature vector of the target image in the multiple color spaces of the at least two regions.
In one possible implementation, the feature extraction method further comprises:
performing dimensionality reduction on the combined feature vector representing the target image.
Another aspect of the present invention provides an image recognition method, the image recognition method comprising:
obtaining a target image;
dividing the target image into at least two regions according to the upright direction of a person in the target image;
extracting color information of each region in multiple color spaces;
combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image;
calculating the distance between the combined feature vector representing the target image and the feature vector of each sample image in an image library to judge the similarity between the target image and the sample images in the image library;
selecting the sample image in the image library with the highest similarity to the target image as the image recognition result; and
tracking the target image according to the image recognition result.
In one possible implementation, before calculating the distance between the combined feature vector representing the target image and the feature vector of each sample image in the image library to judge the similarity between the target image and the sample images in the image library, the method further comprises:
performing dimensionality reduction on the combined feature vector representing the target image.
Another aspect of the present invention provides a feature extraction device, the device comprising:
an acquisition module for obtaining a target image;
a region division module for dividing the target image into at least two regions according to the upright direction of a person in the target image;
a multiple-color-space processing module for extracting color information of each region in multiple color spaces; and
a feature representation module for combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image.
In one possible implementation, the multiple-color-space processing module is specifically configured to: extract, according to a color histogram, the information of each color dimension of each region in the multiple color spaces and represent it with a feature vector.
In one possible implementation, the feature representation module is specifically configured to:
combine, for each region, the information of each color dimension of each color space to obtain a combined feature vector of each color space of each region;
splice the combined feature vectors of the color spaces of each region to obtain a combined feature vector of each region in the multiple color spaces; and
splice the combined feature vectors of the regions in the multiple color spaces to obtain a combined feature vector of the target image in the multiple color spaces of the at least two regions.
In one possible implementation, the feature extraction device further comprises:
a dimensionality reduction module for performing dimensionality reduction on the combined feature vector representing the target image.
A further aspect of the present invention provides an image recognition device, the device comprising:
an acquisition module for obtaining a target image;
a region division module for dividing the target image into at least two regions according to the upright direction of a person in the target image;
a multiple-color-space processing module for extracting color information of each region in multiple color spaces;
a feature representation module for combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image; and
an image recognition module for calculating the distance between the dimension-reduced combined feature vector representing the target image and the feature vector of each sample image in an image library to judge the similarity between the images, selecting the sample image in the image library with the highest similarity to the target image as the image recognition result, and tracking the target image according to the image recognition result.
In one possible implementation, the image recognition device further comprises:
a dimensionality reduction module for performing dimensionality reduction on the combined feature vector representing the target image.
A further aspect of the present invention provides an electronic device comprising a memory and a processor, the memory being configured to store at least one instruction, and the processor being configured to implement the above feature extraction method and/or the above image recognition method when executing the program stored in the memory.
Through the above technical solution, in the present invention an electronic device can obtain a target image; divide the target image into at least two regions according to the upright direction of a person in the target image; extract color information of each region in multiple color spaces; and combine the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image. By dividing the target image into regions and extracting color information of multiple color spaces in each region, the color of the image is expressed from multiple dimensions, fully reflecting the features and color distribution of the different parts of the image, so that the image can be represented accurately under any illumination condition. Feature extraction over multiple regions and multiple color spaces thus achieves the goal of extracting image features accurately.
The present invention divides the image into spatial regions along the upright direction of the person in the target image. In each region, the color information of multiple color spaces is extracted separately: the multi-dimensional histogram of each color space is flattened into one dimension, the histograms of the different color spaces of the region are concatenated, and the histograms of all regions are then concatenated in turn to obtain a combined multi-region, multi-color-space feature vector. This overcomes limitations in the tracking and recognition of target objects, increases the information content of the color-space features, and at the same time efficiently captures the spatial information of the target object.
The combined multi-region, multi-color-space feature vector is then reduced in dimensionality to reduce computation and improve robustness, after which the similarity between images is calculated from the combined feature vectors in order to recognize and track the target image.
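The recognition step just described, computing distances between combined feature vectors and selecting the most similar sample, can be sketched as follows. The Euclidean metric and all names here are assumptions for illustration; the patent does not fix a particular distance.

```python
import math

def euclidean(u, v):
    """Distance between two combined feature vectors; the text leaves
    the metric open, and Euclidean distance is one common choice."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_similar(target_vec, library):
    """Return the name of the sample image whose feature vector is
    nearest to the target's, i.e. the recognition result to track."""
    return min(library, key=lambda name: euclidean(target_vec, library[name]))

# toy image library of (already dimension-reduced) sample vectors
library = {"sample_a": [1.0, 0.0, 0.0], "sample_b": [0.0, 1.0, 0.0]}
print(most_similar([0.9, 0.1, 0.0], library))  # → sample_a
```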
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a feature extraction method provided by an embodiment of the present invention;
Fig. 2 is an example of dividing a target image into at least two regions;
Fig. 3 is a block diagram of a feature extraction device provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of an electronic device provided by an embodiment of the present invention.
The following detailed description will further illustrate the present invention with reference to the above drawings.
Detailed description of the embodiments
To make the above objects, features, and advantages of the present invention clearer, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in the description of the present invention are only for the purpose of describing specific embodiments and are not intended to limit the present invention.
Embodiment
Fig. 1 is a flowchart of a feature extraction method provided by an embodiment of the present invention. Depending on requirements, the order of the steps in the flowchart may be changed and some steps may be omitted. As shown in Fig. 1, the feature extraction method may include the following steps:
S10: Obtain a target image.
The target image is the image from which features are to be extracted. For example, to extract features from an image captured by a camera, the captured image is obtained as the target image. The target image may be the original image captured by the camera, or an image obtained by preprocessing the original image (for example, cropping it).
The content of the target image may include people, animals, buildings, scenery, and so on, or any combination of the above.
S11: Divide the target image into at least two regions according to the upright direction of the person in the target image.
The proportions of an upright person are roughly similar, but posture and motion vary, so dividing along the person's upright direction is more robust; the target image is therefore divided into at least two regions according to the upright direction of the person in the target image.
The upright direction of the person is the direction in which the person stands. For example, if the person in the target image stands along the Y axis of a rectangular coordinate system, the target image is divided into at least two regions according to the Y-axis direction. If the person's standing direction deviates 45 degrees to the right of the Y axis, the target image is divided into at least two regions according to the direction deviating 45 degrees to the right of the Y axis.
Fig. 2 is referred to, Fig. 2 is the exemplary plot that target image is divided at least two regions.Wherein, personage in image 2a
Upright direction is the Y direction of rectangular coordinate system, then is divided image 2a in the Y direction of rectangular coordinate system, image
Represent that image 2a is divided into 5 regions by the Y direction in rectangular coordinate system in 2a by a dotted line.
Meanwhile, in other embodiments, the target image may also be divided into at least two regions according to a different upright direction of the person in the image. Continuing with Fig. 2, the upright direction of the person in image 2b is the X axis of the rectangular coordinate system, so image 2b is divided according to the X-axis direction; the dotted lines in image 2b indicate that it is divided into 5 regions along the X axis.
In other embodiments, the upright direction of the person in the target image may also be ignored and the image divided only along a preset direction, for example along an inclined direction. Continuing with Fig. 2, image 2c is a schematic diagram of dividing the target image along a direction at 45 degrees to the X axis of the rectangular coordinate system; the dotted lines in image 2c indicate that it is divided into 5 regions along that 45-degree direction.
In other embodiments, the target image is divided into regions according to a spatial pyramid model. How to divide regions according to a spatial pyramid model is available in the prior art and is not repeated here.
Meanwhile, when the target image does not include a person, any of the above ways, or other ways, may likewise be selected to divide the image into regions.
When the target image is divided according to the upright direction of the person, each region after division corresponds to a coherent section of the person, so dividing this way and then extracting features reflects the image information more completely. For example, when divided along the upright direction, the regions may respectively cover parts of the person such as the head, the upper body, and the lower body; if the image were instead divided perpendicular to the upright direction, each region would contain only a vertical slice of the person, mixing parts of the head, the upper limbs, and the feet in every region.
For example, suppose the upright direction of the person in image a and image b is the Y axis of the rectangular coordinate system, image a shows a red jacket with black trousers, and image b shows a black jacket with red trousers. If the images were divided according to the X-axis direction and color extraction then performed, the color features extracted from image a and image b would be similar, and image a and image b could be recognized as the same image, although they are actually two different images. The extracted features would not be accurate enough, the recognition result based on them would likewise be inaccurate, and the judgment accuracy would be reduced.
Further, when dividing the target image into regions, the target image may be divided into 3 to 5 parts. Dividing into more regions may increase computational complexity and reduce efficiency, while dividing into too few regions may not be enough to distinguish the images. Dividing the target image into 3 to 5 regions is enough to distinguish the parts of the target image without dividing it excessively. When dividing, the regions may be of equal or unequal size.
After dividing the target image into regions, each part can be labeled so that the divided regions can be identified by their labels. For example, the target image is divided into 3 parts, namely region1, region2, and region3.
If the target image is not divided into regions, the spatial layout of the object in the target image cannot be fully expressed, which reduces the accuracy of the image representation. For example, image c shows a pedestrian with a red jacket and a black cap, and image d shows a pedestrian with red trousers and a black handbag. If only color information were extracted from image c and image d, their extracted color features would be similar, although they are actually two different images. Color information extraction alone is therefore not enough to express the complete information of an image, and the image needs to be divided into regions first.
S12: Extract color information of each region in multiple color spaces.
A color space is also called a color model, and any number of existing color spaces can be used. Color spaces include, but are not limited to, the RGB color space, the CMY color space, the YUV color space, the HSV color space, the HSI color space, and the Lab color space.
As an optional implementation, in one embodiment, the multiple color spaces may include: the RGB color space, the HSV color space, and the Lab color space.
That is, the multiple color spaces may be three color spaces or more than three; if three, they are the RGB, HSV, and Lab color spaces.
RGB is the color space used in displays and is defined by the three primary colors of light. HSV is a color space created according to the intuitive characteristics of color; its parameters are hue (H), saturation (S), and value (V). Lab is a color space established on the basis of the international color-measurement standard formulated by the International Commission on Illumination (CIE); it is a device-independent color space based on physiological characteristics, describing human visual response in digital form. Extracting the color information of the RGB, HSV, and Lab color spaces can therefore come closer to human visual perception, so that the digitized color expression is consistent with what people perceive.
For example, extracting the color information of each region in the RGB color space means representing the color information of each region with the three primary colors; specifically, each pixel in a region can be represented by a set of R, G, and B values. Extracting the color information of each region in multiple color spaces means representing that color information in digital form, for example representing the color information of each region in the multiple color spaces by feature vectors.
As an optional implementation, in one embodiment, extracting the color information of each region in the multiple color spaces includes:
extracting, according to a color histogram, the information of each color dimension of each region in the multiple color spaces and representing it with a feature vector.
A color histogram reflects the composition and distribution of the colors in an image, that is, which colors appear and with what probability. A color dimension is also called a color channel or a color component.
Extracting the information of each color dimension of each region in the multiple color spaces according to a color histogram specifically means extracting the color-dimension information of each color space of each region and representing the extracted color features of each color space by feature vectors. Representing them by feature vectors facilitates subsequent calculation.
When extracting according to a color histogram, the color histogram of each region is calculated first. Calculating a color histogram requires dividing the color space into multiple small color intervals, each of which is called a bin of the histogram; this process is called color quantization. The color histogram is then obtained by counting the pixels that fall into each small color interval. In general, the more bins there are, the stronger the histogram's ability to distinguish colors. However, a histogram with a large number of bins not only increases the computational burden but is also unfavorable for building indexes in large image libraries.
In one embodiment of the present invention, the number of bins of the RGB and HSV color spaces is set to 8, and the number of bins of the Lab color space is set to 16.
For example, RGB consists of the three dimensions (channels) R, G, and B, each with 8 bins. HSV consists of the three dimensions H, S, and V, each likewise with 8 bins. Lab consists of the three dimensions L (lightness), a (color running from dark green at low values, through grey at middle values, to bright pink at high values), and b (color running from bright blue at low values, through grey, to yellow at high values), each with 16 bins.
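These bin settings amount to quantizing each channel into equal intervals and counting the pixels that land in each one. A minimal sketch of one channel's histogram follows; equal-width bins and normalized counts are assumptions, since the patent leaves both choices open.

```python
def channel_histogram(values, num_bins=8, max_value=256):
    """Quantize one color channel into equal-width bins and count the
    pixels per bin, normalized to frequencies. num_bins=8 matches the
    RGB/HSV setting above; pass num_bins=16 for a Lab dimension."""
    hist = [0] * num_bins
    width = max_value / num_bins  # 32 levels per bin when num_bins=8
    for v in values:
        index = min(int(v / width), num_bins - 1)  # clamp v == max_value
        hist[index] += 1
    total = len(values)
    return [count / total for count in hist]

# R values of a toy region: half dark red, half bright red
print(channel_histogram([10, 20, 30, 240, 250, 255]))
# → [0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5]
```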
Suppose the target image is divided into the three regions region1, region2, and region3. In a given region (region1), the information of each color dimension of the extracted RGB color space, expressed as feature vectors, is:
[r1, r2, r3, r4, r5, r6, r7, r8]; [g1, g2, g3, g4, g5, g6, g7, g8]; [b1, b2, b3, b4, b5, b6, b7, b8]
Here r1 is, for a given color interval of region1's color histogram, the R value of a given pixel, or the average of the R values of all pixels in that interval, or a value calculated by a preset formula from the R values of preset pixels in that interval. Likewise, g1 is the G value of a given pixel in a given color interval of region1's color histogram, or the average of the G values of all pixels in that interval, or a value calculated by a preset formula from the G values of preset pixels in that interval.
Likewise, the information of each color dimension of the extracted HSV and Lab color spaces, expressed as feature vectors, is respectively:
[h1, h2, h3, h4, h5, h6, h7, h8]; [s1, s2, s3, s4, s5, s6, s7, s8]; [v1, v2, v3, v4, v5, v6, v7, v8]
[l1, l2, l3, ..., l14, l15, l16]; [a1, a2, a3, ..., a14, a15, a16]; [b1, b2, b3, ..., b14, b15, b16]
Each value in these feature vectors (such as h1 and a1) can be obtained by the same method as the color-dimension information of the RGB color space above, or converted from the RGB results by color-space conversion formulas, which can be taken from existing color-space conversion formulas.
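As the passage notes, the HSV values can be obtained from RGB with a standard conversion formula. Python's standard library implements this case in `colorsys`, which can serve as one such conversion; the helper name and 0-255 scaling below are ours, and Lab conversion (not in the standard library) is omitted.

```python
import colorsys  # standard-library RGB/HSV (and YIQ, HLS) conversions

def region_to_hsv(pixels):
    """Convert a region's (R, G, B) pixels (0-255 integers) into HSV
    triples in [0, 1], ready for the same per-dimension binning."""
    return [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            for (r, g, b) in pixels]

print(region_to_hsv([(255, 0, 0)]))  # pure red → [(0.0, 1.0, 1.0)]
```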
S13: Combine the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image.
After the color information of the multiple color spaces has been extracted for each region, the color information of the multiple regions in the multiple color spaces is combined. For example, if the color information of each region in the multiple color spaces is represented by feature vectors, the feature vectors are spliced to obtain the combined feature vector representing the target image, or combined by the mathematical method of a kernel function to obtain the combined feature vector representing the target image.
As an optional implementation, in one embodiment, step S13 of combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image includes:
combining, for each region, the information of each color dimension of each color space to obtain a combined feature vector of each color space of each region;
splicing the combined feature vectors of the color spaces of each region to obtain a combined feature vector of each region in the multiple color spaces; and
splicing the combined feature vectors of the regions in the multiple color spaces to obtain a combined feature vector of the target image in the multiple color spaces of the at least two regions.
Here, when the information of each color dimension of each color space is combined for each region to obtain the combined feature vector of each color space of that region, taking the region region1 as an example, the feature vectors representing the information of each color dimension of that region in each color space may be spliced to obtain the feature vector of each color space of the region:
[r1, r2, ..., r7, r8, g1, g2, ..., g7, g8, b1, b2, ..., b7, b8]
[h1, h2, ..., h7, h8, s1, s2, ..., s7, s8, v1, v2, ..., v7, v8]
[l1, l2, ..., l15, l16, a1, a2, ..., a15, a16, b1, b2, ..., b15, b16]
Then the combined feature vectors of the color spaces of each region are spliced, i.e. the feature vectors of all the color spaces of one region are spliced into one combined feature vector. For the region region1, the feature vectors representing the information of each color space may be spliced to obtain the combined feature vector of that region in the multiple color spaces:
[r1, ..., r8, g1, ..., g8, b1, ..., b8, h1, ..., h8, s1, ..., s8, v1, ..., v8, l1, ..., l16, a1, ..., a16, b1, ..., b16]
= [RGB1 HSV1 LAB1]
Through the above steps, each region of the divided target image obtains its combined feature vector in the multiple color spaces; the combined feature vectors of the regions in the multiple color spaces are then spliced to obtain the combined feature vector of the target image in the multiple color spaces of the at least two regions:
[Region1, Region2, Region3]
= [RGB1 HSV1 LAB1, RGB2 HSV2 LAB2, RGB3 HSV3 LAB3]
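A minimal sketch of this final splicing step, assuming three regions region1-region3 and per-region vectors of 3*8 + 3*8 + 3*16 = 96 dimensions as described in this document (the numeric values are placeholders):

```python
# Splice each region's multi-color-space vector into the combined
# feature vector [Region1, Region2, Region3] for the whole image.
def splice_regions(region_vectors):
    combined = []
    for vec in region_vectors:
        combined.extend(vec)  # concatenation preserves region order
    return combined

# Three dummy 96-dimensional region vectors (region1..region3).
regions = [[float(i)] * 96 for i in range(1, 4)]
target_vec = splice_regions(regions)
print(len(target_vec))  # 3 regions x 96 = 288
```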
Through the above steps, the feature extraction of the target image is completed and the combined feature vector representing the target image is obtained.
As an alternative embodiment, in one embodiment, the feature extraction method provided by the present invention further includes:
Performing dimensionality reduction on the combined feature vector representing the target image.
After the combined feature vector representing the target image is obtained, in order to make the expression more concise and convenient for subsequent use, the combined feature vector representing the target image may be subjected to dimensionality reduction, so as to reduce the computational complexity of subsequent use and improve operation efficiency.
For example, one dimensionality-reduction method is principal component analysis (PCA); the combined feature vector representing the target image may be reduced in dimension by PCA. Meanwhile, the dimensionality-reduction method is not limited to the above PCA; any other dimensionality-reduction method, such as reverse feature elimination or feature combination, may also be used to reduce the dimension of the combined feature vector representing the target image.
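As a sketch of the PCA option (one possible implementation, not the patent's prescribed one), dimensionality reduction can be done via the eigen-decomposition of the feature covariance matrix:

```python
import numpy as np

# PCA sketch: project centered feature vectors onto the top-k
# eigenvectors of the feature covariance matrix.
def pca_reduce(X, k):
    """X: (n_samples, n_features) array; returns (n_samples, k)."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # indices of top-k components
    return Xc @ eigvecs[:, order]

X = np.random.rand(10, 288)     # ten 288-dim combined feature vectors
print(pca_reduce(X, 32).shape)  # (10, 32)
```

The reduced vectors can then be used in place of the full combined feature vectors in the distance calculations that follow.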
Further, in one embodiment, the combined feature vector of the target image extracted by the feature extraction method provided by the present invention can be used for image recognition and tracking, specifically:
Calculating the distance between the combined feature vector representing the target image and the feature vectors of the sample images in an image library to judge the similarity between images;
Selecting the sample image in the image library with the highest similarity to the target image as the image recognition result;
Tracking the target image according to the image recognition result.
Here, the distance between the combined feature vector representing the target image and the feature vectors of the sample images in the image library may be calculated either after dimensionality reduction or before dimensionality reduction. Preferably, calculating the distance between the dimension-reduced combined feature vector representing the target image and the feature vectors of the sample images in the image library can further improve operation efficiency and reduce computational complexity.
Here, the image library may contain the information of multiple sample images. For example, the image library includes the names, numbers and feature vectors of the multiple sample images, such as the name, number and feature vector of user A's image: the name of user A's image uniquely identifies that sample in the image library, and the feature vector of user A's image can be used to express the features of user A's image.
When calculating the distance, a distance formula may be used; distance formulas include but are not limited to the Euclidean distance, the Minkowski distance and the Mahalanobis distance. Which distance formula to use can be selected as needed; the distance formulas can be obtained from the prior art and are not repeated here. Different distance formulas have different methods of determining similarity; for example, the smaller the distance value obtained from the Euclidean distance, the greater the similarity between the images.
When calculating the distance between the combined feature vector representing the target image and the feature vectors of the sample images in the image library, the combined feature vector representing the target image may be compared in turn with the feature vectors of the multiple sample images in the image library, so as to obtain the image with the highest similarity to the target image and thereby obtain the image recognition result, i.e. whether the target image contains a certain target object.
For example, feature extraction is performed on image M to obtain the combined feature vector x representing image M. Suppose the image library includes user A's image, user B's image and user C's image, where the feature vector of user A's image is ya, the feature vector of user B's image is yb, and the feature vector of user C's image is yc. The distances between feature vector x and ya, between x and yb, and between x and yc are then calculated according to the Euclidean distance. If the distance between x and yb is calculated to be the smallest, user B's image is identified as the most similar to the target image, and the target image contains user B.
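The example above can be sketched as a nearest-neighbour lookup under the Euclidean distance; the three-dimensional vectors and user names below are illustrative placeholders only:

```python
import math

# Euclidean distance between two feature vectors.
def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Recognition = pick the gallery entry with the smallest distance.
def identify(x, gallery):
    """gallery: {name: feature_vector}; returns the most similar name."""
    return min(gallery, key=lambda name: euclidean(x, gallery[name]))

x = [0.9, 0.1, 0.1]                      # combined vector of image M
gallery = {"user_A": [0.0, 1.0, 0.0],
           "user_B": [1.0, 0.0, 0.0],    # closest to x
           "user_C": [0.0, 0.0, 1.0]}
print(identify(x, gallery))  # user_B
```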
After the recognition result of the target image is obtained, the target image can be tracked according to the image recognition result; in particular, the target object recognized in the target image is tracked, for example user B in image M. When performing image tracking, an existing image tracking algorithm may be selected; details are not repeated here.
Since the combined feature vector representing the target image is the result of feature extraction and can be used to represent the target image, the content contained in the target image can be identified and tracked according to the result of feature extraction. The higher the accuracy of feature extraction on the target image, the higher the accuracy of image recognition and image tracking; if the accuracy of feature extraction is low, the accuracy of image recognition and image tracking will also decrease.
Meanwhile in addition to carrying out image recognition and image trace, other figures beyond image recognition can also be carried out
As processing operation, for example, image retrieval, feature extraction for target image it is more accurate, then for subsequently carrying out other figures
The degree of accuracy and efficiency during as processing operation is higher.
In the feature extraction method provided by the embodiments of the present invention, a target image is obtained; the target image is divided into at least two regions according to the upright direction of the person in the target image; the color information of each region in multiple color spaces is extracted; and the color information of the multiple color spaces extracted from the at least two regions is combined to obtain the combined feature vector representing the target image. By dividing the target image into regions and extracting the color information of multiple color spaces in each region, the color of the image is expressed from multiple dimensions, fully reflecting the features and color distribution of the different parts of the image, so that the image can be represented accurately under any illumination condition. Feature extraction over multiple regions and multiple color spaces thus achieves the purpose of extracting the features of the image accurately.
The present invention uses the method of dividing the spatial regions of the image along the upright direction of the person in the target image; in each region, the color information of multiple color spaces is extracted separately, the histogram of each color space is spliced into one dimension, the histograms of the different color spaces of the region are then connected, and the histograms of all the regions are connected to obtain the combined feature vector of multiple regions and multiple color spaces, thereby overcoming the limitations in the tracking and recognition of the target object, increasing the information content of the color-space features, and at the same time efficiently identifying the spatial information of the target object.
The combined feature vector of multiple regions and multiple color spaces thus obtained is then reduced in dimension to reduce the amount of computation and improve robustness, after which the similarity between images is calculated from the combined feature vectors of multiple regions and multiple color spaces to identify and track the target image.
Embodiment
The present invention also provides an image recognition method, which includes:
Obtaining a target image;
Dividing the target image into at least two regions according to the upright direction of the person in the target image;
Extract color information of each region in multiple color spaces;
Combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combined feature vector representing the target image;
Calculating the distance between the combined feature vector representing the target image and the feature vectors of the sample images in an image library to judge the similarity between the target image and the sample images in the image library;
Selecting the sample image in the image library with the highest similarity to the target image as the image recognition result;
Tracking the target image according to the image recognition result.
As a preferred embodiment, before calculating the distance between the combined feature vector representing the target image and the feature vectors of the sample images in the image library, the present invention further includes: performing dimensionality reduction on the combined feature vector representing the target image.
After the combined feature vector representing the target image is obtained, in order to make the expression more concise and convenient for subsequent use, the combined feature vector representing the target image may be subjected to dimensionality reduction, so as to reduce the computational complexity of subsequent use and improve operation efficiency.
The dimensionality-reduction method may be principal component analysis (PCA); the combined feature vector representing the target image may be reduced in dimension by PCA. Meanwhile, the dimensionality-reduction method is not limited to the above PCA; any other dimensionality-reduction method, such as reverse feature elimination or feature combination, may also be used to reduce the dimension of the combined feature vector representing the target image.
For the relevant description of the image recognition method, refer to the description of the corresponding steps in the feature extraction method above; it is not repeated here.
Embodiment
Fig. 3 is a structural diagram of the feature extraction apparatus provided by an embodiment of the present invention. As shown in Fig. 3, the feature extraction apparatus may include: an acquisition module 201, a region division module 202, a multiple-color-space processing module 203 and a feature representation module 204.
The acquisition module 201 is used for obtaining a target image.
The target image is an image on which feature extraction is to be performed. For example, if feature extraction is performed on an image captured by a camera, the image captured by the camera is taken as the target image, i.e. the target image is obtained. Meanwhile, the target image may be the original image captured by the camera, or an image obtained after pre-processing the original image captured by the camera (for example, cropping the original image).
The content of the target image may include persons, animals, buildings, scenery, etc., and the content of the target image may include any combination of the above.
The region division module 202 is used for dividing the target image into at least two regions according to the upright direction of the person in the target image.
The proportions of upright persons are roughly similar, but their postures and actions differ; dividing along the upright direction of the person therefore has higher robustness, so the target image is divided into at least two regions according to the upright direction of the person in the target image.
The upright direction of the person refers to the direction in which the person stands. For example, if the direction in which the person stands in the target image is the Y-axis direction of a rectangular coordinate system, the target image is divided into at least two regions according to the Y-axis direction of the rectangular coordinate system. If the direction in which the person stands in the target image deviates 45 degrees to the right relative to the Y-axis of the rectangular coordinate system, the target image is divided into at least two regions according to the direction deviating 45 degrees to the right from the Y-axis of the rectangular coordinate system.
Refer to Fig. 2, which is an example diagram of dividing a target image into at least two regions. In image 2a, the upright direction of the person is the Y-axis direction of the rectangular coordinate system, so image 2a is divided along the Y-axis direction of the rectangular coordinate system; the dotted lines in image 2a indicate that image 2a is divided into 5 regions along the Y-axis direction of the rectangular coordinate system.
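A minimal sketch of this kind of division, assuming the image is a pixel grid split into equal bands stacked along the person's upright (Y-axis) direction; the band orientation convention here is an illustrative assumption, not the patent's exact geometry:

```python
# Divide an H x W pixel grid into n equal bands stacked along the
# person's upright direction (here: bands of image rows).
def divide_upright(image, n_regions=5):
    """image: list of pixel rows; returns a list of n_regions bands."""
    h = len(image)
    bounds = [round(i * h / n_regions) for i in range(n_regions + 1)]
    return [image[bounds[i]:bounds[i + 1]] for i in range(n_regions)]

image = [[0] * 4 for _ in range(10)]  # dummy 10-row, 4-column image
regions = divide_upright(image, 5)
print(len(regions), len(regions[0]))  # 5 regions, 2 rows each
```

For an inclined upright direction, the same idea applies after rotating the coordinate system accordingly.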
Meanwhile in other embodiments, target image can also be drawn according to the upright direction of the personage in image
It is divided at least two regions.Continuing with referring to Fig. 2, the upright direction of image 2b personage is the Y direction of rectangular coordinate system in Fig. 2,
Then the upright direction of personage is X-direction in image, and image 2b is divided in the Y direction of rectangular coordinate system,
Represent that image 2b is divided into 5 regions by the X-direction in rectangular coordinate system in image 2b by a dotted line.
In other embodiments, the upright direction of the person in the target image may also be disregarded, and the division may be performed only according to a preset direction, for example dividing the image into regions along a certain inclined direction. Continuing with Fig. 2, image 2c is a schematic diagram of dividing the target image along a direction inclined at 45 degrees to the X-axis of the rectangular coordinate system; the dotted lines in image 2c indicate that image 2c is divided into 5 regions along the direction inclined at 45 degrees to the X-axis of the rectangular coordinate system.
In other embodiments, the target image is divided into regions according to a spatial pyramid model. How to divide regions according to a spatial pyramid model can be obtained from the prior art and is therefore not repeated here.
Meanwhile it can also select aforesaid way that image is carried out into zoning either when target image does not include personage
Other modes are by image zoning.
Because in the target image, when the division is made according to the upright direction of the person, each region after division can include the information of more parts of the person, dividing according to the upright direction of the person and then extracting features can better reflect the complete image information. For example, when the image is divided along the upright direction of the person, each region may include several parts of the person, such as the head, the upper body and the lower body; if the image is divided perpendicular to the upright direction of the person, a region may include only one part of the person, for example one region includes only the head, another region only the upper limbs, and another region only the feet of the person.
For example, in image a and image b the upright direction of the person is the Y-axis direction of the rectangular coordinate system; image a shows a red jacket and black trousers, and image b a black jacket and red trousers. If the division is now made according to the X-axis direction of the rectangular coordinate system and color extraction is performed after dividing image a and image b, the color features extracted from image a and image b are similar, and image a and image b may be identified as the same image, whereas image a and image b are in fact two different images. The extracted features are then not accurate enough, which in turn makes the recognition result of image recognition inaccurate and reduces the judgment accuracy.
Further, when dividing the target image into regions, the target image may be divided into 3-5 parts. Dividing into more regions may increase the computational complexity and thus reduce operation efficiency, while dividing into too few regions may not be enough to fully distinguish the images. Dividing the target image into 3-5 regions is enough to distinguish the parts of the target image without dividing too finely. When dividing regions, the regions may be of equal size or of unequal size.
After the target image is divided into regions, each part may be marked, and each divided region is identified by its mark. For example, the target image is divided into 3 parts, respectively region1, region2 and region3.
If the target image is not divided into regions, the spatial orientation characteristics of the objects in the target image may not be fully reflected, reducing the accuracy of image representation. For example, image c shows a pedestrian with a red jacket and a black cap, and image d a pedestrian with red trousers and a black handbag. If only color information is extracted from image c and image d, the color features extracted from image c and image d are similar, whereas image c and image d are in fact two different images. Therefore, color information extraction alone is not enough to express the complete information of the image, and the image needs to be divided into regions first.
The multiple-color-space processing module 203 is used for extracting the color information of each region in multiple color spaces.
The color space is also called a color model and may be any number of color spaces among existing color spaces. Color spaces include but are not limited to the RGB color space, the CMY color space, the YUV color space, the HSV color space, the HSI color space and the Lab color space.
As an alternative embodiment, in one embodiment, the multiple color spaces may include: the RGB color space, the HSV color space and the Lab color space. That is, the multiple color spaces may be three color spaces or more than three color spaces; if three, they are respectively the RGB color space, the HSV color space and the Lab color space.
Here, RGB is the color space used in displays, defined on the basis of the three primary colors of light. HSV is a color space created according to the intuitive characteristics of color; the parameters in this color space are respectively hue (H), saturation (S) and value (V). Lab is a color space established on the basis of the international standard for color measurement formulated by the International Commission on Illumination (CIE); it is a device-independent color space and a color system based on physiological characteristics, describing human visual response in a digitized manner. Therefore, extracting the color information of the RGB color space, the HSV color space and the Lab color space can come closer to human visual perception, so that the digitized color expression is consistent with human perception.
For example, extracting the color information of each region in the RGB color space means representing the color information of each region with the three primary colors; specifically, each pixel of each region can be represented by a set of R, G and B values. Extracting the color information of each region in multiple color spaces specifically means representing the color information of each region in the multiple color spaces in a digitized form, for example representing the color information of each region in the multiple color spaces by feature vectors.
As an alternative embodiment, in one embodiment, the multiple-color-space processing module 203 is specifically used for: extracting, according to color histograms, the information of each color dimension of each region in the multiple color spaces and representing it by feature vectors.
Here, a color histogram can reflect the color composition and distribution of an image, i.e. which colors appear and the probability of occurrence of the various colors; the color dimension is also called a color channel or color component. Extracting the information of each color dimension of each region in the multiple color spaces according to color histograms specifically means extracting the information of the color dimensions of the multiple color spaces of each region and representing the extracted color features of each color space by feature vectors. The purpose of representing them by feature vectors is to facilitate subsequent calculation.
When extracting according to color histograms, the color histogram of each region is first calculated. Calculating a color histogram requires dividing the color space into multiple small color intervals, each of which is called a bin of the histogram; this process is called color quantization. The color histogram can then be obtained by counting the number of pixels whose color falls in each small color interval. In general, the more bins there are, the stronger the resolving power of the histogram for colors. However, a color histogram with a large number of bins will not only increase the computational burden but also be unfavorable for establishing an index in a large-scale image library.
In one embodiment of the present invention, the number of bins of the RGB color space and of the HSV color space is set to 8, and the number of bins of the Lab color space is set to 16.
For example, RGB is composed of the three dimensions, or three channels, R, G and B, with 8 bins set for each dimension. HSV is composed of the three dimensions H, S and V, likewise with 8 bins for each dimension. Lab is composed of the three dimensions L (lightness), a (color running from dark green at low values through grey at middle values to bright pink at high values) and b (color running from deep blue at low values through grey at middle values to yellow at high values), with 16 bins set for each dimension.
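The per-channel bin values can be sketched as follows (an illustrative assumption that the r1...r8 values are simple pixel counts per bin; the document also allows averages or preset formulas):

```python
# One channel's 8-bin histogram: count pixels falling in each of 8
# equal-width intervals of [0, 256) (the bin counts play the role of
# r1..r8; a Lab channel would use 16 bins instead).
def channel_histogram(pixels, n_bins=8, max_val=256):
    counts = [0] * n_bins
    for p in pixels:
        counts[min(p * n_bins // max_val, n_bins - 1)] += 1
    return counts

red_channel = [0, 10, 40, 100, 200, 255, 255, 130]  # dummy R values
print(channel_histogram(red_channel))  # [2, 1, 0, 1, 1, 0, 1, 2]
```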
If the target image is divided into the three regions region1, region2 and region3, then in a certain region (region1) the information of each color dimension of the extracted RGB color space, expressed as feature vectors, is:
[r1, r2, r3, r4, r5, r6, r7, r8]; [g1, g2, g3, g4, g5, g6, g7, g8]; [b1, b2, b3, b4, b5, b6, b7, b8]
Here, r1 may be the R value of a certain pixel of a certain color interval in the color histogram of region1, or the average of the R values of all the pixels of a certain color interval in the color histogram of region1, or a value calculated from the R values of preset pixels of a certain color interval in the color histogram of region1 according to a preset formula. Likewise, g1 may be the G value of a certain pixel of a certain color interval in the color histogram of region1, or the average of the G values of all the pixels of a certain color interval in the color histogram of region1, or a value calculated from the G values of preset pixels of a certain color interval in the color histogram of region1 according to a preset formula.
Likewise, the information of each color dimension of the extracted HSV color space and Lab color space, expressed as feature vectors, is respectively:
[h1, h2, h3, h4, h5, h6, h7, h8]; [s1, s2, s3, s4, s5, s6, s7, s8]; [v1, v2, v3, v4, v5, v6, v7, v8]
[l1, l2, l3, ..., l14, l15, l16]; [a1, a2, a3, ..., a14, a15, a16]; [b1, b2, b3, ..., b14, b15, b16]
Here, each specific value in the feature vectors (such as h1 and a1) may be obtained in a manner consistent with the method described above for extracting the information of each color dimension in the RGB color space, or may be converted from the extracted RGB color-dimension information using a color space conversion formula, where the color space conversion formula may be obtained from existing color space conversion formulas.
The feature representation module 204 is used for combining the color information of the multiple color spaces extracted from the at least two regions to obtain the combined feature vector representing the target image.
After the color information of the multiple color spaces has been extracted for each region, the color information of the multiple regions in the multiple color spaces is combined. For example, the color information of each region in the multiple color spaces may be represented by a feature vector, and the multiple feature vectors may then be spliced to obtain the combined feature vector representing the target image; alternatively, they may be combined by a mathematical method such as a kernel function to obtain the combined feature vector representing the target image.
As an alternative embodiment, in one embodiment, the feature representation module 204 is specifically used for:
Combining, for each region, the information of each color dimension of each color space to obtain a combined feature vector of each color space of that region;
Splicing the combined feature vectors of the color spaces of each region to obtain a combined feature vector of that region in the multiple color spaces;
Splicing the combined feature vectors of the regions in the multiple color spaces to obtain the combined feature vector of the target image in the multiple color spaces of the at least two regions.
Here, the feature representation module 204 combines, for each region, the information of each color dimension of each color space to obtain the combined feature vector of each color space of that region. Taking the region region1 as an example, the feature vectors representing the information of each color dimension of that region in each color space may be spliced to obtain the feature vector of each color space of the region:
[r1, r2, ..., r7, r8, g1, g2, ..., g7, g8, b1, b2, ..., b7, b8]
[h1, h2, ..., h7, h8, s1, s2, ..., s7, s8, v1, v2, ..., v7, v8]
[l1, l2, ..., l15, l16, a1, a2, ..., a15, a16, b1, b2, ..., b15, b16]
Then the feature representation module 204 splices the combined feature vectors of the color spaces of each region, i.e. the feature vectors of all the color spaces of one region are spliced into one combined feature vector. For the region region1, the feature vectors representing the information of each color space may be spliced to obtain the combined feature vector of that region in the multiple color spaces:
[r1, ..., r8, g1, ..., g8, b1, ..., b8, h1, ..., h8, s1, ..., s8, v1, ..., v8, l1, ..., l16, a1, ..., a16, b1, ..., b16]
= [RGB1 HSV1 LAB1]
Therefore, each region of the divided target image can obtain its combined feature vector in the multiple color spaces; the feature representation module 204 splices the combined feature vectors of the regions in the multiple color spaces to obtain the combined feature vector of the target image in the multiple color spaces of the at least two regions:
[Region1, Region2, Region3]
= [RGB1 HSV1 LAB1, RGB2 HSV2 LAB2, RGB3 HSV3 LAB3]
The feature extraction of the target image is completed by the feature representation module 204, and the combined feature vector representing the target image is obtained.
As an alternative embodiment, in one embodiment, the feature extraction apparatus provided by the present invention further includes: a dimensionality reduction module, for performing dimensionality reduction on the combined feature vector representing the target image.
After the combined feature vector representing the target image is obtained, in order to make the expression more concise and convenient for subsequent use, the combined feature vector representing the target image may be subjected to dimensionality reduction, so as to reduce the computational complexity of subsequent use and improve operation efficiency.
For example, one dimensionality-reduction method is principal component analysis (PCA); the combined feature vector representing the target image may be reduced in dimension by PCA. Meanwhile, the dimensionality-reduction method is not limited to the above PCA; any other dimensionality-reduction method, such as reverse feature elimination or feature combination, may also be used to reduce the dimension of the combined feature vector representing the target image.
As an alternative embodiment, in one embodiment, the combined feature vector of the target image extracted by the feature extraction apparatus provided by the present invention can be used for the recognition and tracking of images; when used for image recognition and tracking, this may include:
Calculating the distance between the combined feature vector representing the target image and the feature vectors of the sample images in an image library to judge the similarity between images;
Selecting the sample image in the image library with the highest similarity to the target image as the image recognition result;
Tracking the target image according to the image recognition result.
Here, the distance between the combined feature vector representing the target image and the feature vectors of the sample images in the image library may be calculated either after dimensionality reduction or before dimensionality reduction. Preferably, calculating the distance between the dimension-reduced combined feature vector representing the target image and the feature vectors of the sample images in the image library can further improve operation efficiency and reduce computational complexity.
Here, the image library may contain the information of multiple sample images. For example, the image library includes the names, numbers and feature vectors of the multiple sample images, such as the name, number and feature vector of user A's image: the name of user A's image uniquely identifies that sample in the image library, and the feature vector of user A's image can be used to express the features of user A's image.
When calculating distances, a distance formula may be used, including but not limited to the Euclidean distance, the Minkowski distance, and the Mahalanobis distance. The specific distance formula can be selected as needed and can be obtained from the prior art, so details are omitted here. Different distance formulas determine similarity in different ways; for example, with the Euclidean distance, a smaller distance value indicates a greater similarity between images.
When calculating the distance between the combination feature vector representing the target image and the feature vectors of sample images in the image library, the combination feature vector may be compared with the feature vectors of multiple sample images in turn to obtain the image with the highest similarity to the target image, thereby obtaining the image recognition result, i.e., whether the target image contains a certain target object.
For example, feature extraction is performed on image M to obtain a combination feature vector x representing image M. Suppose the image library contains user A's image, user B's image, and user C's image, with feature vectors y_a, y_b, and y_c respectively. The Euclidean distances between x and y_a, between x and y_b, and between x and y_c are then calculated. If the distance between x and y_b is the smallest, user B's image is identified as the most similar to the target image, i.e., the target image contains user B.
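The matching example above can be sketched in code; the library contents, the three-dimensional feature vectors, and the vector x below are hypothetical toy values, not the actual histogram features described elsewhere in this document:

```python
import numpy as np

def recognize(x, library):
    """Return the library entry whose feature vector is nearest to x
    under the Euclidean distance (smaller distance = more similar)."""
    names = list(library)
    dists = [float(np.linalg.norm(x - library[n])) for n in names]
    best = int(np.argmin(dists))
    return names[best], dists[best]

# Hypothetical library: users A, B, C with illustrative feature vectors.
library = {
    "user_A": np.array([1.0, 0.0, 0.0]),
    "user_B": np.array([0.0, 1.0, 0.0]),
    "user_C": np.array([0.0, 0.0, 1.0]),
}
x = np.array([0.1, 0.9, 0.1])   # combination feature vector of image M
name, d = recognize(x, library)
print(name)                      # user_B is the nearest sample
```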
After the recognition result for the target image is obtained, the target image can be tracked according to the image recognition result; in particular, the target object recognized in the target image is tracked. For example, user B in image M is tracked. When performing image tracking, an existing image-tracking algorithm can be selected, so details are omitted here.
Because the combination feature vector representing the target image is the result of feature extraction and can be used to represent the target image, the content contained in the target image can be recognized and tracked according to the feature-extraction result. The higher the accuracy of the feature extraction for the target image, the higher the accuracy of image recognition and image tracking; conversely, if the accuracy of feature extraction is low, the accuracy of image recognition and image tracking is also reduced.
Meanwhile, in addition to image recognition and image tracking, other image-processing operations beyond image recognition, such as image retrieval, can also be performed. The more accurate the feature extraction for the target image, the higher the accuracy and efficiency of subsequent image-processing operations.
In the embodiment of the present invention, the acquisition module obtains a target image; the region division module divides the target image into at least two regions according to the upright direction of the person in the target image; the multi-color-space processing module extracts the color information of each region in multiple color spaces; and the feature expression module combines the color information of the multiple color spaces extracted from the at least two regions to obtain a combination feature vector representing the target image. By dividing the target image into regions and extracting color information in multiple color spaces for each region, the color of the image is expressed from multiple dimensions, fully reflecting the features and color distribution of different parts of the image, so that the image can be represented accurately under any illumination condition. Multi-region, multi-color-space feature extraction thus achieves the purpose of extracting the features of the image accurately.
The present invention divides the spatial area of the image according to the upright direction of the person in the target image. In each region, the color information of multiple color spaces is extracted separately; the histogram of each color space is flattened into one dimension, the histograms of the different color spaces of the region are then concatenated, and the histograms of all regions are concatenated to obtain the multi-region, multi-color-space combination feature vector. This overcomes the limitations in tracking and recognizing the target object, increases the information content of the color-space features, and efficiently characterizes the spatial information of the target object.
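A minimal sketch of the multi-region, multi-color-space histogram feature described above, assuming two horizontal regions along the upright direction, two color spaces (RGB and HSV), and 8 histogram bins per color dimension; the helper names, the bin count, and the toy image are illustrative choices, not prescribed by the text:

```python
import numpy as np
import colorsys

def rgb_to_hsv_image(img):
    """Convert an H×W×3 float RGB image (values in [0,1]) to HSV, pixel by pixel."""
    flat = img.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    return hsv.reshape(img.shape)

def region_histograms(region, bins=8):
    """Per-channel histograms in two color spaces (RGB and HSV), each
    flattened into one dimension and concatenated, as the text describes."""
    feats = []
    for space in (region, rgb_to_hsv_image(region)):   # multiple color spaces
        for ch in range(3):                            # each color dimension
            hist, _ = np.histogram(space[..., ch], bins=bins, range=(0.0, 1.0))
            feats.append(hist / hist.sum())            # normalized histogram
    return np.concatenate(feats)

def combination_feature_vector(img, n_regions=2):
    """Split the image into horizontal bands along the person's upright
    direction, then concatenate each region's multi-color-space histograms."""
    bands = np.array_split(img, n_regions, axis=0)
    return np.concatenate([region_histograms(b) for b in bands])

img = np.random.default_rng(0).random((8, 6, 3))   # toy 8×6 RGB image
x = combination_feature_vector(img)
# 2 regions × 2 color spaces × 3 channels × 8 bins = 96 dimensions
print(x.shape)
```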
The multi-region, multi-color-space combination feature vector is then reduced in dimensionality to reduce the amount of computation and improve robustness, after which the similarity between images is calculated from the resulting combination feature vector to recognize and track the target image.
Embodiment
The present invention also provides an image recognition apparatus, the image recognition apparatus including:
an acquisition module, configured to obtain a target image;
a region division module, configured to divide the target image into at least two regions according to the upright direction of the person in the target image;
a multi-color-space processing module, configured to extract the color information of each region in multiple color spaces;
a feature expression module, configured to combine the color information of the multiple color spaces extracted from the at least two regions to obtain a combination feature vector representing the target image; and
an image recognition module, configured to calculate the distance between the dimension-reduced combination feature vector representing the target image and the feature vectors of sample images in an image library to judge the similarity between images, select the sample image in the image library with the highest similarity to the target image as the image recognition result, and track the target image according to the image recognition result.
As a preferred embodiment, the apparatus of the present invention further includes:
a dimensionality reduction module, configured to perform dimensionality reduction on the combination feature vector representing the target image.
The dimensionality reduction module may perform this processing before the distance between the combination feature vector representing the target image and the feature vectors of the sample images in the image library is calculated.
After the combination feature vector representing the target image is obtained, it may be subjected to dimensionality reduction so that its expression is more concise and easier to use subsequently, reducing the computational complexity of subsequent use and improving operational efficiency.
One dimensionality-reduction method is principal component analysis (PCA), by which the combination feature vector representing the target image can be reduced in dimensionality.
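A minimal PCA sketch via the singular value decomposition, assuming the combination feature vectors are stacked as rows of a matrix; the function name, the target dimensionality of 8, and the random toy data are illustrative, not part of the claimed method:

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce an N×D feature matrix X to N×k dimensions via PCA."""
    mu = X.mean(axis=0)
    Xc = X - mu                                   # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                                  # top-k principal directions
    return Xc @ W, W, mu                          # projected data + projection

rng = np.random.default_rng(1)
X = rng.random((20, 96))     # 20 combination feature vectors of 96 dimensions
Z, W, mu = pca_reduce(X, 8)
print(Z.shape)               # (20, 8)
```

A new feature vector x can then be projected with `(x - mu) @ W` before distance calculation, which is where the reduced computational complexity mentioned above comes from.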
Meanwhile, the dimensionality-reduction method is not limited to the above-mentioned PCA; any other dimensionality-reduction method, such as backward feature elimination or feature combination, may also be used to reduce the dimensionality of the combination feature vector representing the target image.
For the relevant description of the image recognition apparatus, refer to the description of the corresponding steps in the above feature extraction apparatus; details are not repeated here.
Embodiment
Referring to Fig. 4, Fig. 4 is a schematic diagram of an electronic device 1 provided by an embodiment of the present invention. The electronic device 1 includes a memory 20, a processor 30, and a program 40, such as a feature extraction program, stored in the memory 20 and executable on the processor 30. When executing the program 40, the processor 30 implements the steps in the above feature extraction method embodiment, such as steps S10 to S13 shown in Fig. 1; alternatively, when executing the program 40, the processor 30 implements the functions of the modules/units in the above apparatus embodiment, such as modules 201 to 204.
Exemplarily, the program 40 may be divided into one or more modules/units, which are stored in the memory 20 and executed by the processor 30 to complete the present invention. The one or more modules/units may be a series of program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the program 40 in the electronic device 1. For example, the program 40 may be divided into the acquisition module 201, the region division module 202, the multi-color-space processing module 203, and the feature expression module 204 in Fig. 3; for the specific function of each module, refer to the foregoing embodiments.
The electronic device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server, or an embedded computing device such as a video camera. Those skilled in the art will understand that the schematic diagram in Fig. 4 is only an example of the electronic device 1 and does not limit it; the device may include more or fewer components than shown, combine some components, or use different components. For example, the electronic device 1 may also include an input/output device, a network access device, a bus, and the like.
The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the electronic device 1, connecting the various parts of the whole electronic device 1 through various interfaces and lines.
The memory 20 may be used to store the program 40 and/or the modules/units. The processor 30 implements the various functions of the electronic device 1 by running or executing the computer program and/or modules/units stored in the memory 20 and invoking the data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required for at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the electronic device 1 (such as audio data, video data, etc.). In addition, the memory 20 may include a high-speed random access memory, and may also include a non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash device, or another volatile solid-state storage device.
If the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments of the present invention may also be completed by instructing the relevant hardware through a program. The program may be stored in a computer-readable storage medium, and when the program is executed by a processor, the steps of the above method embodiments can be implemented. The program includes program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or apparatus capable of carrying the program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed method and apparatus may also be implemented in other ways. For example, the apparatus embodiments described above are only schematic; for instance, the division of the modules is only a logical functional division, and there may be other division manners in actual implementation.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present invention.
Claims (11)
1. A feature extraction method, characterized in that the method comprises:
obtaining a target image;
dividing the target image into at least two regions according to the upright direction of a person in the target image;
extracting color information of each region in multiple color spaces; and
combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combination feature vector representing the target image.
2. The method according to claim 1, characterized in that extracting the color information of each region in multiple color spaces comprises:
extracting, according to a color histogram, the information of each color dimension of each region in the multiple color spaces and representing it with a feature vector.
3. The method according to claim 2, characterized in that combining the color information of the multiple color spaces extracted from the at least two regions to obtain the combination feature vector representing the target image comprises:
combining the information of each color dimension of each color space in each region to obtain a combination feature vector of each color space of each region;
splicing the combination feature vectors of the color spaces of each region to obtain a combination feature vector of each region in the multiple color spaces; and
splicing the combination feature vectors of the regions in the multiple color spaces to obtain the combination feature vector of the target image in the multiple color spaces of the at least two regions.
4. The method according to claim 1, characterized in that the method further comprises:
performing dimensionality reduction on the combination feature vector representing the target image.
5. An image recognition method, characterized in that the method comprises:
obtaining a target image;
dividing the target image into at least two regions according to the upright direction of a person in the target image;
extracting color information of each region in multiple color spaces;
combining the color information of the multiple color spaces extracted from the at least two regions to obtain a combination feature vector representing the target image;
calculating the distance between the combination feature vector representing the target image and the feature vectors of sample images in an image library to judge the similarity between the target image and the sample images in the image library;
selecting the sample image in the image library with the highest similarity to the target image as the image recognition result; and
tracking the target image according to the image recognition result.
6. The image recognition method according to claim 5, characterized in that, before calculating the distance between the combination feature vector representing the target image and the feature vectors of the sample images in the image library to judge the similarity between the target image and the sample images in the image library, the method further comprises:
performing dimensionality reduction on the combination feature vector representing the target image.
7. A feature extraction apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to obtain a target image;
a region division module, configured to divide the target image into at least two regions according to the upright direction of a person in the target image;
a multi-color-space processing module, configured to extract color information of each region in multiple color spaces; and
a feature expression module, configured to combine the color information of the multiple color spaces extracted from the at least two regions to obtain a combination feature vector representing the target image.
8. The feature extraction apparatus according to claim 7, characterized in that the apparatus further comprises:
a dimensionality reduction module, configured to perform dimensionality reduction on the combination feature vector representing the target image.
9. An image recognition apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to obtain a target image;
a region division module, configured to divide the target image into at least two regions according to the upright direction of a person in the target image;
a multi-color-space processing module, configured to extract color information of each region in multiple color spaces;
a feature expression module, configured to combine the color information of the multiple color spaces extracted from the at least two regions to obtain a combination feature vector representing the target image; and
an image recognition module, configured to calculate the distance between the dimension-reduced combination feature vector representing the target image and the feature vectors of sample images in an image library to judge the similarity between images, select the sample image in the image library with the highest similarity to the target image as the image recognition result, and track the target image according to the image recognition result.
10. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a dimensionality reduction module, configured to perform dimensionality reduction on the combination feature vector representing the target image.
11. An electronic device, characterized in that the electronic device comprises a memory and a processor, the memory being configured to store at least one instruction, and the processor, when executing the program stored in the memory, implementing the feature extraction method according to any one of claims 1-4 and/or the image recognition method according to claims 5-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710766531.5A CN107506738A (en) | 2017-08-30 | 2017-08-30 | Feature extracting method, image-recognizing method, device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710766531.5A CN107506738A (en) | 2017-08-30 | 2017-08-30 | Feature extracting method, image-recognizing method, device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107506738A (en) | 2017-12-22 |
Family
ID=60693080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710766531.5A Pending CN107506738A (en) | 2017-08-30 | 2017-08-30 | Feature extracting method, image-recognizing method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107506738A (en) |
- 2017-08-30: CN CN201710766531.5A patent/CN107506738A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102072885A (en) * | 2010-12-06 | 2011-05-25 | 浙江大学 | Machine vision-based paddy neck blast infection degree grading method |
CN105303152A (en) * | 2014-07-15 | 2016-02-03 | 中国人民解放军理工大学 | Human body re-recognition method |
CN105187785A (en) * | 2015-08-31 | 2015-12-23 | 桂林电子科技大学 | Cross-checkpost pedestrian identification system and method based on dynamic obvious feature selection |
CN105630906A (en) * | 2015-12-21 | 2016-06-01 | 苏州科达科技股份有限公司 | Person searching method, apparatus and system |
CN106023228A (en) * | 2016-06-02 | 2016-10-12 | 公安部物证鉴定中心 | Counterfeit money ultraviolet fluorescent image segmentation method based on support vector machine |
Non-Patent Citations (1)
Title |
---|
LIN Xiaojin: "Moving human body recognition algorithm based on the visual color processing mechanism", China Master's Theses Full-text Database *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108256588A (en) * | 2018-02-12 | 2018-07-06 | 兰州工业学院 | A kind of several picture identification feature extracting method and system |
CN110659541A (en) * | 2018-06-29 | 2020-01-07 | 深圳云天励飞技术有限公司 | Image recognition method, device and storage medium |
CN110674834A (en) * | 2018-07-03 | 2020-01-10 | 百度在线网络技术(北京)有限公司 | Geo-fence identification method, device, equipment and computer-readable storage medium |
CN108822186A (en) * | 2018-07-06 | 2018-11-16 | 广东石油化工学院 | A kind of molecular biology test extraction mortar and its extracting method |
CN113158715A (en) * | 2020-11-05 | 2021-07-23 | 西安天伟电子系统工程有限公司 | Ship detection method and device |
CN113223041A (en) * | 2021-06-25 | 2021-08-06 | 上海添音生物科技有限公司 | Method, system and storage medium for automatically extracting target area in image |
CN113223041B (en) * | 2021-06-25 | 2024-01-12 | 上海添音生物科技有限公司 | Method, system and storage medium for automatically extracting target area in image |
CN115050487A (en) * | 2022-06-10 | 2022-09-13 | 奇医天下大数据科技(珠海横琴)有限公司 | Internet medical service management system based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||