CN104156689B - Method and device for positioning feature information of target object - Google Patents
- Publication number
- CN104156689B CN104156689B CN201310177961.5A CN201310177961A CN104156689B CN 104156689 B CN104156689 B CN 104156689B CN 201310177961 A CN201310177961 A CN 201310177961A CN 104156689 B CN104156689 B CN 104156689B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a method and device for positioning feature information of a target object. The method includes: dividing an image frame that contains a target object and determining the target object region; scanning the determined target object region with a feature information classifier and determining the center point position information of the region occupied by the feature information to be positioned within the target object region; performing an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, thereby improving the precision of the initial positioning; performing iterative processing on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region; and integrating the obtained position information to obtain the feature information of the target object, thereby improving the precision with which the feature information of the target object is positioned.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a method and device for positioning feature information of a target object.
Background art
With the development of science and technology and people's growing awareness of safety, large numbers of camera devices have been deployed in most public places. These cameras capture everything that happens in a public place in real time, provide evidence material for disputes that arise after an accident, and can also reproduce the scene when a case occurs. However, because the pixel resolution of the many deployed cameras is not particularly high, the collected feature information of a target object (for example, a person's facial features) is relatively blurred and cannot meet users' requirements.
In the prior art, approaches to positioning the feature information of a target object include, but are not limited to: rules based on prior knowledge, geometric shape information, color information, and appearance information.

Specifically, the prior-knowledge rule approach positions feature information using an empirical description of its general characteristics; for example, for feature information such as the eyes and mouth of a human facial region, brightness is generally lower than in neighboring areas. However, this approach cannot solve the problem of how to express human visual perception as applicable coded rules, nor the contradiction between the accuracy of the rules and their generality.

The geometric shape approach positions feature information using its geometric shape characteristics, but it is strongly affected by external factors, and the accuracy of the obtained feature information is poor.

Because the color information approach places high demands on lighting conditions and on the image capture device, it is easily disturbed by environmental factors and its precision is unstable.

Because the appearance information approach involves a large amount of computation and is difficult to relate to intuitive physical features, it is suitable only for local gray-level information and also has difficulty positioning the feature information of a target object.
In addition, the prior art positions the feature information of a target object only within picture information: the adaboost algorithm is first used to coarsely position the specified image information, so the feature information of the target object obtained in this way is limited; the ASM algorithm is then used to finely position the specified image information and further refine it. Because the fine positioning is performed on the basis of the coarse positioning, and the coarse positioning yields little feature information of the target object, the fine positioning error increases, the positioning precision of the feature information of the target object decreases, and device resources are wasted.
Summary of the invention
Embodiments of the present invention provide a method and device for positioning feature information of a target object, to solve the prior-art problems that the positioning precision of the feature information of a target object is reduced and that device resources are wasted.
A method for positioning feature information of a target object includes:

dividing an image frame containing a target object and determining the target object region in the image frame;

scanning the determined target object region using a feature information classifier, and determining the center point position information of the region occupied by the feature information to be positioned within the target object region;

performing an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, and performing iterative processing on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region;

integrating the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region, to obtain the feature information of the target object.
A device for positioning feature information of a target object includes:

a division module, configured to divide an image frame containing a target object and determine the target object region in the image frame;

a center point position information determination module, configured to scan the determined target object region using a feature information classifier and determine the center point position information of the region occupied by the feature information to be positioned within the target object region;

a region position information determination module, configured to perform an affine transformation operation on the determined center point position information, obtain the initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, and perform iterative processing on the obtained initial position information to obtain the position information corresponding to that region;

a feature information integration module, configured to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region, to obtain the feature information of the target object.
The present invention has the following beneficial effects:

The embodiments of the present invention divide an image frame containing a target object, determine the target object region in the image frame, scan the determined target object region using a feature information classifier, determine the center point position information of the region occupied by the feature information to be positioned within the target object region, and perform an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to that region, which improves the precision of the initial positioning and lays the groundwork for subsequent accurate positioning. Iterative processing is then performed on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region, and the obtained position information is integrated to obtain the feature information of the target object, which further improves the accuracy with which the feature information of the target object is positioned.
Description of the drawings
Fig. 1 is a schematic flowchart of a method for positioning feature information of a target object according to Embodiment 1 of the present invention;

Fig. 2 is a diagram of the similarity between eyes and eyebrows;

Fig. 3 is a diagram of the similarity between an open mouth and a closed mouth;

Fig. 4 is a schematic flowchart of a method for positioning feature information of a target object according to Embodiment 2 of the present invention;

Fig. 5 is a schematic structural diagram of a device for positioning feature information of a target object according to Embodiment 3 of the present invention.
Specific embodiment
To achieve the purpose of the present invention, the embodiments of the present invention provide a method and device for positioning feature information of a target object. An image frame containing a target object is divided and the target object region in the image frame is determined; the determined target object region is scanned using a feature information classifier, and the center point position information of the region occupied by the feature information to be positioned within the target object region is determined; an affine transformation operation is performed on the determined center point position information to obtain the initial position information corresponding to that region, which improves the precision of the initial positioning and lays the groundwork for subsequent accurate positioning; iterative processing is performed on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region; and the obtained position information is integrated to obtain the feature information of the target object, which further improves the accuracy with which the feature information of the target object is positioned.
Each embodiment of the present invention is described in detail below with reference to the accompanying drawings.
Embodiment one:
As shown in Fig. 1, which is a schematic flowchart of a method for positioning feature information of a target object according to Embodiment 1 of the present invention, the method includes:

Step 101: Divide the image frame containing the target object and determine the target object region in the image frame.

Each target object region contains one target object.
Specifically, in step 101, dividing the image frame containing the target object includes:

First, the received image frame is processed using the adaboost algorithm to obtain the region information of the region occupied by each target object in the image frame.

The region information includes the position information and the size information of the region occupied by the target object.

Assume that the target object to be positioned is face information and that the received image frame is a video image frame. The adaboost algorithm is used to detect faces in the received video image frame, yielding the information of the regions in the video image frame that contain faces: Face Rects = {face Rect_1, face Rect_2, ..., face Rect_i, ..., face Rect_n}, where, in face Rect_i = {x_i, y_i, width_i, height_i}, {x_i, y_i} is the position information of the i-th face region and {width_i, height_i} is the size information of the region occupied by the i-th face.
Second, the center point coordinate information of the region occupied by the target object is calculated using the position information and the size information of the region occupied by the target object.

Continuing the example above, from the {x_i, y_i} of the region occupied by the i-th target object and the {width_i, height_i} of that region, the center point coordinate information of the region occupied by the i-th target object, i.e. of face Rect_i, is calculated as (x_i + width_i/2, y_i + height_i/2).
Third, the Euclidean distance is calculated between the calculated center point coordinate information of the region occupied by the target object and the stored center point coordinate information of the region occupied by a target object whose feature information has been positioned, and the calculated distance value is compared with a set threshold.

The stored center point coordinate information of the region occupied by the target object with positioned feature information is obtained when the feature information of the target object is positioned in the image frame preceding the received image frame.

Specifically, continuing the example above, the stored feature information of the region occupied by the face with positioned features is feature = (x_1, y_1, x_2, y_2, ..., x_m, y_m), and the center point coordinate information of the region occupied by that target object is calculated as (x', y'), i.e. x' = (x_1 + x_2 + ... + x_m)/m and y' = (y_1 + y_2 + ... + y_m)/m.
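Under the assumption that a detected region is a plain (x, y, width, height) tuple and the stored feature points form a flat (x1, y1, ..., xm, ym) list, the center-point computation and Euclidean-distance gating described above can be sketched as follows; the detector itself and the threshold value are external to this sketch.

```python
import math

def rect_center(rect):
    """Center point of a detected region (x, y, width, height)."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def feature_center(points):
    """Center (x', y') of stored feature points (x1, y1, ..., xm, ym)."""
    xs = points[0::2]
    ys = points[1::2]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def is_unpositioned(rect, stored_points, threshold):
    """True when the Euclidean distance between the region's center and the
    stored feature center exceeds the set threshold, i.e. the region's
    feature information has not yet been positioned."""
    cx, cy = rect_center(rect)
    px, py = feature_center(stored_points)
    return math.hypot(cx - px, cy - py) > threshold

# A 100x100 region at (10, 10) against stored feature points centered at (60, 60).
rect = (10, 10, 100, 100)
stored = [50.0, 50.0, 70.0, 70.0]
same_region = not is_unpositioned(rect, stored, threshold=15.0)
```

A region whose center lies within the threshold of the stored feature center is treated as already positioned and routed to the tracking branch instead of a fresh classifier scan.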
Fourth, according to the comparison result, it is determined whether the target object region determined in the received image frame is a target object region whose feature information has not been positioned or a target object region whose feature information has been positioned.

Specifically, when the calculated distance value is greater than the set threshold, the target object region determined in the received image frame is determined to be a target object region whose feature information has not been positioned;

when the calculated distance value is not greater than the set threshold, the target object region determined in the received image frame is determined to be a target object region whose feature information has been positioned.
Preferably, after it is determined that the target object region determined in the received image frame is a target object region whose feature information has been positioned, the method further includes:

First, the determined target object region with positioned feature information is scaled to a size of N*N, and the stored region occupied by the target object with positioned feature information is also scaled to N*N, where N is a natural number.

This ensures that the determined target object region with positioned feature information and the stored region occupied by the target object with positioned feature information undergo the subsequent matching operation at the same pixel size, which improves the precision of the matching operation.
Second, the similarity T between the determined target object region and the stored target object region is calculated.

Specifically, the similarity T between the determined target object region and the stored target object region is calculated in the following manner: for each pixel position (x, y), let s(x, y) = 1 if |p(x, y) - q(x, y)| <= ε and s(x, y) = 0 otherwise, and let T = Σ(1 - s(x, y)) over all pixel positions, where p(x, y) and q(x, y) are the stored target object region and the determined target object region respectively, and ε is a constant in the range [0, 255]; the larger the value of ε, the larger the number s of similar pixels in the two regions.
Third, it is judged whether the calculated similarity is less than the set similarity value. If so, the determined target object region and the stored target object region are deemed to be the same target object region;

otherwise, the determined target object region is deemed to be a target object region whose feature information needs to be repositioned, i.e. a target object region whose feature information has not been positioned.

The method of determining the set similarity value includes, but is not limited to: θ*N², where N² is the image size and θ is a similarity coefficient in the range [0, 1]; when θ is 0, every judgment operation fails, and when θ is 1, every judgment operation succeeds.
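The N*N matching test above can be sketched as follows. The names and the exact form of the similarity are assumptions consistent with the surrounding text: T counts the pixel positions whose gray-level difference exceeds ε, and the decision compares T against θ*N²; the scaling of both regions to the same N*N size is taken as already done.

```python
import numpy as np

def similarity_T(p, q, eps):
    """Number of pixel positions where |p - q| > eps (dissimilar pixels)."""
    return int(np.count_nonzero(np.abs(p.astype(int) - q.astype(int)) > eps))

def same_target_region(p, q, eps, theta):
    """Deem the regions identical when T is below the set value theta * N^2."""
    n2 = p.size
    return similarity_T(p, q, eps) < theta * n2

# Two 4x4 regions differing at a single pixel, eps = 10, theta = 0.5.
p = np.zeros((4, 4), dtype=np.uint8)
q = p.copy()
q[0, 0] = 200
T = similarity_T(p, q, eps=10)
```

With θ = 0 the comparison T < 0 can never hold, so every judgment fails, and with θ = 1 it holds for any pair of regions that are not totally dissimilar, matching the behavior described in the text.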
Preferably, it should be noted that the method of determining the set threshold value d includes, but is not limited to, determining it from W and Level, where W is the window size of the target object in the LK algorithm and Level is the number of times the window of the target object is scaled.

It can be seen from this that if the value of d is too large, the feature information of a different target object may be matched to the same target object region; if the value of d is too small, the feature information of the same target object may fail to match that target object region. That is, the value of d must ensure a unique match between the feature information of the same target object and the target object region.
Step 102: Scan the determined target object region using a feature information classifier, and determine the center point position information of the region occupied by the feature information to be positioned within the target object region.

Specifically, in step 102, first, after a target object region with unpositioned feature information has been determined, the search range for the feature information of the target object also needs to be determined. When the search range is too small, the target object may not be detected; when the search range is too large, the computation load of the system increases and the false detection rate is relatively high. The search range is therefore obtained statistically, where W and H are respectively the width and height of the region occupied by the target object, as shown in Table (1) below:

 | Starting point x coordinate | Starting point y coordinate | Width | Height |
Feature information 1 | W/8 | H/8 | W/2 | H/2 |
Feature information 2 | W/2-W/8 | H/8 | W/2 | H/2 |
Feature information 3 | W/4 | H/2 | W/2 | H/2 |

Table (1)
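Table (1) can be transcribed directly into a lookup of search windows. The (start_x, start_y, width, height) tuple layout and the feature numbering follow the table; everything else is an illustrative assumption.

```python
def search_windows(W, H):
    """Return {feature: (start_x, start_y, width, height)} per Table (1).

    W and H are the width and height of the region occupied by the
    target object.
    """
    return {
        1: (W / 8,         H / 8, W / 2, H / 2),
        2: (W / 2 - W / 8, H / 8, W / 2, H / 2),
        3: (W / 4,         H / 2, W / 2, H / 2),
    }

# Example: a 160x160 target object region.
windows = search_windows(W=160, H=160)
```

Each classifier then scans only its own statistically determined window rather than the whole target object region, which keeps the computation load and false detection rate down.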
Second, for each piece of feature information, the feature information classifier is used to search for the feature information of the target object within the determined search range.

Specifically, take the case in which the target object is a face and the feature information is the eyebrows, eyes, nose, and mouth. A large amount of practical data shows that, when eye samples are cropped, a closed eye is similar in appearance to a thin eyebrow, as shown in Fig. 2. If the eye classifier is trained with samples in which only the eyes are cropped, thin eyebrows will inevitably be detected as eyes, and this occurs with particularly high frequency when the eyes are closed during image acquisition. Therefore, the eyebrow should be cropped together with the eye when an eye sample is cropped; this improves the precision of positioning the eyes and assists subsequent algorithmic processing.
For the same reason, when mouth samples are cropped for the mouth classifier: because the mouth varies greatly, especially between open and closed, as shown in Fig. 3, the precision of positioning the mouth is reduced; relative to the mouth, however, the nose changes little. Therefore, the mouth is cropped together with the nose, which improves the detection precision of the mouth.
Below, taking the left eye as the feature information as an example: the region occupied by the left eye and the left eyebrow is scanned, and the center point centerPoint, maximum point maxPoint, and minimum point minPoint of the left eye and left eyebrow are determined. The left eye is then cropped centered exactly on centerPoint, with (maxPoint - minPoint) as the width and height of the crop, which ensures that both the eyebrow and the eye are cropped out.

The right eye and right eyebrow are cropped out in the same way.
The mouth is cropped in a way similar to the eyes: the coordinate point information of the nose and the whole mouth is calculated to obtain the cropping center point centerPoint, maximum point maxPoint, and minimum point minPoint; the mouth is then cropped centered on centerPoint, with (maxPoint - minPoint) as the width and height, which ensures that both the mouth and the nose are cropped out.
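The cropping rule above, a crop centered on centerPoint whose width and height are (maxPoint - minPoint), can be sketched as follows; the (x, y) tuple layout is an assumption.

```python
def crop_box(center_point, max_point, min_point):
    """Return (left, top, right, bottom) of the crop centered on center_point
    whose width and height are (max_point - min_point) per axis."""
    cx, cy = center_point
    w = max_point[0] - min_point[0]
    h = max_point[1] - min_point[1]
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

# Left eye plus left eyebrow spanning x in [30, 70], y in [20, 44],
# with center point (50, 32).
box = crop_box((50, 32), (70, 44), (30, 20))
```

Because the box spans the full extent between minPoint and maxPoint, the eye and eyebrow (or mouth and nose) are guaranteed to fall inside the cropped sample.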
The cropped samples are uniformly scaled to a size of 20*20, and the most basic Haar features are used, 78460 features in total. After training, a left-eye classifier, a right-eye classifier, and a mouth classifier are obtained. The obtained classifiers are then used to search within the determined ranges to obtain the positions of the left eye, the right eye, and the mouth. Because the training samples were cropped with the eyes and eyebrows together, the determined center point position information of the eyes and the mouth is not the true center point position of the eyes and the mouth, but rather the center point position information of the eyes together with the eyebrows, and of the mouth together with the nose.
Step 103: Perform an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, and perform iterative processing on the obtained initial position information to obtain the position information corresponding to that region.

Specifically, in step 103, an affine transformation operation is performed on the determined center point position information using the ASM training algorithm: the ASM training algorithm is used to obtain the mean shape of the region occupied by the feature information to be positioned in the target object, and an affine transformation (including rotation, scaling, and translation) is then computed to obtain the initial position information of the region occupied by the feature information to be positioned in the target object.

Iterative processing is then performed on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region.
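A minimal sketch of the initialization described above: the ASM mean shape is placed in the image by a similarity transform (rotation, scaling, translation) estimated from two anchor center points, e.g. the detected left-eye and right-eye centers. The anchor choice and the complex-number formulation are assumptions for illustration; the ASM iteration itself is not reproduced.

```python
def similarity_from_anchors(src_a, src_b, dst_a, dst_b):
    """2D similarity transform mapping src_a -> dst_a and src_b -> dst_b,
    returned as a complex multiplier m (rotation + scale) and offset t."""
    s1, s2 = complex(*src_a), complex(*src_b)
    d1, d2 = complex(*dst_a), complex(*dst_b)
    m = (d2 - d1) / (s2 - s1)   # rotation and scale
    t = d1 - m * s1             # translation
    return m, t

def apply_transform(points, m, t):
    """Apply the similarity transform to a list of (x, y) points."""
    out = []
    for x, y in points:
        z = m * complex(x, y) + t
        out.append((z.real, z.imag))
    return out

# Mean shape with eye anchors at (-1, 0) and (1, 0); detected eye centers at
# (40, 50) and (60, 50): a pure scale of 10 plus translation to (50, 50).
m, t = similarity_from_anchors((-1, 0), (1, 0), (40, 50), (60, 50))
init = apply_transform([(-1, 0), (1, 0), (0, 1)], m, t)
```

Mapping the whole mean shape through this transform yields the initial position information for every point, which the subsequent iterative refinement then adjusts.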
Preferably, for the target object region with positioned feature information determined in step 101: because the texture of feature information differs across video image frames, and the region occupied by the feature information of the same target object differs across video image frames, the optical flow value of each positioned feature information in the adjacent preceding video image frame can be used to calculate the optical flow value of each feature information in the target object region with positioned feature information in the received image frame.
Specifically, the optical flow value is calculated by the following LK optical flow algorithm:

Let G = [ΣI_x², ΣI_xI_y; ΣI_xI_y, ΣI_y²] and b = [-ΣI_xI_t; -ΣI_yI_t]; then (u, v)^T = G^(-1)b,

where I_x and I_y denote the gradient in the x direction and the gradient in the y direction at a picture point (x1, y1) in the target object region of the current image frame, I_t denotes the gradient of the current image frame in the time direction, i.e. the difference between the corresponding positions of the picture point in the preceding and following frames, the sums are taken over the window around the picture point, and u and v are the optical flow values to be found for each feature information, u denoting the optical flow velocity in the horizontal direction and v the optical flow velocity in the vertical direction.
After the optical flow value of each feature information is obtained, the position information of the obtained feature information is adjusted by voting.

Specifically, because the optical flow value calculated from a single point suffers from problems such as precision error and ill-conditioning, these problems can be avoided by also calculating over the pixels surrounding the feature information point. After the optical flow values of the feature information point and of the surrounding pixel points are calculated, the angle and length that receive the most votes are selected by voting as the position information of the feature information.
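A single-window LK solve matching the equations above can be sketched as follows: u and v are obtained by inverting the 2x2 system built from the summed gradient products. The synthetic Gaussian image pair is an illustrative assumption, and the neighborhood voting step is not reproduced.

```python
import numpy as np

def lk_flow(frame1, frame2):
    """Least-squares optical flow (u, v) over one window."""
    Ix = np.gradient(frame1, axis=1)   # x-direction gradient
    Iy = np.gradient(frame1, axis=0)   # y-direction gradient
    It = frame2 - frame1               # time-direction gradient
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(G, b)
    return u, v

# A smooth blob shifted by 0.4 pixels in x between the two frames.
ys, xs = np.mgrid[0:32, 0:32]
frame1 = np.exp(-((xs - 16.0) ** 2 + (ys - 16.0) ** 2) / 50.0)
frame2 = np.exp(-((xs - 16.4) ** 2 + (ys - 16.0) ** 2) / 50.0)
u, v = lk_flow(frame1, frame2)
```

Repeating this solve at the feature point and its surrounding pixels yields the set of flow vectors over which the angle-and-length vote described above is taken.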
Step 104: Integrate the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region, to obtain the feature information of the target object.

Specifically, in step 104, integrating the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region includes:

First, the shape information of the target object region is determined from the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region.

For example: shape = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} is the obtained position information.
shape = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} may also be the original shape information of the feature information.

Second, the shape size information corresponding to the determined shape information of the target object region is obtained according to the preset correspondence between the shape information of feature information and shape size information.

Specifically, a correspondence between the shape information shape of the feature information and the shape size information S is established, where shape = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} is the obtained position information.

It should be noted that the original shape size information can be obtained from the size information of the original shape information.
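The text leaves the exact correspondence between shape and S to a preset rule. A common choice in shape-model work, assumed here purely for illustration, is the centroid size: the sum of the distances of the shape points from their centroid. It scales linearly with the shape, which is what the Scale() adjustment in the following step relies on.

```python
import math

def shape_size(shape):
    """Centroid size S of shape = [(x1, y1), ..., (xn, yn)].

    This particular size measure is an assumption; the patent only states
    that some preset correspondence between shape and S exists."""
    n = len(shape)
    cx = sum(x for x, _ in shape) / n
    cy = sum(y for _, y in shape) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in shape)

# A unit example: the 4 corners of a 2x2 square, each sqrt(2) from (1, 1).
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
S = shape_size(square)
```

Doubling every coordinate doubles S, so rescaling a shape by a factor s rescales its size by exactly s.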
Third, it is judged whether the shape size information corresponding to the obtained shape information of the target object region satisfies the set shape size condition. If the shape size information corresponding to the obtained shape information of the target object region satisfies the set condition, the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region is integrated, and the feature information of the target object is determined;

if the shape size information corresponding to the obtained shape information of the target object region does not satisfy the set shape size condition, the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region is adjusted, and the operation of determining the shape information of the target object region from the adjusted position information continues to be performed.

Specifically, the set shape size condition is that the obtained shape size information lies in the range from [original shape size information * constraint factor] to [original shape size information * (2 - constraint factor)], i.e. S ∈ [S_0*θ, S_0*(2-θ)], where the constraint factor θ = 0.9.
Preferably, if the obtained shape size information does not satisfy the set shape size condition, adjusting the obtained position information specifically includes:

First, when the obtained shape size information is less than [original shape size information * constraint factor], each obtained position coordinate is scaled by the quotient of [original shape size information * constraint factor] and the obtained shape size information.

For example: when S_i is less than S_0*θ, the shape of the current feature information is too small, and Scale(shape_i, S_0*θ/S_i) is called to constrain the shape size, where the scale factor s = S_0*θ/S_i.

Second, when the obtained shape size information is greater than [original shape size information * (2 - constraint factor)], each obtained position coordinate is scaled by the quotient of [original shape size information * (2 - constraint factor)] and the obtained shape size information.

For example: if S_i is greater than S_0*(2-θ), the shape of the current feature information is too large, and Scale(shape_i, S_0*(2-θ)/S_i) is called to constrain the shape size, where the scale factor s = S_0*(2-θ)/S_i.
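The constraint and adjustment above can be sketched as follows: S_i is kept within [S_0*θ, S_0*(2-θ)] with θ = 0.9, and an out-of-range shape is rescaled by s = bound/S_i. Scaling about the shape's centroid, rather than the coordinate origin, is an assumption; the text specifies only the scale factor.

```python
def constrain_shape(shape, S_i, S0, theta=0.9):
    """Return (shape, s): the shape rescaled into [S0*theta, S0*(2-theta)]
    and the applied scale factor s (1.0 when already in range)."""
    lo, hi = S0 * theta, S0 * (2.0 - theta)
    if lo <= S_i <= hi:
        return shape, 1.0
    s = (lo if S_i < lo else hi) / S_i
    n = len(shape)
    cx = sum(x for x, _ in shape) / n
    cy = sum(y for _, y in shape) / n
    # Scale every point about the centroid by the factor s (assumed pivot).
    scaled = [(cx + s * (x - cx), cy + s * (y - cy)) for x, y in shape]
    return scaled, s

# A shape whose size 0.5*S0 falls below the lower bound 0.9*S0, so s = 1.8.
shape = [(9.0, 10.0), (11.0, 10.0)]
adjusted, s = constrain_shape(shape, S_i=0.5, S0=1.0)
```

After the adjustment, the shape information is recomputed from the scaled positions and the size check is repeated, as the iteration in the text describes.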
Through the scheme of Embodiment 1 of the present invention, the image frame containing the target object is divided and the target object region in the image frame is determined; the determined target object region is scanned using a feature information classifier, and the center point position information of the region occupied by the feature information to be positioned within the target object region is determined; an affine transformation operation is performed on the determined center point position information to obtain the initial position information corresponding to that region, which improves the precision of the initial positioning and lays the groundwork for subsequent accurate positioning; iterative processing is performed on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region; and the obtained position information is integrated to obtain the feature information of the target object, which further improves the accuracy with which the feature information of the target object is positioned.
Embodiment two:
As shown in Fig. 4, which is a schematic flowchart of a method for positioning feature information of a target object according to Embodiment 2 of the present invention, Embodiment 2 is a positioning method based on the same conception as Embodiment 1 and specifically includes:

Step 201: Divide the image frame containing the target object, determine the target object region in the image frame, and judge the determined target object region. If the target object region is a target object region whose feature information has been positioned, perform step 202; otherwise, perform step 206.

Each target object region contains one target object.
Specifically, in step 201, dividing the image frame containing the target object includes:
First, the received image frame is calculated using the adaboost algorithm to obtain the region information of the region occupied by each target object in the image frame.
The region information includes the position information and the size information of the region occupied by the target object.
Assume the target object to be positioned is face information and the received image frame is a video frame image. The received video frame image is detected with the adaboost algorithm to obtain the region information of the faces contained in the video frame image: Face Rects = {face Rect1, face Rect2, ..., face Recti, ..., face Rectn}, where face Recti = {xi, yi, widthi, heighti}, in which {xi, yi} is the position information of the i-th face region and {widthi, heighti} is the size information of the region occupied by the i-th face.
Secondly, the center point coordinate information of the region occupied by the target object is calculated using the position information and the size information of the region occupied by the target object.
Continuing the above example, according to {xi, yi} of the region occupied by the i-th target object and {widthi, heighti} of the region occupied by the target object, the center point coordinate of the region occupied by the i-th target object, i.e., the center point of face Recti, is calculated as (xi + widthi/2, yi + heighti/2).
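As a sketch, the center point computation described above can be written as follows; the example rectangles are hypothetical, not detection output from the patent's adaboost step:

```python
def rect_center(rect):
    """Center point of a detected region rect = (x, y, width, height):
    (x + width/2, y + height/2)."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

# hypothetical face detections (xi, yi, widthi, heighti)
face_rects = [(40, 60, 100, 100), (200, 80, 80, 80)]
centers = [rect_center(r) for r in face_rects]
# centers == [(90.0, 110.0), (240.0, 120.0)]
```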
Thirdly, a Euclidean distance calculation is performed between the calculated center point coordinate information and the stored center point coordinate information of the region occupied by the target object with located feature information, and the calculated distance value is compared with a set threshold value.
Here, the stored center point coordinate information of the region occupied by the target object with located feature information is obtained when positioning the feature information of the target object in the image frame preceding the received image frame.
Specifically, continuing the above example, assume the stored feature information of the region occupied by the face with located feature information is feature = (x1, y1, x2, y2, ..., xm, ym); the calculated center point coordinate of the region occupied by the target object with located feature information is (x', y'), i.e., x' = (x1 + x2 + ... + xm)/m and y' = (y1 + y2 + ... + ym)/m.
Fourthly, according to the comparison result, it is determined whether the target object region divided in the received image frame information is a target object region with unlocated feature information or a target object region with located feature information.
Specifically, when the calculated distance value is greater than the set threshold value, the target object region divided in the received image frame information is determined to be a target object region with unlocated feature information;
when the calculated distance value is not greater than the set threshold value, the target object region divided in the received image frame information is determined to be a target object region with located feature information.
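The third and fourth steps above amount to a distance gate between frames; a minimal sketch, with the threshold value chosen arbitrarily for illustration:

```python
import math

def is_located(center, stored_center, threshold):
    """Return True when the Euclidean distance between the newly computed
    center and the center stored from the previous frame does not exceed
    the set threshold, i.e. the region counts as having located feature
    information; otherwise it is treated as unlocated."""
    return math.dist(center, stored_center) <= threshold

# hypothetical centers: the region moved 5 pixels between frames
assert is_located((90.0, 110.0), (93.0, 114.0), threshold=20) is True
assert is_located((90.0, 110.0), (150.0, 110.0), threshold=20) is False
```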
Step 202: Scale the determined target object region with located feature information to N*N size, and scale the stored region occupied by the target object with located feature information to N*N size.
Step 203: Calculate the similarity T between the determined target object region and the stored target object region.
Specifically, the similarity T between the determined target object region and the stored target object region is calculated from p(x, y) and q(x, y), which are the stored target object region and the determined target object region respectively, and a constant ε whose range is [0, 255]; the larger the value of ε, the larger the similarity count s of the two regions.
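The exact formula for T is not reproduced in the text above; a form consistent with its description (p and q are the two equally sized regions, ε is a tolerance in [0, 255], and a larger ε yields a larger pixel-match count s) might look like this — the per-pixel tolerance test and the normalization by area are assumptions:

```python
def region_similarity(p, q, eps=16):
    """Count the pixels of two equally sized regions whose absolute
    difference is within eps (the count s), then normalize by the region
    area to obtain a similarity T in [0, 1]."""
    s = sum(1 for row_p, row_q in zip(p, q)
            for a, b in zip(row_p, row_q) if abs(a - b) <= eps)
    return s / (len(p) * len(p[0]))

# 3 of 4 pixels agree within eps=16, so T = 0.75
a = [[10, 20], [30, 40]]
b = [[12, 18], [90, 41]]
assert region_similarity(a, b) == 0.75
```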
Step 204: Judge whether the calculated similarity is greater than the set similarity value. If so, identify the determined target object region and the stored target object region as the same target object region; otherwise, identify the determined target object region as a target object region whose feature information needs to be repositioned, and continue with step 206.
Step 205: Using the optical flow values of each piece of feature information located in the adjacent previous video image frame information, calculate the optical flow value of each piece of feature information in the target object region with located feature information in the received image frame information.
Specifically, the calculation uses the following LK optical flow algorithm:
Let A = [[ΣIx², ΣIxIy], [ΣIxIy, ΣIy²]] and b = -(ΣIxIt, ΣIyIt)ᵀ; then (u, v)ᵀ = A⁻¹b,
where Ix1 and Iy1 represent the gradient in the x direction and the gradient in the y direction at the image point (x1, y1) of the target object region in the current image frame information, It represents the gradient of the current image frame information in the time direction, i.e., the difference between the preceding and following frames at the position corresponding to the image point, and u and v are the optical flow values to be obtained for each piece of feature information, u representing the optical flow velocity in the horizontal direction and v representing the optical flow velocity in the vertical direction.
More preferably, after the optical flow value of each piece of feature information is obtained, the position information of the obtained feature information is adjusted by means of voting.
Specifically, since the optical flow value calculated for a single point suffers from precision errors and ill-conditioning, these problems can be avoided by also calculating the surrounding pixels of the feature information point. After the optical flow values of the feature information point and its surrounding pixel points are calculated, the angle and length with the most votes are selected by voting as the position information of the feature information.
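A pure-numpy sketch of the single-window LK solve described above; the window size, the dense-gradient formulation via np.gradient, and the synthetic test pattern are all assumptions:

```python
import numpy as np

def lk_flow(prev, curr, x, y, win=7):
    """Single-window Lucas-Kanade: solve A @ (u, v) = b with
    A = [[sum Ix^2, sum Ix*Iy], [sum Ix*Iy, sum Iy^2]] and
    b = -(sum Ix*It, sum Iy*It) over a (2*win+1)^2 neighborhood,
    where It is the temporal difference curr - prev."""
    Iy_full, Ix_full = np.gradient(prev.astype(float))  # axis 0 is y
    It_full = curr.astype(float) - prev.astype(float)
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    Ix, Iy, It = Ix_full[sl].ravel(), Iy_full[sl].ravel(), It_full[sl].ravel()
    A = np.array([[Ix @ Ix, Ix @ Iy], [Ix @ Iy, Iy @ Iy]])
    b = -np.array([Ix @ It, Iy @ It])
    u, v = np.linalg.solve(A, b)  # optical flow velocities
    return u, v
```

For a textured pattern shifted one pixel to the right between frames, the solve recovers u ≈ 1, v ≈ 0; a degenerate window (the aperture problem) makes A singular, which is one reason the voting step above also samples surrounding pixels.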
Step 206: Scan the target object region with unlocated feature information using the feature information classifier, and determine the center point position information of the region occupied by the feature information to be positioned in the target object region.
Step 207: Perform an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to the region occupied by the feature information to be positioned in the target object region.
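Step 207's affine transformation of center points can be sketched as below; the 2x3 matrix and the example values are hypothetical, since the patent does not specify how the transform is obtained:

```python
import numpy as np

def apply_affine(points, M):
    """Apply a 2x3 affine matrix M to an array of (x, y) points:
    each point p maps to M[:, :2] @ p + M[:, 2]."""
    pts = np.asarray(points, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# hypothetical transform: scale by 2, then translate by (10, 5)
M = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 5.0]])
assert np.allclose(apply_affine([(3.0, 4.0)], M), [[16.0, 13.0]])
```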
Step 208: Iteratively process the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned in the target object region.
Step 209: Integrate the obtained position information to obtain the feature information of the target object.
Embodiment three:
As shown in Fig. 5, which is a schematic structural diagram of a device for positioning feature information of a target object according to embodiment three of the present invention, the device includes a division module 11, a center point position information determination module 12, a region position information determination module 13 and a feature information integration module 14, wherein:
the division module 11 is configured to divide the image frame containing the target object and determine the target object region in the image frame;
the center point position information determination module 12 is configured to scan the determined target object region using the feature information classifier and determine the center point position information of the region occupied by the feature information to be positioned in the target object region;
the region position information determination module 13 is configured to perform an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to the region occupied by the feature information to be positioned in the target object region, and to iteratively process the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned in the target object region;
the feature information integration module 14 is configured to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region and obtain the feature information of the target object.
Specifically, the division module 11 is configured to calculate the received image frame using the adaboost algorithm to obtain the region information of the region occupied by each target object in the image frame, wherein the region information includes the position information and the size information of the region occupied by the target object;
calculate the center point coordinate information of the region occupied by the target object using the position information and the size information of the region occupied by the target object;
perform a Euclidean distance calculation between the calculated center point coordinate information of the region occupied by the target object and the stored center point coordinate information of the region occupied by the target object with located feature information, and compare the calculated distance value with the set threshold value, wherein the stored center point coordinate information of the region occupied by the target object with located feature information is obtained according to the feature information positioning result of the previous image frame;
when the calculated distance value is greater than the set threshold value, determine that the target object region determined in the received image frame is a target object region with unlocated feature information;
when the calculated distance value is not greater than the set threshold value, determine that the target object region determined in the received image frame is a target object region with located feature information.
Specifically, the feature information integration module 14 is configured to determine the shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region;
obtain the shape size information corresponding to the determined shape information of the target object region according to the preset correspondence between the shape information and the shape size information of the feature information;
judge whether the obtained shape size information corresponding to the shape information of the target object region satisfies the set shape size condition; if the obtained shape size information corresponding to the shape information of the target object region satisfies the set condition, integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to determine the feature information of the target object;
if the obtained shape size information corresponding to the shape information of the target object region does not satisfy the set shape size condition, adjust the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region, and continue to perform, according to the adjusted position information, the operation of determining the shape information of the target object region.
The feature information integration module 14 is specifically configured to establish the correspondence between the shape information shape of the feature information and the shape size information S in the following manner:
wherein shape = {(x1, y1), (x2, y2), ..., (xn, yn)}, and {(x1, y1), (x2, y2), ..., (xn, yn)} is the obtained position information.
More preferably, the set shape size condition is: S ∈ [S0*θ, S0*(2-θ)], where the constraint factor θ = 0.9;
the feature information integration module 14 is specifically configured to, when the obtained S is less than S0*θ, multiply each piece of obtained position information by S0*θ/S;
and when the obtained S is greater than S0*(2-θ), multiply each piece of obtained position information by S0*(2-θ)/S.
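A sketch of the size constraint described above; the size measure S is taken as given since its formula is not reproduced here, and the point list is hypothetical:

```python
def constrain_shape(points, s, s0, theta=0.9):
    """Keep a shape's size S inside [S0*theta, S0*(2-theta)]; when S falls
    outside, scale every position by the boundary ratio (S0*theta/S below
    the lower bound, S0*(2-theta)/S above the upper bound)."""
    lo, hi = s0 * theta, s0 * (2 - theta)
    if lo <= s <= hi:
        return points  # condition already satisfied
    factor = (lo if s < lo else hi) / s
    return [(x * factor, y * factor) for x, y in points]

# hypothetical: S = 0.5 is below S0*theta = 0.9, so scale by 0.9/0.5 = 1.8
assert constrain_shape([(1.0, 2.0)], s=0.5, s0=1.0) == [(1.8, 3.6)]
```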
More preferably, the device further includes a region matching module 15, wherein:
the region matching module 15 is configured to, after it is determined that the target object region determined in the received image frame is a target object region with located feature information, scale the determined target object region with located feature information to N*N size and scale the stored region occupied by the target object with located feature information to N*N size; calculate the similarity T between the determined target object region and the stored target object region; and judge whether the calculated similarity is greater than the set similarity value; if so, identify the determined target object region and the stored target object region as the same target object region;
otherwise, identify the determined target object region as a target object region whose feature information needs to be repositioned, wherein N is a natural number.
It will be understood by those skilled in the art that embodiments of the present invention may be provided as a method, a device (equipment), or a computer program product. Therefore, the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (equipment) and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operation steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these changes and modifications.
Claims (11)
1. A method for positioning feature information of a target object, characterized by comprising:
dividing an image frame containing a target object, and determining a target object region in the image frame;
scanning the determined target object region using a feature information classifier, and determining center point position information of a region occupied by feature information to be positioned in the target object region;
performing an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned in the target object region, and iteratively processing the obtained initial position information to obtain position information corresponding to the region occupied by the feature information to be positioned in the target object region;
integrating the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to obtain the feature information of the target object.
2. The method of claim 1, characterized in that dividing the image frame containing the target object and determining the target object region specifically comprises:
calculating the received image frame using the adaboost algorithm to obtain region information of the region occupied by each target object in the image frame, wherein the region information includes position information and size information of the region occupied by the target object;
calculating center point coordinate information of the region occupied by the target object using the position information and the size information of the region occupied by the target object;
performing a Euclidean distance calculation between the calculated center point coordinate information of the region occupied by the target object and stored center point coordinate information of a region occupied by a target object with located feature information, and comparing the calculated distance value with a set threshold value, wherein the stored center point coordinate information of the region occupied by the target object with located feature information is obtained according to a feature information positioning result of a previous image frame;
when the calculated distance value is greater than the set threshold value, determining that the target object region determined in the received image frame is a target object region with unlocated feature information;
when the calculated distance value is not greater than the set threshold value, determining that the target object region determined in the received image frame is a target object region with located feature information.
3. The method of claim 1, characterized in that, before integrating the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region and obtaining the feature information of the target object, the method further comprises:
determining shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region;
obtaining shape size information corresponding to the determined shape information of the target object region according to a preset correspondence between shape information and shape size information of the target object region;
judging whether the obtained shape size information corresponding to the shape information of the target object region satisfies a set shape size condition; if the obtained shape size information corresponding to the shape information of the target object region satisfies the set condition, integrating the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to determine the feature information of the target object;
if the obtained shape size information corresponding to the shape information of the target object region does not satisfy the set shape size condition, adjusting the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region, and continuing to perform, according to the adjusted position information, the operation of determining the shape information of the target object region.
4. The method of claim 3, characterized in that the preset correspondence between the shape information and the shape size information of the target object region specifically comprises:
establishing the correspondence between the shape information shape of the target object region and the shape size information S in the following manner:
wherein shape = {(x1, y1), (x2, y2), ..., (xn, yn)}, and {(x1, y1), (x2, y2), ..., (xn, yn)} is the obtained position information.
5. The method of claim 4, characterized in that the set shape size condition is: S ∈ [S0*θ, S0*(2-θ)], where the constraint factor θ = 0.9 and S0 represents the original shape size information;
if the obtained shape size information corresponding to the shape information of the target object region does not satisfy the set shape size condition, adjusting the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region specifically comprises:
when the obtained S is less than S0*θ, multiplying each piece of obtained position information by S0*θ/S;
when the obtained S is greater than S0*(2-θ), multiplying each piece of obtained position information by S0*(2-θ)/S.
6. The method of claim 2, characterized in that, after determining that the target object region determined in the received image frame is a target object region with located feature information, the method further comprises:
scaling the determined target object region with located feature information to N*N size, and scaling the stored region occupied by the target object with located feature information to N*N size, wherein N is a natural number;
calculating a similarity T between the determined target object region and the stored target object region;
judging whether the calculated similarity is greater than a set similarity value; if so, identifying the determined target object region and the stored target object region as the same target object region;
otherwise, identifying the determined target object region as a target object region whose feature information needs to be repositioned.
7. A device for positioning feature information of a target object, characterized by comprising:
a division module, configured to divide an image frame containing a target object and determine a target object region in the image frame;
a center point position information determination module, configured to scan the determined target object region using a feature information classifier and determine center point position information of a region occupied by feature information to be positioned in the target object region;
a region position information determination module, configured to perform an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned in the target object region, and to iteratively process the obtained initial position information to obtain position information corresponding to the region occupied by the feature information to be positioned in the target object region;
a feature information integration module, configured to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region and obtain the feature information of the target object.
8. The device of claim 7, characterized in that
the division module is specifically configured to calculate the received image frame using the adaboost algorithm to obtain region information of the region occupied by each target object in the image frame, wherein the region information includes position information and size information of the region occupied by the target object;
calculate center point coordinate information of the region occupied by the target object using the position information and the size information of the region occupied by the target object;
perform a Euclidean distance calculation between the calculated center point coordinate information of the region occupied by the target object and stored center point coordinate information of a region occupied by a target object with located feature information, and compare the calculated distance value with a set threshold value, wherein the stored center point coordinate information of the region occupied by the target object with located feature information is obtained according to a feature information positioning result of a previous image frame;
when the calculated distance value is greater than the set threshold value, determine that the target object region determined in the received image frame is a target object region with unlocated feature information;
when the calculated distance value is not greater than the set threshold value, determine that the target object region determined in the received image frame is a target object region with located feature information.
9. The device of claim 7, characterized in that
the feature information integration module is specifically configured to determine shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region;
obtain shape size information corresponding to the determined shape information of the target object region according to a preset correspondence between shape information and shape size information of the target object region;
judge whether the obtained shape size information corresponding to the shape information of the target object region satisfies a set shape size condition; if the obtained shape size information corresponding to the shape information of the target object region satisfies the set condition, integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to determine the feature information of the target object;
if the obtained shape size information corresponding to the shape information of the target object region does not satisfy the set shape size condition, adjust the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region, and continue to perform, according to the adjusted position information, the operation of determining the shape information of the target object region.
10. The device of claim 9, characterized in that
the feature information integration module is specifically configured to establish the correspondence between the shape information shape of the target object region and the shape size information S in the following manner:
wherein shape = {(x1, y1), (x2, y2), ..., (xn, yn)}, and {(x1, y1), (x2, y2), ..., (xn, yn)} is the obtained position information.
11. The device of claim 10, characterized in that the set shape size condition is: S ∈ [S0*θ, S0*(2-θ)], where the constraint factor θ = 0.9 and S0 represents the original shape size information;
the feature information integration module is specifically configured to, when the obtained S is less than S0*θ, multiply each piece of obtained position information by S0*θ/S;
and when the obtained S is greater than S0*(2-θ), multiply each piece of obtained position information by S0*(2-θ)/S.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310177961.5A CN104156689B (en) | 2013-05-13 | 2013-05-13 | Method and device for positioning feature information of target object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104156689A CN104156689A (en) | 2014-11-19 |
CN104156689B true CN104156689B (en) | 2017-03-22 |
Family
ID=51882186
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310177961.5A Active CN104156689B (en) | 2013-05-13 | 2013-05-13 | Method and device for positioning feature information of target object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104156689B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105098651A (en) * | 2014-12-26 | 2015-11-25 | 天津航天中为数据系统科技有限公司 | Power transmission line insulator positioning method and system |
US10398402B2 (en) * | 2015-02-23 | 2019-09-03 | Siemens Healthcare Gmbh | Method and system for automated positioning of a medical diagnostic device |
KR20180055070A (en) * | 2016-11-16 | 2018-05-25 | 삼성전자주식회사 | Method and device to perform to train and recognize material |
CN110110110A (en) * | 2018-02-02 | 2019-08-09 | 杭州海康威视数字技术股份有限公司 | One kind is to scheme to search drawing method, device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101236599A (en) * | 2007-12-29 | 2008-08-06 | 浙江工业大学 | Human face recognition detection device based on multi- video camera information integration |
CN102298778A (en) * | 2003-10-30 | 2011-12-28 | 日本电气株式会社 | Estimation system, estimation method, and estimation program for estimating object state |
CN102855629A (en) * | 2012-08-21 | 2013-01-02 | 西华大学 | Method and device for positioning target object |
- 2013-05-13: Application CN201310177961.5A filed in China; granted as patent CN104156689B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102298778A (en) * | 2003-10-30 | 2011-12-28 | 日本电气株式会社 | Estimation system, estimation method, and estimation program for estimating object state |
CN101236599A (en) * | 2007-12-29 | 2008-08-06 | 浙江工业大学 | Human face recognition detection device based on multi-video camera information integration |
CN102855629A (en) * | 2012-08-21 | 2013-01-02 | 西华大学 | Method and device for positioning target object |
Also Published As
Publication number | Publication date |
---|---|
CN104156689A (en) | 2014-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10600207B2 (en) | Posture state estimation apparatus and posture state estimation method | |
CN103914676B (en) | Method and apparatus for use in face recognition | |
CN102375970B (en) | Identity authentication method and device based on faces | |
Huijuan et al. | Fast image matching based-on improved SURF algorithm | |
US8750573B2 (en) | Hand gesture detection | |
US11804071B2 (en) | Method for selecting images in video of faces in the wild | |
CN103810490B (en) | Method and apparatus for determining attributes of a face image | |
CN105205480B (en) | Human eye localization method and system in complex scenes | |
US8577099B2 (en) | Method, apparatus, and program for detecting facial characteristic points | |
CN101930543B (en) | Method for adjusting eye image in self-photographed video | |
CN103020992B (en) | Video image saliency detection method based on motion-color associations | |
EP1271394A2 (en) | Method for automatically locating eyes in an image | |
US20120027252A1 (en) | Hand gesture detection | |
CN103473564B (en) | Frontal face detection method based on sensitive regions | |
WO2016084072A1 (en) | Anti-spoofing system and methods useful in conjunction therewith | |
CN105608441B (en) | Vehicle type recognition method and system | |
CN104050448B (en) | Human eye positioning and eye region localization method and device | |
CN101980242A (en) | Face discrimination method and system, and public safety system | |
CN104156689B (en) | Method and device for positioning feature information of target object | |
CN110472460A (en) | Face image processing process and device | |
WO2018170695A1 (en) | Method and device for recognizing descriptive attributes of appearance feature | |
CN111860400A (en) | Face enhancement recognition method, device, equipment and storage medium | |
CN109389105A (en) | Multitask-based iris detection and viewpoint classification method | |
CN107153806B (en) | Face detection method and device | |
CN105809085B (en) | Human-eye positioning method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |