CN104156689A - Method and device for positioning feature information of target object - Google Patents

Method and device for positioning feature information of target object Download PDF

Info

Publication number
CN104156689A
CN104156689A CN201310177961.5A
Authority
CN
China
Prior art keywords
target object
information
region
target object region
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310177961.5A
Other languages
Chinese (zh)
Other versions
CN104156689B (en)
Inventor
王刚
汪海洋
周祥明
潘石柱
张兴明
傅利泉
朱江明
吴军
吴坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201310177961.5A priority Critical patent/CN104156689B/en
Publication of CN104156689A publication Critical patent/CN104156689A/en
Application granted granted Critical
Publication of CN104156689B publication Critical patent/CN104156689B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for positioning feature information of a target object. The method includes: dividing an image frame containing a target object and determining a target object region; using a feature information classifier to scan the determined target object region and determine the center point position information of the region occupied by the feature information to be positioned within the target object region; performing an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned, thereby improving the precision of the initial positioning; performing iterative processing on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned; and integrating the obtained position information to obtain the feature information of the target object, thereby improving the precision with which the feature information of the target object is positioned.

Description

Method and apparatus for positioning feature information of a target object
Technical field
The present invention relates to the field of image processing, and in particular to a method and apparatus for positioning feature information of a target object.
Background technology
With the development of science and technology, people's awareness of safety keeps growing, and large numbers of cameras have been deployed in most public places. The deployed cameras capture everything that happens in a public place in real time, providing evidence for disputes that arise after an incident and allowing the scene at the time of an incident to be reproduced. However, because many of the deployed cameras do not have especially high resolution, the captured feature information of a target object (for example, a person's facial features) is relatively blurry and cannot meet users' requirements.
In the prior art, approaches to positioning the feature information of a target object include, but are not limited to: prior-rule-based, geometry-based, color-based and appearance-based approaches.
Specifically, the prior-rule approach positions features using empirical descriptions of their general properties; for example, feature information such as the eyes and mouth of a human face is generally darker in brightness than the surrounding area. However, this approach cannot solve the problem of how to express human visual perception as suitable coding rules, nor the contradiction between the accuracy of the rules and their generality;
The geometry approach positions features using their geometric properties, but it is strongly affected by external factors, and the accuracy of the obtained feature information is poor;
The color approach places high demands on lighting conditions and on the characteristics of the image acquisition device, so it is susceptible to environmental interference and its precision is unstable;
The appearance approach is computationally expensive in application, its features are difficult to relate intuitively to physical features, and with only local gray-level information it is also difficult to position the feature information of a target object.
In addition, prior-art approaches to positioning the feature information of a target object are limited to positioning within image information: an adaboost algorithm first performs coarse positioning on the specified image information, which yields relatively little feature information of the target object; an ASM algorithm then performs fine positioning on the specified image information, further refining it. Because the fine positioning is carried out on the basis of the coarse positioning, and the coarse positioning yields little feature information, the fine-positioning error increases, the positioning precision of the feature information of the target object decreases, and device resources are wasted.
Summary of the invention
Embodiments of the present invention provide a method and apparatus for positioning feature information of a target object, to solve the prior-art problems of reduced positioning precision of the feature information of a target object and the resulting waste of device resources.
A method for positioning feature information of a target object comprises:
dividing an image frame containing a target object, and determining a target object region in the image frame;
scanning the determined target object region with a feature information classifier, and determining the center point position information of the region occupied by the feature information to be positioned within the target object region;
performing an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, and performing iterative processing on the obtained initial position information to obtain position information corresponding to that region;
integrating the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region, to obtain the feature information of the target object.
An apparatus for positioning feature information of a target object comprises:
a dividing module, configured to divide an image frame containing a target object and determine a target object region in the image frame;
a center point position determination module, configured to scan the determined target object region with a feature information classifier and determine the center point position information of the region occupied by the feature information to be positioned within the target object region;
a region position determination module, configured to perform an affine transformation operation on the determined center point position information, obtain initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, and perform iterative processing on the obtained initial position information to obtain position information corresponding to that region;
a feature information integration module, configured to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region, to obtain the feature information of the target object.
The beneficial effects of the present invention are as follows:
The embodiments of the present invention divide an image frame containing a target object and determine a target object region in the frame; scan the determined target object region with a feature information classifier and determine the center point position information of the region occupied by the feature information to be positioned; and perform an affine transformation operation on the determined center point position information to obtain initial position information corresponding to that region, which improves the precision of the initial positioning and lays the groundwork for subsequent fine positioning. The obtained initial position information is then processed iteratively to obtain the position information corresponding to the region occupied by the feature information to be positioned, and the obtained position information is integrated to obtain the feature information of the target object, which further improves the accuracy with which the feature information of the target object is positioned.
Brief description of the drawings
Fig. 1 is a flow diagram of a method for positioning feature information of a target object according to Embodiment One of the present invention;
Fig. 2 is a diagram of the similarity between eyes and eyebrows;
Fig. 3 is a diagram of the similarity between an open mouth and a closed mouth;
Fig. 4 is a flow diagram of a method for positioning feature information of a target object according to Embodiment Two of the present invention;
Fig. 5 is a structural diagram of an apparatus for positioning feature information of a target object according to Embodiment Three of the present invention.
Detailed description of the embodiments
To achieve the objects of the present invention, the embodiments of the present invention provide a method and apparatus for positioning feature information of a target object. An image frame containing a target object is divided and a target object region in the frame is determined; the determined target object region is scanned with a feature information classifier to determine the center point position information of the region occupied by the feature information to be positioned; an affine transformation operation is performed on the determined center point position information to obtain initial position information corresponding to that region, which improves the precision of the initial positioning and lays the groundwork for subsequent fine positioning; the obtained initial position information is processed iteratively to obtain the position information corresponding to the region occupied by the feature information to be positioned; and the obtained position information is integrated to obtain the feature information of the target object, which further improves the accuracy with which the feature information of the target object is positioned.
The embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment One:
As shown in Fig. 1, which is a flow diagram of a method for positioning feature information of a target object according to Embodiment One of the present invention, the method comprises:
Step 101: divide an image frame containing a target object, and determine the target object regions in the image frame.
Each target object region contains one target object.
Specifically, in step 101, dividing the image frame containing a target object comprises:
First, computing on the received image frame with the adaboost algorithm, to obtain the region information of the region occupied by each target object in the image frame.
The region information comprises the position information and the size information of the region occupied by the target object.
Suppose the target object to be positioned is human face information and the received image frame is video image information. The adaboost algorithm is used to detect the received video frame image and obtain the region information of the faces in the video image frame: faceRects = {faceRect_1, faceRect_2, …, faceRect_i, …, faceRect_n}, where faceRect_i = {x_i, y_i, width_i, height_i}; {x_i, y_i} is the position information of the region of the i-th face, and {width_i, height_i} is the size information of the region occupied by the i-th face.
Second, using the position information and the size information of the region occupied by the target object, compute the center point coordinate information of the region occupied by the target object.
Continuing the example above, from the {x_i, y_i} and {width_i, height_i} of the region occupied by the i-th target object, the center point coordinate of faceRect_i is computed as (x_i + width_i/2, y_i + height_i/2).
Third, compute the Euclidean distance between the computed center point coordinate of the region occupied by the target object and the stored center point coordinate of the region occupied by a target object whose feature information has been located, and compare the computed distance value with a set value.
The stored center point coordinate of the located target object region is obtained when positioning the feature information of the target object in the image frame preceding the received image frame.
Specifically, continuing the example above, the stored feature information of the located face region is feature = (x_1, y_1, x_2, y_2, …, x_m, y_m), and the center point coordinate of the located target object region is computed from it as (x′, y′) (for example, as the mean of the stored feature coordinates); the Euclidean distance between (x′, y′) and the center point computed above is then compared with the set value.
Fourth, according to the comparison result, determine whether the target object region determined in the received image frame is a region whose feature information has not been located or a region whose feature information has been located.
Specifically, when the computed distance value is greater than the set value, the target object region determined in the received image frame is determined to be a region whose feature information has not been located;
When the computed distance value is not greater than the set value, the target object region determined in the received image frame is determined to be a region whose feature information has been located.
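The center-point computation and distance gating described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the rectangle-center formula and the helper names are assumptions, and `d` stands for the set value discussed later in this embodiment.

```python
import math

def rect_center(x, y, width, height):
    # Center of a detected region {x, y, width, height}, taken here
    # as the geometric center of the rectangle (an assumption).
    return (x + width / 2.0, y + height / 2.0)

def already_located(center, stored_center, d):
    # A region counts as already located when the Euclidean distance
    # to a stored center is not greater than the set value d.
    dx = center[0] - stored_center[0]
    dy = center[1] - stored_center[1]
    return math.hypot(dx, dy) <= d

# A face rect detected in the current frame:
c = rect_center(100, 80, 40, 40)                   # -> (120.0, 100.0)
print(already_located(c, (118.0, 101.0), d=10.0))  # True: feature info already located
print(already_located(c, (200.0, 250.0), d=10.0))  # False: locate anew
```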
More preferably, after the target object region determined in the received image frame is determined to be a region whose feature information has been located, the method further comprises:
First, scaling the determined target object region whose feature information has been located to size N*N, and scaling the stored region occupied by the target object whose feature information has been located to size N*N.
Here N is a natural number.
This ensures that the subsequent matching operation between the determined region and the stored region is carried out at the same pixel size, improving the precision of the matching operation.
Second, compute the similarity T between the determined target object region and the stored target object region.
Specifically, the similarity T between the determined target object region and the stored target object region is computed in the following manner:
T = Σ_{i=1}^{N} Σ_{j=1}^{N} f(x_i, y_j), where f(x_i, y_j) = 1 if |p(x_i, y_j) − q(x_i, y_j)| ≤ ε, and f(x_i, y_j) = 0 otherwise;
here p(x, y) and q(x, y) are respectively the stored target object region and the determined target object region, and ε is a constant with range [0, 255]: the larger the value of ε, the larger the number s of similar pixels in the two regions.
Third, judge whether the computed similarity reaches the set similarity value; if it does, the determined target object region and the stored target object region are recognized as the same target object region;
Otherwise, the determined target object region is recognized as a region whose feature information needs to be relocated, i.e. it is treated as a region whose feature information has not yet been located.
The set similarity value may be determined, without limitation, as θ·N², where N² is the image size (the number of pixels) and θ is a similarity coefficient with range [0, 1]: when θ is 0 the threshold is met by every comparison, and when θ is 1 only a perfect match meets it.
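The pixel-count similarity and the θ·N² decision can be sketched as below. This is a hedged reading of the formula above: f(x_i, y_j) is interpreted as the per-pixel indicator |p − q| ≤ ε, which the source implies but does not state verbatim, and the function names are illustrative.

```python
def similarity(p, q, eps):
    # T: number of pixel positions at which the stored region p and the
    # determined region q (both N x N gray-level grids) differ by at most eps.
    n = len(p)
    return sum(1 for i in range(n) for j in range(n)
               if abs(p[i][j] - q[i][j]) <= eps)

def same_region(p, q, eps, theta):
    # Judged the same target object region when T reaches theta * N^2.
    n = len(p)
    return similarity(p, q, eps) >= theta * n * n

stored  = [[10, 12], [200, 60]]
current = [[11, 12], [90, 61]]
print(similarity(stored, current, eps=5))          # 3 of 4 pixels similar
print(same_region(stored, current, 5, theta=0.7))  # True (3 >= 0.7 * 4)
```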
More preferably, it should be noted that the method of determining the set value (the distance threshold used above) includes, but is not limited to:
d = (W/2) · 2^(Level−1);
where W is the window size of the target object in the LK algorithm, and Level is the number of times the window of the target object is scaled.
It can be seen that if the value of d is too large, the feature information of different target objects may be matched to the same target object region; if it is too small, the feature information of a target object may fail to match its own target object region. That is, the value of d must ensure that the feature information of a given target object is matched uniquely to its target object region.
Step 102: scan the determined target object region with the feature information classifier, and determine the center point position information of the region occupied by the feature information to be positioned within the target object region.
Specifically, in step 102: first, after a target object region whose feature information has not been located is determined, the search range for the feature information of the target object must also be determined. If the search range is too small, the target object may go undetected; if it is too large, the computational load of the system increases and the false-detection rate is relatively high. The search range is therefore determined statistically, where W and H are respectively the width and height of the region occupied by the target object, as shown in Table (1):
Feature          Start x      Start y   Width   Height
Feature 1        W/8          H/8       W/2     H/2
Feature 2        W/2 − W/8    H/8       W/2     H/2
Feature 3        W/4          H/2       W/2     H/2
Table (1)
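Table (1) can be read as a function of the region size. The sketch below assumes, based on the face example that follows, that features 1–3 correspond to the left eye, right eye and mouth; that mapping is not stated in the source.

```python
def search_ranges(W, H):
    # Search windows (start_x, start_y, width, height) inside a
    # W x H target object region, per Table (1). The mapping of
    # feature 1/2/3 to left eye / right eye / mouth is an assumption.
    return {
        "feature 1 (left eye?)":  (W / 8,         H / 8, W / 2, H / 2),
        "feature 2 (right eye?)": (W / 2 - W / 8, H / 8, W / 2, H / 2),
        "feature 3 (mouth?)":     (W / 4,         H / 2, W / 2, H / 2),
    }

for name, box in search_ranges(80, 80).items():
    print(name, box)
```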
Second, for each piece of feature information, search for the feature information of the target object within the determined search range using the feature information classifier.
Specifically, take the target object to be a human face, so that the feature information comprises the eyebrows, eyes, nose and mouth. A large amount of practical data shows that when cropping eye samples, closed eyes are quite similar to thin eyebrows, as shown in Fig. 2. If the eye classifier cropped only the eyes as training samples, thin eyebrows would inevitably be detected as eyes, and eyes are frequently closed during image acquisition. Therefore eye samples should be cropped together with the eyebrows; this improves the precision of eye positioning and aids subsequent processing.
For the same reason, when the mouth classifier crops mouth samples, the mouth varies greatly, especially between open and closed, as shown in Fig. 3, which reduces the precision of mouth positioning. The nose, however, varies little relative to the mouth; the mouth and nose are therefore cropped together, which improves the detection precision of the mouth.
Taking the left eye as the feature information for example: scan the region occupied by the left eye and left eyebrow, and determine the center point centerPoint, maximum point maxPoint and minimum point minPoint of the left eye and left eyebrow. The left-eye crop is then taken centered on centerPoint, with (maxPoint − minPoint) as its width and height, which guarantees that both the eyebrow and the eye are cropped out.
The right eye and right eyebrow are cropped in the same manner.
Mouth cropping is similar to eye cropping: compute the coordinate points of the nose and the whole mouth, obtain the crop's center point centerPoint, maximum point maxPoint and minimum point minPoint, and crop centered on centerPoint with (maxPoint − minPoint) as width and height, which guarantees that both the mouth and the nose are cropped out.
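The crop described above can be sketched as follows. centerPoint is taken here as the midpoint of the extreme coordinates, which is one plausible reading of the text (the source does not define centerPoint precisely); the function name is illustrative.

```python
def crop_box(points):
    # points: landmark coordinates covering, e.g., left eye + left eyebrow.
    # Returns (left, top, width, height): a box centered on the midpoint
    # of the extremes, with (maxPoint - minPoint) as width and height,
    # so the whole point set is guaranteed to lie inside the crop.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = max(xs) - min(xs)
    h = max(ys) - min(ys)
    cx = (max(xs) + min(xs)) / 2.0
    cy = (max(ys) + min(ys)) / 2.0
    return (cx - w / 2.0, cy - h / 2.0, w, h)

print(crop_box([(0, 0), (4, 1), (10, 4)]))  # (0.0, 0.0, 10, 4)
```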
The cropped samples are uniformly scaled to size 20*20, and the most basic Haar features are adopted, 78460 features in total; training yields a left-eye classifier, a right-eye classifier and a mouth classifier. The trained classifiers are then used to search within the determined ranges to obtain the positions of the left eye, right eye and mouth. Because the training samples were cropped with the eyes and eyebrows together, the determined center point positions of the eyes and mouth are not the true centers of the eyes and mouth, but the center positions of the eye-plus-eyebrow and mouth-plus-nose regions.
Step 103: perform an affine transformation operation on the determined center point position information to obtain the initial position information corresponding to the region occupied by the feature information to be positioned within the target object region, and perform iterative processing on the obtained initial position information to obtain the position information corresponding to that region.
Specifically, in step 103, the ASM training algorithm is used to perform the affine transformation operation on the determined center point position information. In particular, the ASM training algorithm yields the mean shape of the region occupied by the feature information to be positioned in the target object, and a further affine transformation (comprising rotation, scaling and translation) is computed to obtain the initial position information of the region occupied by the feature information to be positioned in this target object.
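The affine initialisation (rotating, scaling and translating the ASM mean shape onto a detected center) can be sketched as below. This is an illustrative sketch, not the patent's implementation: the function name and the assumption that the mean shape is centered on the origin are mine.

```python
import math

def place_mean_shape(mean_shape, center, scale, angle):
    # Rotate by angle, scale by scale, then translate to the detected
    # center point: the affine transform used to initialise the shape.
    c, s = math.cos(angle), math.sin(angle)
    return [(center[0] + scale * (c * x - s * y),
             center[1] + scale * (s * x + c * y))
            for x, y in mean_shape]

# A two-point mean shape placed at a detected center (50, 40):
print(place_mean_shape([(-1.0, 0.0), (1.0, 0.0)], (50.0, 40.0), 2.0, 0.0))
# -> [(48.0, 40.0), (52.0, 40.0)]
```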
The obtained initial position information is then processed iteratively to obtain the position information corresponding to the region occupied by the feature information to be positioned within the target object region.
More preferably, for a target object region determined in step 101 whose feature information has been located: because the texture of feature information differs between video image frames, the region occupied by the feature information of the same target object differs between frames. The optical flow values of each piece of located feature information in the adjacent previous video image frame can therefore be used to compute the optical flow values of each piece of feature information in the located target object region of the received image frame.
Specifically, the computation uses the following LK optical flow algorithm:
[[I_{x1}, I_{y1}], [I_{x2}, I_{y2}], …] · (u, v)^T = −(I_{t1}, I_{t2}, …)^T; letting A = [[I_{x1}, I_{y1}], [I_{x2}, I_{y2}], …] and b = −(I_{t1}, I_{t2}, …)^T, the solution is (u, v)^T = (A^T A)^{−1} A^T b,
where I_{x1} and I_{y1} denote the gradient in the x direction and the gradient in the y direction at image point (x_1, y_1) in the target object region of the current image frame; I_t denotes the gradient in the time direction of the current image frame, i.e. the difference between successive frames at the corresponding image point; u and v are the optical flow values to be solved for each piece of feature information, u being the optical flow velocity in the horizontal direction and v the optical flow velocity in the vertical direction.
After the optical flow value of each piece of feature information is obtained, the position information of the obtained feature information is adjusted by voting.
Specifically, the optical flow value computed at a single point suffers from problems such as precision error, ill-conditioning and instability. These problems can be avoided by also computing over the pixels surrounding the feature point: after the optical flow values of the feature point and its neighboring pixels are computed, the angle and length that receive the most votes are selected as the position information of the feature information.
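The closed-form solution (A^T A)^{−1} A^T b over a window of gradients can be sketched as follows. This is a sketch of the LK normal equations as written above, not the patent's exact implementation; the input format is an assumption.

```python
def lk_flow(gradients):
    # gradients: (Ix, Iy, It) at each pixel of the window.
    # Solves A (u, v)^T = b with b = -(It...), via the 2x2 normal equations.
    sxx = sxy = syy = sxt = syt = 0.0
    for ix, iy, it in gradients:
        sxx += ix * ix
        sxy += ix * iy
        syy += iy * iy
        sxt += ix * it
        syt += iy * it
    det = sxx * syy - sxy * sxy
    if det == 0.0:
        raise ValueError("A^T A singular: flow undefined at this point")
    u = (-syy * sxt + sxy * syt) / det   # horizontal flow velocity
    v = (sxy * sxt - sxx * syt) / det    # vertical flow velocity
    return u, v

# Gradients consistent with a true flow of (u, v) = (2, 1),
# since Ix*u + Iy*v = -It holds at every pixel:
print(lk_flow([(1, 0, -2), (0, 1, -1), (1, 1, -3)]))  # (2.0, 1.0)
```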
Step 104: integrate the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region, to obtain the feature information of the target object.
Specifically, in step 104, integrating the obtained position information corresponding to the region occupied by the feature information to be positioned within the target object region comprises:
First, determining the shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned.
For example: shape = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}, where {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)} is the obtained position information.
shape = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)} may also be the original shape information of the feature information.
Second, according to the preset correspondence between the shape information of feature information and shape size information, determining the shape size information corresponding to the obtained shape information of the target object region.
Specifically, the correspondence between the shape information shape of the feature information and the shape size information S is established in the following manner:
S(shape) = Σ_{i=1}^{n} √((x_i − x′)² + (y_i − y′)²);
where shape = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)} is the obtained position information, x′ = (Σ_{i=1}^{n} x_i)/n and y′ = (Σ_{i=1}^{n} y_i)/n.
It should be noted that the size information of the original shape can be obtained from the original shape information in the same way.
Third, judging whether the shape size information corresponding to the obtained shape information of the target object region satisfies the set shape-size condition. If it does, the obtained position information corresponding to the region occupied by the feature information to be positioned is integrated, and the feature information of the target object is determined;
If it does not, the obtained position information corresponding to the region occupied by the feature information to be positioned is adjusted, and the operation of determining the shape information of the target object region is continued with the adjusted position information.
Specifically, the set shape-size condition is that the obtained shape size information falls within the range [original shape size × constraint factor] to [original shape size × (2 − constraint factor)];
that is, S ∈ [S_0·θ, S_0·(2−θ)], where the constraint factor θ = 0.9.
More preferably, if the obtained shape size information does not satisfy the set shape-size condition, adjusting the obtained position information specifically comprises:
First, when the obtained shape size is less than [original shape size × constraint factor], multiplying each obtained positional coordinate by the quotient of [original shape size × constraint factor] and the obtained shape size.
For example: when S_i is less than S_0·θ, the shape of the current feature information is too small, and Scale(shape_i, S_0·θ/S_i) is called to constrain the shape size.
Here Scale(shape, s) = {(x_i·s, y_i·s)}_{i=1}^{n}, with s = S_0·θ/S_i.
Secondly, when the obtained shape size is greater than [original shape size * (2 - constraint factor)], scaling each obtained position coordinate by the quotient of [original shape size * (2 - constraint factor)] and the obtained shape size.
For example, when S_i is greater than S_0 * (2 - θ), the shape of the current feature information is too large, and Scale(shape_i, S_0 * (2 - θ) / S_i) is called to constrain the shape size.
Where Scale(shape, s) = {(x_1 * s, y_1 * s), ..., (x_n * s, y_n * s)}, with s = S_0 * (2 - θ) / S_i.
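For illustration only, the shape-size constraint described above can be sketched in Python; the function names shape_size, scale and constrain_shape are illustrative rather than from the patent, and θ = 0.9 follows the stated constraint factor:

```python
import math

THETA = 0.9  # constraint factor theta stated in the text

def shape_size(shape):
    """S(shape): sum of the distances of all points to the shape centroid."""
    n = len(shape)
    cx = sum(x for x, _ in shape) / n
    cy = sum(y for _, y in shape) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in shape)

def scale(shape, s):
    """Scale(shape, s): multiply every coordinate by the factor s."""
    return [(x * s, y * s) for x, y in shape]

def constrain_shape(shape, s0):
    """Rescale the shape when its size S falls outside [s0*theta, s0*(2-theta)]."""
    s = shape_size(shape)
    if s < s0 * THETA:            # shape too small
        return scale(shape, s0 * THETA / s)
    if s > s0 * (2 - THETA):      # shape too large
        return scale(shape, s0 * (2 - THETA) / s)
    return shape
```

Because scaling every coordinate by s also scales the centroid, the shape size S scales linearly, so a rescaled shape lands exactly on the violated bound.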
According to the scheme of Embodiment 1 of the present invention, the image frame containing the target object is divided to determine the target object region in the image frame; the determined target object region is scanned with a feature information classifier to determine the center point position information of the region occupied by the feature information to be positioned in the target object region; an affine transformation operation is performed on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned, which improves the precision of the initial positioning and lays the groundwork for the subsequent fine positioning; the obtained initial position information is then iteratively processed to obtain the position information corresponding to the region occupied by the feature information to be positioned in the target object region, and the obtained position information is integrated to obtain the feature information of the target object, which further improves the accuracy with which the feature information of the target object is positioned.
Embodiment 2:
As shown in Figure 4, which is a schematic flowchart of a method for positioning the feature information of a target object according to Embodiment 2 of the present invention, Embodiment 2 follows the same concept as the positioning method of Embodiment 1 and specifically comprises:
Step 201: dividing the image frame containing the target object, determining the target object region in the image frame, and judging the determined target object region: if the feature information of the target object region has already been positioned, performing step 202; otherwise, performing step 206.
Each target object region contains one target object.
Specifically, in step 201, dividing the image frame containing the target object comprises:
First, calculating the received image frame with the adaboost algorithm to obtain the area information of the region occupied by each target object in the image frame.
The area information comprises the position information and the size information of the region occupied by the target object.
Suppose the target object to be positioned is a human face and the received image frame is a video frame image. The received video frame image is detected with the adaboost algorithm to obtain the area information of the faces contained in the video frame image: faceRects = {faceRect_1, faceRect_2, ..., faceRect_i, ..., faceRect_n}, where faceRect_i = {x_i, y_i, width_i, height_i}, in which {x_i, y_i} is the position information of the area information of the i-th face and {width_i, height_i} is the size information of the region occupied by the i-th face.
Secondly, calculating the center point coordinate information of the region occupied by the target object from the position information and the size information of that region.
Continuing the above example, from {x_i, y_i} and {width_i, height_i} of the region occupied by the i-th target object, the center point coordinate of faceRect_i is calculated as (x_i + width_i / 2, y_i + height_i / 2).
Thirdly, calculating the Euclidean distance between the computed center point coordinate and the stored center point coordinate of a target object region whose feature information has been positioned, and comparing the resulting distance with a set value.
The stored center point coordinate is obtained when the feature information of the target object was positioned in the frame preceding the received image frame.
Specifically, continuing the above example, the stored feature information of a positioned face region is feature = (x_1, y_1, x_2, y_2, ..., x_m, y_m), and the center point coordinate of that region is (x', y'), with x' = (x_1 + ... + x_m) / m and y' = (y_1 + ... + y_m) / m.
Fourthly, determining from the comparison result whether the target object region divided from the received image frame is one whose feature information has not yet been positioned or one whose feature information has been positioned.
Specifically, when the calculated distance is greater than the set value, the target object region divided from the received image frame is determined to be one whose feature information has not been positioned;
When the calculated distance is not greater than the set value, the target object region divided from the received image frame is determined to be one whose feature information has been positioned.
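The detection-versus-tracking decision of step 201 can be sketched as follows. The rectangle layout (x, y, width, height) follows the text; the function names and the threshold value are illustrative assumptions:

```python
import math

def rect_center(rect):
    """Center of a detected rectangle (x, y, width, height)."""
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

def feature_center(points):
    """Centroid (x', y') of a stored feature point set [(x1, y1), ...]."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def already_positioned(rect, stored_points, threshold):
    """True when the Euclidean distance between the detected region's center
    and the stored region's center does not exceed the set value."""
    cx, cy = rect_center(rect)
    sx, sy = feature_center(stored_points)
    return math.hypot(cx - sx, cy - sy) <= threshold
```

A region that passes this gate goes down the tracking path (steps 202-205); one that fails it is handed to the classifier in step 206.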
Step 202: scaling the determined target object region whose feature information has been positioned to size N*N, and scaling the stored target object region whose feature information has been positioned to size N*N.
Step 203: calculating the similarity T between the determined target object region and the stored target object region.
Specifically, the similarity T between the determined target object region and the stored target object region is calculated as follows:
T = Σ_{i=1}^{N} Σ_{j=1}^{N} f(x_i, y_j);
Where p(x, y) and q(x, y) are the stored target object region and the determined target object region respectively, f(x_i, y_j) takes the value 0 when |p(x_i, y_j) - q(x_i, y_j)| ≤ ε and 1 otherwise, and ε is a constant in the range [0, 255]; the larger ε is, the larger the number s of similar pixels in the two regions.
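Under the reading that T counts the pixels whose grey-level difference exceeds ε (so a small T means the two N*N regions match, consistent with the judgement in step 204), the comparison can be sketched as follows; plain nested lists stand in for the image data:

```python
def region_difference(p, q, eps):
    """p, q: N*N grey-level grids (stored and determined region).
    Returns T, the count of pixel positions where |p - q| exceeds eps;
    the remaining N*N - T pixels are the 'similar' pixels s."""
    n = len(p)
    return sum(
        1
        for i in range(n)
        for j in range(n)
        if abs(p[i][j] - q[i][j]) > eps
    )
```

A larger ε makes more pixel pairs count as similar, exactly as the text states.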
Step 204: judging whether the calculated similarity is smaller than the set similarity value; if so, recognizing the determined target object region and the stored target object region as the same target object region; otherwise, recognizing the determined target object region as one whose feature information needs to be repositioned, and continuing with step 206.
Step 205: calculating the optical flow value of each piece of feature information in the positioned target object region of the received image frame from the optical flow values of the feature information positioned in the adjacent previous video image frame.
Specifically, the calculation uses the following LK optical flow formulation:
[I_x1 I_y1; I_x2 I_y2; ...] [u; v] = -[I_t1; I_t2; ...]; letting A = [I_x1 I_y1; I_x2 I_y2; ...] and b = -[I_t1; I_t2; ...], we get [u; v] = (A^T A)^(-1) A^T b;
Where I_x1 and I_y1 are the x-direction and y-direction gradients at the image point (x_1, y_1) of the target object region in the current image frame, I_t is the time-direction gradient of the current image frame, i.e. the difference at the corresponding image point between successive frames, and u and v are the optical flow values to be solved, u being the optical flow speed in the horizontal direction and v the optical flow speed in the vertical direction.
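The least-squares solve above can be sketched directly with numpy; the function name lk_flow is illustrative, and the per-pixel gradients Ix, Iy, It are assumed to be given for a small neighbourhood:

```python
import numpy as np

def lk_flow(ix, iy, it):
    """Solve [Ix Iy][u v]^T = -It in the least-squares sense:
    (u, v)^T = (A^T A)^-1 A^T b, as in the LK formulation above."""
    a = np.column_stack([ix, iy])      # A: one row (Ix, Iy) per pixel
    b = -np.asarray(it, dtype=float)   # b = -[It1, It2, ...]
    uv = np.linalg.inv(a.T @ a) @ (a.T @ b)
    return uv[0], uv[1]
```

The matrix A^T A is only invertible when the neighbourhood's gradients are not all parallel, which is one reason the text goes on to aggregate flow over surrounding pixels.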
More preferably, after the optical flow value of each piece of feature information is obtained, the position information of the obtained feature information is adjusted by voting.
Specifically, the optical flow value calculated at a single point suffers from problems such as precision error, ill-conditioning and instability; these problems can be avoided by also computing the surrounding pixels of the feature point. After the optical flow values of the feature point and its neighbouring pixels are calculated, the angle and length that receive the most votes are selected as the position information of the feature information.
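The voting adjustment can be sketched as follows; the bin widths for angle and length are illustrative assumptions, since the text does not specify how the votes are quantised:

```python
import math
from collections import Counter

def vote_flow(flows, angle_bin=10.0, length_bin=0.5):
    """flows: list of (u, v) flow values for a feature point and its
    surrounding pixels. Returns the flow vector whose quantised
    (angle, length) bin received the most votes."""
    def key(uv):
        u, v = uv
        ang = math.degrees(math.atan2(v, u)) % 360.0
        ln = math.hypot(u, v)
        return (round(ang / angle_bin), round(ln / length_bin))

    votes = Counter(key(f) for f in flows)
    best = votes.most_common(1)[0][0]
    # return the first flow vector falling into the winning bin
    return next(f for f in flows if key(f) == best)
```

A single outlier flow value is thus outvoted by the consistent flow of the neighbourhood.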
Step 206: scanning the target object region whose feature information has not been positioned with the feature information classifier, and determining the center point position information of the region occupied by the feature information to be positioned in this target object region.
Step 207: performing an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned in this target object region.
Step 208: performing iterative processing on the obtained initial position information to obtain the position information corresponding to the region occupied by the feature information to be positioned in the target object region.
Step 209: integrating the obtained position information to obtain the feature information of this target object.
Embodiment 3:
As shown in Figure 5, which is a schematic structural diagram of a device for positioning the feature information of a target object according to Embodiment 3 of the present invention, the device comprises a division module 11, a center point position information determination module 12, a region position information determination module 13 and a feature information integration module 14, wherein:
The division module 11 is configured to divide the image frame containing the target object and determine the target object region in the image frame;
The center point position information determination module 12 is configured to scan the determined target object region with a feature information classifier and determine the center point position information of the region occupied by the feature information to be positioned in the target object region;
The region position information determination module 13 is configured to perform an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned in the target object region, and to perform iterative processing on the obtained initial position information to obtain the position information corresponding to that region;
The feature information integration module 14 is configured to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to obtain the feature information of the target object.
Specifically, the division module 11 is configured to calculate the received image frame with the adaboost algorithm to obtain the area information of the region occupied by each target object in the image frame, the area information comprising the position information and the size information of the region occupied by the target object;
To calculate the center point coordinate information of the region occupied by the target object from the position information and the size information of that region;
To calculate the Euclidean distance between the computed center point coordinate and the stored center point coordinate of a target object region whose feature information has been positioned, and to compare the resulting distance with a set value, the stored center point coordinate being obtained from the feature information positioning result of the previous image frame;
To determine, when the calculated distance is greater than the set value, that the target object region determined in the received image frame is one whose feature information has not been positioned;
And to determine, when the calculated distance is not greater than the set value, that the target object region determined in the received image frame is one whose feature information has been positioned.
Specifically, the feature information integration module 14 is configured to determine the shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region;
To obtain the shape size information corresponding to the determined shape information of the target object region according to the preset correspondence between the shape information of the feature information and the shape size information;
To judge whether the obtained shape size information meets the set shape size condition and, when it does, to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region and determine the feature information of the target object;
And, when the obtained shape size information does not meet the set shape size condition, to adjust the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region, and to repeat the operation of determining the shape information of the target object region with the adjusted position information.
The feature information integration module 14 is specifically configured to establish the correspondence between the shape information shape of the feature information and the shape size information S as follows:
S(shape) = Σ_{i=1}^{n} sqrt((x_i - x')^2 + (y_i - y')^2);
Where shape = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} is the obtained position information, x' = (Σ_{i=1}^{n} x_i) / n, and y' = (Σ_{i=1}^{n} y_i) / n.
More preferably, the set shape size condition is S ∈ [S_0 * θ, S_0 * (2 - θ)], where the constraint factor θ = 0.9;
The feature information integration module 14 is specifically configured to scale each obtained position coordinate by S_0 * θ / S when the obtained S is smaller than S_0 * θ;
And to scale each obtained position coordinate by S_0 * (2 - θ) / S when the obtained S is greater than S_0 * (2 - θ).
More preferably, the device further comprises a region matching module 15, wherein:
The region matching module 15 is configured to, after it is determined that the target object region determined in the received image frame is one whose feature information has been positioned, scale the determined target object region to size N*N and scale the stored target object region whose feature information has been positioned to size N*N; to calculate the similarity T between the determined target object region and the stored target object region; to judge whether the calculated similarity is smaller than the set similarity value and, if so, to recognize the determined target object region and the stored target object region as the same target object region;
Otherwise, to recognize the determined target object region as one whose feature information needs to be repositioned, where N is a natural number.
Those skilled in the art will understand that embodiments of the present invention may be provided as a method, a device (apparatus) or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (apparatus) and the computer program product according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operating steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once they have learned the basic inventive concept, may make other changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art may make various changes and modifications to the present invention without departing from its spirit and scope. If such changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (11)

1. A method for positioning the feature information of a target object, characterized by comprising:
Dividing an image frame containing the target object and determining the target object region in the image frame;
Scanning the determined target object region with a feature information classifier and determining the center point position information of the region occupied by the feature information to be positioned in the target object region;
Performing an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned in the target object region, and performing iterative processing on the obtained initial position information to obtain the position information corresponding to that region;
Integrating the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to obtain the feature information of the target object.
2. The method of claim 1, characterized in that dividing the image frame containing the target object and determining the target object region specifically comprises:
Calculating the received image frame with the adaboost algorithm to obtain the area information of the region occupied by each target object in the image frame, the area information comprising the position information and the size information of the region occupied by the target object;
Calculating the center point coordinate information of the region occupied by the target object from the position information and the size information of that region;
Calculating the Euclidean distance between the computed center point coordinate and the stored center point coordinate of a target object region whose feature information has been positioned, and comparing the resulting distance with a set value, the stored center point coordinate being obtained from the feature information positioning result of the previous image frame;
Determining, when the calculated distance is greater than the set value, that the target object region determined in the received image frame is one whose feature information has not been positioned;
Determining, when the calculated distance is not greater than the set value, that the target object region determined in the received image frame is one whose feature information has been positioned.
3. The method of claim 1, characterized in that integrating the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to obtain the feature information of the target object specifically comprises:
Determining the shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region;
Obtaining the shape size information corresponding to the determined shape information of the target object region according to the preset correspondence between the shape information of the feature information and the shape size information;
Judging whether the obtained shape size information meets the set shape size condition and, when it does, integrating the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region and determining the feature information of the target object;
When the obtained shape size information does not meet the set shape size condition, adjusting the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region, and repeating the operation of determining the shape information of the target object region with the adjusted position information.
4. The method of claim 3, characterized in that the preset correspondence between the shape information of the feature information and the shape size information specifically comprises:
Establishing the correspondence between the shape information shape of the feature information and the shape size information S as follows:
S(shape) = Σ_{i=1}^{n} sqrt((x_i - x')^2 + (y_i - y')^2);
Where shape = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} is the obtained position information, x' = (Σ_{i=1}^{n} x_i) / n, and y' = (Σ_{i=1}^{n} y_i) / n.
5. The method of claim 4, characterized in that the set shape size condition is S ∈ [S_0 * θ, S_0 * (2 - θ)], where the constraint factor θ = 0.9;
Adjusting the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region when the obtained shape size information does not meet the set shape size condition specifically comprises:
Scaling each obtained position coordinate by S_0 * θ / S when the obtained S is smaller than S_0 * θ;
Scaling each obtained position coordinate by S_0 * (2 - θ) / S when the obtained S is greater than S_0 * (2 - θ).
6. The method of claim 2, characterized in that, after it is determined that the target object region determined in the received image frame is one whose feature information has been positioned, the method further comprises:
Scaling the determined target object region whose feature information has been positioned to size N*N, and scaling the stored target object region whose feature information has been positioned to size N*N, where N is a natural number;
Calculating the similarity T between the determined target object region and the stored target object region;
Judging whether the calculated similarity is smaller than the set similarity value and, if so, recognizing the determined target object region and the stored target object region as the same target object region;
Otherwise, recognizing the determined target object region as one whose feature information needs to be repositioned.
7. A device for positioning the feature information of a target object, characterized by comprising:
A division module, configured to divide an image frame containing the target object and determine the target object region in the image frame;
A center point position information determination module, configured to scan the determined target object region with a feature information classifier and determine the center point position information of the region occupied by the feature information to be positioned in the target object region;
A region position information determination module, configured to perform an affine transformation operation on the determined center point position information to obtain initial position information corresponding to the region occupied by the feature information to be positioned in the target object region, and to perform iterative processing on the obtained initial position information to obtain the position information corresponding to that region;
A feature information integration module, configured to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region to obtain the feature information of the target object.
8. The device of claim 7, characterized in that:
The division module is specifically configured to calculate the received image frame with the adaboost algorithm to obtain the area information of the region occupied by each target object in the image frame, the area information comprising the position information and the size information of the region occupied by the target object;
To calculate the center point coordinate information of the region occupied by the target object from the position information and the size information of that region;
To calculate the Euclidean distance between the computed center point coordinate and the stored center point coordinate of a target object region whose feature information has been positioned, and to compare the resulting distance with a set value, the stored center point coordinate being obtained from the feature information positioning result of the previous image frame;
To determine, when the calculated distance is greater than the set value, that the target object region determined in the received image frame is one whose feature information has not been positioned;
And to determine, when the calculated distance is not greater than the set value, that the target object region determined in the received image frame is one whose feature information has been positioned.
9. The device of claim 7, characterized in that:
The feature information integration module is specifically configured to determine the shape information of the target object region according to the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region;
To obtain the shape size information corresponding to the determined shape information of the target object region according to the preset correspondence between the shape information of the feature information and the shape size information;
To judge whether the obtained shape size information meets the set shape size condition and, when it does, to integrate the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region and determine the feature information of the target object;
And, when the obtained shape size information does not meet the set shape size condition, to adjust the obtained position information corresponding to the region occupied by the feature information to be positioned in the target object region, and to repeat the operation of determining the shape information of the target object region with the adjusted position information.
10. The equipment of claim 9, wherein
the feature-information integration module is specifically configured to establish the correspondence between the shape information shape of the feature information and the shape size information S as follows:

S(\text{shape}) = \sum_{i=1}^{n} \sqrt{(x_i - x')^2 + (y_i - y')^2}

where shape = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} is the set of obtained position information, x' = \frac{1}{n}\sum_{i=1}^{n} x_i and y' = \frac{1}{n}\sum_{i=1}^{n} y_i.
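The shape-size measure S of claim 10 is the sum of Euclidean distances from each feature point to the centroid of the point set. A minimal sketch (the function name `shape_size` is an assumption for illustration):

```python
import math

def shape_size(shape):
    """S(shape): sum of Euclidean distances from each point (x_i, y_i)
    to the centroid (x', y') of the point set."""
    n = len(shape)
    cx = sum(x for x, _ in shape) / n   # x'
    cy = sum(y for _, y in shape) / n   # y'
    return sum(math.hypot(x - cx, y - cy) for x, y in shape)
```

For the corners of a 2-by-2 square the centroid is (1, 1) and every point lies at distance sqrt(2) from it, so S is 4*sqrt(2).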
11. The equipment of claim 10, wherein the set shape-size condition is S ∈ [S_0·θ, S_0·(2−θ)], with constraint factor θ = 0.9;
the feature-information integration module is specifically configured to: when the obtained S is less than S_0·θ, scale each obtained position information by a factor of S_0·θ/S;
when the obtained S is greater than S_0·(2−θ), scale each obtained position information by a factor of S_0·(2−θ)/S.
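Claims 9 through 11 together describe an adjust-until-valid step: if S falls outside [S_0·θ, S_0·(2−θ)], the point positions are rescaled so that S lands on the violated bound. A sketch under two assumptions the claims do not state: the rescaling is applied about the centroid, and `normalize_shape` and its parameters are illustrative names:

```python
import math

def shape_size(shape):
    """Sum of distances from each point to the centroid (claim 10's S)."""
    n = len(shape)
    cx = sum(x for x, _ in shape) / n
    cy = sum(y for _, y in shape) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in shape)

def normalize_shape(shape, s0, theta=0.9):
    """Rescale the point set about its centroid so that S lies within
    [s0*theta, s0*(2-theta)]."""
    s = shape_size(shape)
    low, high = s0 * theta, s0 * (2 - theta)
    if low <= s <= high:
        return shape  # already meets the set shape-size condition
    k = (low if s < low else high) / s  # S_0*theta/S or S_0*(2-theta)/S
    n = len(shape)
    cx = sum(x for x, _ in shape) / n
    cy = sum(y for _, y in shape) / n
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in shape]
```

Scaling the centroid offsets by k multiplies every point-to-centroid distance, and hence S, by exactly k, so the adjusted S equals the violated bound and the "adjust and re-check" loop of claim 9 terminates after a single pass.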
CN201310177961.5A 2013-05-13 2013-05-13 Method and device for positioning feature information of target object Active CN104156689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310177961.5A CN104156689B (en) 2013-05-13 2013-05-13 Method and device for positioning feature information of target object

Publications (2)

Publication Number Publication Date
CN104156689A true CN104156689A (en) 2014-11-19
CN104156689B CN104156689B (en) 2017-03-22

Family

ID=51882186

Country Status (1)

Country Link
CN (1) CN104156689B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105098651A (en) * 2014-12-26 2015-11-25 天津航天中为数据系统科技有限公司 Power transmission line insulator positioning method and system
CN107427270A (en) * 2015-02-23 2017-12-01 西门子保健有限责任公司 The method and system being automatically positioned for medical diagnosis device
CN108073942A (en) * 2016-11-16 2018-05-25 三星电子株式会社 The method and apparatus for performing material identification and the training for material identification
CN110110110A (en) * 2018-02-02 2019-08-09 杭州海康威视数字技术股份有限公司 One kind is to scheme to search drawing method, device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236599A (en) * 2007-12-29 2008-08-06 浙江工业大学 Human face recognition detection device based on multi- video camera information integration
CN102298778A (en) * 2003-10-30 2011-12-28 日本电气株式会社 Estimation system, estimation method, and estimation program for estimating object state
CN102855629A (en) * 2012-08-21 2013-01-02 西华大学 Method and device for positioning target object

Similar Documents

Publication Publication Date Title
US20210056293A1 (en) Face detection method
CN102375970B (en) A kind of identity identifying method based on face and authenticate device
CN106504233B (en) Unmanned plane inspection image electric power widget recognition methods and system based on Faster R-CNN
CN107292242B (en) Iris identification method and terminal
CN101142584B (en) Method for facial features detection
CN104517104A (en) Face recognition method and face recognition system based on monitoring scene
CN110084299B (en) Target detection method and device based on multi-head fusion attention
CN103455794B (en) A kind of dynamic gesture identification method based on frame integration technology
CN103218605B (en) A kind of fast human-eye positioning method based on integral projection and rim detection
WO2012101962A1 (en) State-of-posture estimation device and state-of-posture estimation method
CN103914676A (en) Method and apparatus for use in face recognition
CN101339661B (en) Real time human-machine interaction method and system based on moving detection of hand held equipment
CN103383700B (en) Based on the edge direction histogrammic image search method of difference
CN105069809A (en) Camera positioning method and system based on planar mixed marker
CN109325456A (en) Target identification method, device, target identification equipment and storage medium
CN104268932A (en) 3D facial form automatic changing method and system
CN104050448A (en) Human eye positioning method and device and human eye region positioning method and device
Huang et al. Correlation and local feature based cloud motion estimation
CN105787876A (en) Panorama video automatic stitching method based on SURF feature tracking matching
CN103886324B (en) Scale adaptive target tracking method based on log likelihood image
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN104778697A (en) Three-dimensional tracking method and system based on fast positioning of image dimension and area
CN111209811A (en) Method and system for detecting eyeball attention position in real time
CN103093226B (en) A kind of building method of the RATMIC descriptor for characteristics of image process

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant