Summary of the invention
In view of the defects of the prior art, namely the high CPU consumption and poor product extensibility of existing image tracking methods, the present invention provides an image tracking and localization method and system based on machine vision, which can locate a tracked target image and carry out tracking shooting.
The present invention is specifically realized by the following technical scheme:
An image tracking and localization method based on machine vision, the method comprising the following steps:
Step S01: performing inter-frame differencing between the gray-level image of the previous frame obtained by shooting and the gray-level image of the current frame, to obtain a frame difference image;
Step S02: performing morphological erosion on the frame difference image with a first kernel to obtain an eroded image, and performing morphological dilation on the eroded image with a second kernel to obtain a dilated frame difference image;
The kernel is a concept in morphological image processing, generally a rectangular neighborhood, for example a 3*3 neighborhood or an 8*8 rectangular neighborhood.
Step S03: detecting all outer contours in the dilated frame difference image to obtain a series of continuous contours, and taking the largest contour among them as the detected moving target;
Step S04: taking the center of the bounding rectangle of the largest contour, converting the bounding-rectangle center with a coordinate transfer matrix to obtain the required movement position, and converting the bounding-rectangle center with a zoom transfer matrix to obtain the zoom factor;
Step S05: replacing the gray-level image of the previous frame with the gray-level image of the current frame, and repeating step S01.
Further, in step S01, the inter-frame differencing is specifically as follows:
I_d(x, y) = 255, if abs(I_p(x, y) - I_c(x, y)) > thr; otherwise I_d(x, y) = 0   (1)
wherein I_d(x, y) is the frame difference image; thr is the differential threshold, and abs denotes taking the absolute value; I_p(x, y) is the gray-level image of the previous frame; I_c(x, y) is the gray-level image of the current frame. The differential threshold is used to control the sensitivity of the algorithm.
Further, in step S02, the second kernel is larger than the first kernel; that is, the kernels are tuned against the actual scene, and the kernel sizes are chosen according to the practical debugging effect.
The morphological erosion is specifically as follows:
I_dc(x, y) = min{ I_d(x + x', y + y') : (x', y') in e_c }   (2)
The morphological dilation is specifically as follows:
I_dd(x, y) = max{ I_dc(x + x', y + y') : (x', y') in e_d }   (3)
wherein I_dc(x, y) is the eroded image, I_dd(x, y) is the dilated frame difference image, e_c is the first kernel, and e_d is the second kernel.
Further, in step S03, the method further comprises the following contour-detection steps:
Step S31: progressively scanning the dilated frame difference image until a non-zero point is found, and setting that point as the boundary starting point;
Step S32: scanning the adjacent non-zero points in a counterclockwise direction, taking each new non-zero point as the scanning starting point;
Step S33: repeating step S32 until the boundary starting point is reached again, thereby obtaining one complete contour;
Step S34: setting all pixels inside the contour in the dilated frame difference image to 0, and repeating step S31 until no non-zero point remains in the dilated frame difference image.
Further, in step S04, the coordinate transfer matrix is a 3*3 matrix. Let the coordinate transfer matrix be M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]]; then the calculation formulas of p_m(v, w) are specifically:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)   (4)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)   (5)
wherein v represents the abscissa of p_m, w represents the ordinate of p_m, and (x, y) is the bounding-rectangle center.
The zoom transfer matrix is a 1*3 matrix. Let the zoom transfer matrix be S = [s1 s2 s3]; then the calculation formula of the zoom factor β is specifically:
β = s1*x + s2*y + s3   (6)
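The coordinate transfer and zoom transfer of step S04 can be sketched as below; a minimal illustration under the assumption that M is a 3*3 perspective (homography) matrix and S = [s1 s2 s3], with hypothetical function names.

```python
import numpy as np

def map_center(M, x, y):
    """Perspective-map a panoramic point (x, y) to a pan-tilt
    movement position (v, w) using the 3*3 coordinate transfer matrix M."""
    denom = M[2, 0] * x + M[2, 1] * y + M[2, 2]
    v = (M[0, 0] * x + M[0, 1] * y + M[0, 2]) / denom
    w = (M[1, 0] * x + M[1, 1] * y + M[1, 2]) / denom
    return v, w

def zoom_factor(S, x, y):
    """Zoom factor beta = s1*x + s2*y + s3 from the 1*3 zoom transfer matrix."""
    return S[0] * x + S[1] * y + S[2]
```

With M equal to the identity matrix the mapping is a no-op, which is a convenient sanity check before substituting a calibrated matrix.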
Further, in step S04, the coordinate transfer matrix M and the zoom transfer matrix S are generated by calibrating the panoramic camera and the pan-tilt camera, with the following specific steps:
Step S41: choosing four vertices of the image tracking region of the panoramic camera, respectively p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4);
Step S42: adjusting the shooting position of the pan-tilt camera so that the shooting focus is aligned with p_c1, p_c2, p_c3, p_c4 in turn, obtaining the corresponding pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) and zoom factors β1, β2, β3, β4;
Step S43: substituting the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region and the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) of these four points into the following perspective transformation formulas:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)
A system of linear equations is obtained, and solving it gives the value of the coordinate transfer matrix M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]].
Substituting the zoom factors β1, β2, β3, β4 and the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region into the following transformation formula:
β = s1*x + s2*y + s3
A system of linear equations is obtained, and solving it gives the value of the zoom transfer matrix S = [s1 s2 s3].
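The calibration of steps S41-S43 amounts to solving linear systems built from the four point pairs. The sketch below assumes the matrix entry m33 is normalized to 1 (a common homography convention not stated in the text) and solves for S by least squares, since four equations constrain its three unknowns.

```python
import numpy as np

def calibrate(pc, pm, betas):
    """Solve the 3*3 coordinate transfer matrix M (with m33 fixed to 1)
    and the 1*3 zoom transfer matrix S from four calibration point pairs.
    pc: four panoramic points (x, y); pm: four pan-tilt points (v, w);
    betas: the four measured zoom factors."""
    A, b = [], []
    for (x, y), (v, w) in zip(pc, pm):
        # v*(m31*x + m32*y + 1) = m11*x + m12*y + m13, and likewise for w
        A.append([x, y, 1, 0, 0, 0, -v * x, -v * y]); b.append(v)
        A.append([0, 0, 0, x, y, 1, -w * x, -w * y]); b.append(w)
    m = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    M = np.append(m, 1.0).reshape(3, 3)
    # beta = s1*x + s2*y + s3 is over-determined: solve by least squares
    Az = np.array([[x, y, 1.0] for (x, y) in pc])
    S, *_ = np.linalg.lstsq(Az, np.array(betas, dtype=float), rcond=None)
    return M, S
```

No three of the four chosen vertices may be collinear, otherwise the 8x8 system becomes singular; the corners of a rectangular tracking region satisfy this naturally.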
To achieve the above object, the present invention also provides an image tracking and positioning system based on machine vision, the system comprising:
a panoramic camera unit for obtaining shot video image data and sending it to the control unit; a panoramic camera is provided in the panoramic camera unit, the panoramic camera being incapable of zooming or moving, capable of panoramic shooting, and transmitting the shot video images to the control unit in real time;
a pan-tilt camera unit for obtaining tracking-shot target data and performing target tracking shooting; a pan-tilt camera is provided in the pan-tilt camera unit, the pan-tilt camera being capable of zooming and moving, and being controlled by the control system through a control protocol to perform target tracking shooting;
a control unit for obtaining the video image data; performing inter-frame differencing between the gray-level image of the previous frame obtained by shooting and the gray-level image of the current frame; performing morphological erosion on the frame difference image with the first kernel and morphological dilation on the eroded image with the second kernel;
detecting all outer contours in the dilated frame difference image; taking the center of the bounding rectangle of the largest contour; converting the bounding-rectangle center with the coordinate transfer matrix to obtain the required movement position; converting the bounding-rectangle center with the zoom transfer matrix to obtain the zoom factor; then replacing the gray-level image of the previous frame with the gray-level image of the current frame, while transmitting the detected tracking-target movement position and zoom factor to the pan-tilt camera unit. Specifically, the control unit transmits the detected tracking-target position and zoom factor to the pan-tilt camera unit through a communication protocol, so that the pan-tilt camera unit can carry out tracking shooting of the target.
That is, the control unit receives the video image data of the panoramic camera unit, carries out image tracking, and transfers the detected tracking-target position and zoom factor to the pan-tilt camera unit, causing it to carry out target tracking shooting.
Specifically, the present invention can achieve the following advantageous effects:
With the method and system of the present invention, no special requirements are imposed on the tracked image target, and tracking and localization can be achieved without the target wearing any positioning device; moreover, the image tracking algorithm of the present invention has low computational complexity and high processing efficiency, and can meet the needs of real-time tracking and localization even where embedded CPU computing power is limited. With the method and system of the present invention, no special requirements are imposed on the type and specification of the cameras either, which greatly improves the extensibility of the product.
Specific embodiment
To make the purposes, technical schemes and advantages of the present invention clearer and easier to understand, the invention will be further described below with reference to the accompanying drawings and specific embodiments; those skilled in the art can easily understand further advantages and effects of the invention from the content disclosed in this specification.
The present invention can also be implemented or applied through other different specific examples, and the various details in this specification can likewise be modified and changed in various ways, based on different viewpoints and applications, without departing from the spirit of the present invention.
It should be understood that if directional indications (such as up, down, left, right, front, rear, etc.) are involved in the embodiments of the present invention, the directional indications are only used to explain the relative positional relationships, motion conditions and the like between the components in a certain particular pose (as shown in the drawings); if the particular pose changes, the directional indications change correspondingly.
In addition, if descriptions such as "first" and "second" are involved in the embodiments of the present invention, such descriptions are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. Furthermore, the technical solutions of the various embodiments can be combined with each other, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, it shall be understood that such a combination does not exist and is not within the protection scope claimed by the present invention.
An image tracking and localization method based on machine vision, the method comprising the following steps:
Step S01: performing inter-frame differencing between the gray-level image of the previous frame obtained by shooting and the gray-level image of the current frame, to obtain a frame difference image;
Step S02: performing morphological erosion on the frame difference image with a first kernel to obtain an eroded image, and performing morphological dilation on the eroded image with a second kernel to obtain a dilated frame difference image;
Step S03: detecting all outer contours in the dilated frame difference image to obtain a series of continuous contours, and taking the largest contour among them as the detected moving target;
Step S04: taking the center of the bounding rectangle of the largest contour, converting the bounding-rectangle center with a coordinate transfer matrix to obtain the required movement position, and converting the bounding-rectangle center with a zoom transfer matrix to obtain the zoom factor;
Step S05: replacing the gray-level image of the previous frame with the gray-level image of the current frame, and repeating step S01.
Specifically, in step S01, the inter-frame differencing is specifically as follows:
I_d(x, y) = 255, if abs(I_p(x, y) - I_c(x, y)) > thr; otherwise I_d(x, y) = 0   (1)
wherein I_d(x, y) is the frame difference image; thr is the differential threshold, and abs() denotes taking the absolute value in the mathematical formula; I_p(x, y) is the gray-level image of the previous frame; I_c(x, y) is the gray-level image of the current frame. The differential threshold is used to control the sensitivity of the algorithm.
In step S02, the second kernel is larger than the first kernel; that is, the kernels are tuned against the actual scene, and the kernel sizes are chosen according to the practical debugging effect.
The morphological erosion is specifically as follows:
I_dc(x, y) = min{ I_d(x + x', y + y') : (x', y') in e_c }   (2)
The morphological dilation is specifically as follows:
I_dd(x, y) = max{ I_dc(x + x', y + y') : (x', y') in e_d }   (3)
wherein I_dc(x, y) is the eroded image, I_dd(x, y) is the dilated frame difference image, e_c is the first kernel, and e_d is the second kernel.
In step S03, the method further comprises the following contour-detection steps:
Step S31: progressively scanning the dilated frame difference image until a non-zero point is found, and setting that point as the boundary starting point;
Step S32: scanning the adjacent non-zero points in a counterclockwise direction, taking each new non-zero point as the scanning starting point;
Step S33: repeating step S32 until the boundary starting point is reached again, thereby obtaining one complete contour;
Step S34: setting all pixels inside the contour in the dilated frame difference image to 0, and repeating step S31 until no non-zero point remains in the dilated frame difference image.
In step S04, the coordinate transfer matrix is a 3*3 matrix. Let the coordinate transfer matrix be M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]]; then the calculation formulas of p_m(v, w) are specifically:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)   (4)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)   (5)
wherein v represents the abscissa of p_m, w represents the ordinate of p_m, and (x, y) is the bounding-rectangle center.
The zoom transfer matrix is a 1*3 matrix. Let the zoom transfer matrix be S = [s1 s2 s3]; then the calculation formula of the zoom factor β is specifically:
β = s1*x + s2*y + s3   (6)
Preferably, in step S04, the coordinate transfer matrix M and the zoom transfer matrix S are generated by calibrating the panoramic camera and the pan-tilt camera, with the following specific steps:
Step S41: choosing four vertices of the image tracking region of the panoramic camera, respectively p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4);
Step S42: adjusting the shooting position of the pan-tilt camera so that the shooting focus is aligned with p_c1, p_c2, p_c3, p_c4 in turn, obtaining the corresponding pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) and zoom factors β1, β2, β3, β4;
Step S43: substituting the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region and the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) of these four points into the following perspective transformation formulas:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)
A system of linear equations is obtained, and solving it gives the value of the coordinate transfer matrix M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]].
Substituting the zoom factors β1, β2, β3, β4 and the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region into the following transformation formula:
β = s1*x + s2*y + s3
A system of linear equations is obtained, and solving it gives the value of the zoom transfer matrix S = [s1 s2 s3].
That is, before image tracking and localization is carried out, the coordinate transfer matrix M and the zoom transfer matrix S must first be obtained. As shown in Fig. 2, the calibration flow of the present invention is as follows:
Step S010: select four vertices of the image tracking region of the panoramic camera, respectively: p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4).
Step S020: adjust the shooting position of the pan-tilt camera so that the shooting focus is aligned with p_c1, p_c2, p_c3, p_c4 in turn, obtaining the corresponding pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) and zoom factors β1, β2, β3, β4.
Step S030: substitute the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region and the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) of these four points into the following perspective transformation formulas:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)
A system of linear equations is obtained, and solving it gives the coordinate transfer matrix M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]].
Substitute the zoom factors β1, β2, β3, β4 and the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region into the following zoom transfer formula:
β = s1*x + s2*y + s3
A system of linear equations is obtained, and solving it gives the zoom transfer matrix S = [s1 s2 s3].
The calibration flow described above need only be executed once, at system initialization, to obtain the transformation matrices. After the camera calibration is completed, image tracking can be carried out. Fig. 3 shows the image tracking flow chart of the present invention, which is specifically as follows:
Step S001: a panoramic gray-level image (i.e. the gray-level image of the current frame) I_c(x, y) is shot by the panoramic camera and sent to the control center.
Step S002: after receiving the panoramic gray-level image I_c(x, y), the control center executes the image tracking algorithm on I_c(x, y), obtains the movement position p_m and the zoom factor β, and sends them to the pan-tilt camera.
Step S003: after receiving the movement position p_m and the zoom factor β, the pan-tilt camera can carry out tracking, positioning and shooting.
Specifically, the specific steps of the image tracking algorithm in step S002 are as follows:
Step S0021: perform inter-frame differencing between the previous-frame gray-level image I_p(x, y) shot by the panoramic camera and the gray-level image I_c(x, y) of the current frame, to obtain the frame difference image I_d(x, y). The inter-frame difference formula used is:
I_d(x, y) = 255, if abs(I_p(x, y) - I_c(x, y)) > thr; otherwise I_d(x, y) = 0   (1)
wherein thr is the differential threshold, used to control the sensitivity of the algorithm.
Step S0022: according to formula (2),
I_dc(x, y) = min{ I_d(x + x', y + y') : (x', y') in e_c }   (2)
perform morphological erosion on the frame difference image I_d(x, y) with the first kernel e_c, to obtain the eroded image I_dc(x, y);
according to formula (3),
I_dd(x, y) = max{ I_dc(x + x', y + y') : (x', y') in e_d }   (3)
perform morphological dilation on the eroded image I_dc(x, y) with the second kernel e_d, to obtain the dilated frame difference image I_dd(x, y). The second kernel e_d is required to be larger than the first kernel e_c; the specific size range is chosen according to the practical debugging effect.
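The erosion and dilation of step S0022 can be sketched with square kernels as below. This is a naive sliding-minimum/maximum illustration (a real implementation would use an optimized morphology routine); edge replication at the borders is an assumption.

```python
import numpy as np

def erode(img, k):
    """Morphological erosion with a k*k square kernel:
    each output pixel is the minimum over its k*k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")  # replicate border pixels
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].min()
    return out

def dilate(img, k):
    """Morphological dilation with a k*k square kernel:
    each output pixel is the maximum over its k*k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].max()
    return out
```

On a binary frame difference image, erosion with the small first kernel removes isolated noise pixels, and dilation with the larger second kernel merges the surviving fragments of the moving target into one connected region.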
Step S0023: detect all outer contours in the dilated frame difference image I_dd(x, y), obtaining a series of continuous contours. Take the largest contour c_max as the detected moving target. The steps of the contour detection comprise:
Step S00231: progressively scan the dilated frame difference image I_dd(x, y) until a non-zero point is found, and set that point as the boundary starting point.
Step S00232: scan the adjacent non-zero points in a counterclockwise direction, taking each new non-zero point as the scanning starting point.
Step S00233: repeat step S00232 until the boundary starting point is reached again, obtaining one complete contour c_i.
Step S00234: set all pixels inside the contour c_i in the dilated frame difference image I_dd(x, y) to 0, and repeat step S00231 until no non-zero point remains in the dilated frame difference image I_dd(x, y).
Step S0024: take the bounding-rectangle center p_c(x, y) of the largest contour c_max, and convert p_c(x, y) with the coordinate transfer matrix M = [[m11, m12, m13], [m21, m22, m23], [m31, m32, m33]] to obtain the movement position p_m(v, w) of the pan-tilt camera, with the calculation formulas:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)   (4)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)   (5)
wherein v represents the abscissa of p_m and w represents the ordinate of p_m.
Convert p_c(x, y) with the zoom transfer matrix S = [s1 s2 s3] to obtain the zoom factor β, with the calculation formula:
β = s1*x + s2*y + s3   (6)
Step S0025: replace the gray-level image I_p(x, y) of the previous frame with the gray-level image I_c(x, y) of the current frame, and repeat step S0021.
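One full pass of steps S0021-S0025 can be condensed as follows. Morphological filtering is omitted and the bounding rectangle of all changed pixels stands in for the largest contour, so this is an illustration of the data flow rather than the complete algorithm.

```python
import numpy as np

def track_step(prev_gray, cur_gray, M, S, thr=25):
    """One condensed tracking pass: frame difference, bounding-rectangle
    center of the changed pixels, then the coordinate and zoom transfers
    that drive the pan-tilt camera. Returns ((v, w), beta) or None."""
    diff = np.abs(prev_gray.astype(np.int16) - cur_gray.astype(np.int16)) > thr
    if not diff.any():
        return None  # no moving target detected in this frame pair
    ys, xs = np.nonzero(diff)
    x = (xs.min() + xs.max()) / 2.0  # bounding-rectangle center p_c(x, y)
    y = (ys.min() + ys.max()) / 2.0
    denom = M[2, 0] * x + M[2, 1] * y + M[2, 2]
    v = (M[0, 0] * x + M[0, 1] * y + M[0, 2]) / denom  # movement position p_m
    w = (M[1, 0] * x + M[1, 1] * y + M[1, 2]) / denom
    beta = S[0] * x + S[1] * y + S[2]  # zoom factor
    return (v, w), beta
```

The caller then replaces the previous frame with the current frame (step S0025) and invokes the function again for the next frame pair.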
To achieve the above object, as shown in Fig. 4, the present invention also provides an image tracking and positioning system based on machine vision, the system comprising:
a panoramic camera unit for obtaining shot video image data and sending it to the control unit; a panoramic camera is provided in the panoramic camera unit, the panoramic camera being incapable of zooming or moving, capable of panoramic shooting, and transmitting the shot video images to the control unit in real time;
a pan-tilt camera unit for obtaining tracking-shot target data and performing target tracking shooting; a pan-tilt camera is provided in the pan-tilt camera unit, the pan-tilt camera being capable of zooming and moving, and being controlled by the control system through a control protocol to perform target tracking shooting;
a control unit for obtaining the video image data; performing inter-frame differencing between the gray-level image of the previous frame obtained by shooting and the gray-level image of the current frame; performing morphological erosion on the frame difference image with the first kernel and morphological dilation on the eroded image with the second kernel;
detecting all outer contours in the dilated frame difference image; taking the center of the bounding rectangle of the largest contour; converting the bounding-rectangle center with the coordinate transfer matrix to obtain the required movement position; converting the bounding-rectangle center with the zoom transfer matrix to obtain the zoom factor; then replacing the gray-level image of the previous frame with the gray-level image of the current frame, while transmitting the detected tracking-target movement position and zoom factor to the pan-tilt camera unit. Specifically, the control unit transmits the detected tracking-target position and zoom factor to the pan-tilt camera unit through a communication protocol, so that the pan-tilt camera unit can carry out tracking shooting of the target.
That is, the control unit receives the video image data of the panoramic camera unit, carries out image tracking, and transfers the detected tracking-target position and zoom factor to the pan-tilt camera unit, causing it to carry out target tracking shooting.
Specifically, Fig. 5 shows an image tracking system structure diagram of the present invention:
The panoramic camera specifically cannot zoom or move; it can carry out panoramic shooting and transmits the shot video images to the control unit.
The pan-tilt camera specifically can zoom and move; it is controlled by the control system through a control protocol and carries out target tracking shooting.
The control module can specifically receive the video images of the panoramic camera, carry out image tracking, and transfer the detected tracking-target position and zoom factor to the pan-tilt camera for tracking shooting.
By the method for the invention and system, an image trace positioning system can be built, is realized to object real-time tracking
The function of shooting, and to tracking image object without particular/special requirement, image object can be realized without wearing positioning device and chase after
Track positioning;Image tracking algorithm computation complexity of the invention is low, and treatment effeciency is high, can be limited in embedded type CPU computational load
In the case where meet real-time tracking positioning the needs of;By the method for the invention and system, do not have to the type and spec of camera yet
There is particular/special requirement, substantially increases the expansibility of product.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent of the present invention. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the patent of the present invention shall be subject to the appended claims.