CN109146927A - An image tracking and positioning method and system based on machine vision - Google Patents

An image tracking and positioning method and system based on machine vision

Info

Publication number
CN109146927A
Authority
CN
China
Prior art keywords
image
frame difference
frame
transfer matrix
zoom
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811038421.8A
Other languages
Chinese (zh)
Other versions
CN109146927B (en)
Inventor
赵定金
朱正辉
张常华
明德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Baolun Electronics Co ltd
Original Assignee
Guangzhou Baolun Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Baolun Electronics Co Ltd filed Critical Guangzhou Baolun Electronics Co Ltd
Priority to CN201811038421.8A priority Critical patent/CN109146927B/en
Publication of CN109146927A publication Critical patent/CN109146927A/en
Application granted granted Critical
Publication of CN109146927B publication Critical patent/CN109146927B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of image tracking, and in particular to an image tracking and positioning method and system based on machine vision. Inter-frame differencing is performed on the gray-scale image of the previous frame captured by a panoramic camera and the gray-scale image of the current frame to obtain a frame-difference image. Morphological erosion is applied to the frame-difference image with a first kernel to obtain an eroded image, and morphological dilation is applied to the eroded image with a second kernel to obtain a dilated frame-difference image. All outer contours in the frame-difference image are detected, and the largest contour is taken as the detected moving target. The center of the target's bounding rectangle is converted with a coordinate transfer matrix to obtain the movement position of the pan-tilt camera, and converted with a zoom transfer matrix to obtain the zoom factor. The gray-scale image of the previous frame is then replaced with the gray-scale image of the current frame, and the first step is repeated. With the method and system of the invention, an image tracking algorithm with low computational complexity and high processing efficiency can be realized, and the expansibility of the product is also improved.

Description

An image tracking and positioning method and system based on machine vision
Technical field
The present invention relates to the field of image tracking, and in particular to an image tracking and positioning method and system based on machine vision.
Background art
Image tracking technology refers to technology in which an object captured by a camera is located by some means (such as machine vision, infrared, or ultrasound) and the camera is directed to track that object so that it always remains within the camera's field of view. Image tracking systems are widely used in industries such as education, conferencing, medical care, court trials, and security monitoring. In education and conferencing in particular, fully automatic tracking and shooting schemes have led the technical trend of fully automatic tracking photography at home and abroad, and have laid a solid technical foundation for the fully automatic production of quality courses and video conferences.
Existing image tracking technology mostly uses machine-vision methods, i.e., moving-target detection. Common moving-target detection algorithms include the frame-difference method, background subtraction, and optical flow. All of these methods identify and detect targets from the contextual information of video frames. Among them, the frame-difference method obtains the contour of a moving target by taking the difference of two adjacent frames of a video image sequence. Its principle is simple, and since the time interval between adjacent frames is small, it is insensitive to slowly changing ambient light.
At present, related products on the market mostly run the moving-target detection algorithm on an embedded device. However, owing to the high computational complexity of the detection algorithm and the computational-load bottleneck of an embedded CPU, the moving-target detection algorithm incurs high CPU usage, which severely affects the development cost and effectiveness of the product.
Moreover, the image tracking function of most current products is bundled with a fixed camera; the user cannot choose the camera model, and the product lacks expansibility.
Summary of the invention
In view of the above defects of the prior art, namely the high CPU consumption of image tracking methods and the poor expansibility of products, the present invention provides an image tracking and positioning method and system based on machine vision, which can realize positioning and tracking photography of a target image.
The present invention is realized by the following technical scheme:
An image tracking and positioning method based on machine vision, the method specifically comprising the following steps:
Step S01: perform inter-frame differencing on the captured gray-scale image of the previous frame and the gray-scale image of the current frame to obtain a frame-difference image;
Step S02: perform morphological erosion on the frame-difference image with a first kernel to obtain an eroded image, and perform morphological dilation on the eroded image with a second kernel to obtain a dilated frame-difference image;
a kernel is a concept in morphological image processing, generally a rectangular neighborhood, such as a 3*3 or 8*8 rectangular neighborhood;
Step S03: detect all outer contours in the dilated frame-difference image to obtain a series of continuous contours, and take the largest contour as the detected moving target;
Step S04: take the center of the bounding rectangle of the largest contour, convert the bounding-rectangle center with the coordinate transfer matrix to obtain the required movement position, and convert the bounding-rectangle center with the zoom transfer matrix to obtain the zoom factor;
Step S05: replace the gray-scale image of the previous frame with the gray-scale image of the current frame, and repeat step S01.
Further, in step S01, the inter-frame differencing is specifically as follows:
I_d(x, y) = 255, if abs(I_c(x, y) - I_p(x, y)) > thr; I_d(x, y) = 0, otherwise (1)
where I_d(x, y) is the frame-difference image; thr is the difference threshold and abs denotes the absolute value; I_p(x, y) is the gray-scale image of the previous frame; I_c(x, y) is the gray-scale image of the current frame. The difference threshold is used to control the sensitivity of the algorithm.
Further, in step S02, the second kernel is larger than the first kernel; that is, the kernels are tuned for the actual scene, and the kernel size range is chosen according to the actual debugging effect.
The morphological erosion is specifically as follows:
I_dc(x, y) = min over (x', y') in e_c of I_d(x + x', y + y') (2)
The morphological dilation is specifically as follows:
I_dd(x, y) = max over (x', y') in e_d of I_dc(x + x', y + y') (3)
where I_dc(x, y) is the eroded image and I_dd(x, y) is the dilated frame-difference image. Further, step S03 also includes the following contour-detection steps:
Step S31: scan the dilated frame-difference image line by line until a non-zero point is found, and set that point as the boundary starting point;
Step S32: scan for an adjacent non-zero point in a counterclockwise direction, and take the newly found non-zero point as the new scan starting point;
Step S33: repeat step S32 until the boundary starting point is reached again, yielding one complete contour;
Step S34: set all pixels within the contour in the dilated frame-difference image to 0, and repeat step S31 until no non-zero point remains in the dilated frame-difference image.
Further, in step S04, the coordinate transfer matrix is a 3*3 matrix. Let the coordinate transfer matrix be M = [m11 m12 m13; m21 m22 m23; m31 m32 m33]. Then the calculation formula for p_m(v, w) is specifically:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33) (5)
where v is the abscissa of p_m and w is the ordinate of p_m.
The zoom transfer matrix is a 1*3 matrix. Let the zoom transfer matrix be S = [s1 s2 s3]. Then the calculation formula for the zoom factor β is specifically:
β = s1*x + s2*y + s3 (6)
Further, in step S04, the coordinate transfer matrix M and the zoom transfer matrix S are generated by calibrating the panoramic camera and the pan-tilt camera, with the following specific steps:
Step S41: choose four vertices of the image tracking region of the panoramic camera, namely
p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4);
Step S42: adjust the shooting position of the pan-tilt camera so that the shooting focus is aligned with p_c1, p_c2, p_c3, p_c4 in turn, obtaining the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) corresponding to these four points, and the zoom factors β1, β2, β3, β4;
Step S43: substitute the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region and the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) of these four points into the following perspective transformation formula, for i = 1, ..., 4:
v_i * (m31*x_i + m32*y_i + m33) = m11*x_i + m12*y_i + m13
w_i * (m31*x_i + m32*y_i + m33) = m21*x_i + m22*y_i + m23, with m33 = 1 (7)
A system of linear equations is obtained, and solving it yields the value of the coordinate transfer matrix M = [m11 m12 m13; m21 m22 m23; m31 m32 m33];
substitute the zoom factors β1, β2, β3, β4 and the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region into the following transformation formula:
β_i = s1*x_i + s2*y_i + s3, i = 1, ..., 4 (8)
A system of linear equations is obtained, and solving it yields the value of the zoom transfer matrix S = [s1 s2 s3].
To achieve the above object, the present invention also provides an image tracking and positioning system based on machine vision, the system comprising:
a panoramic camera unit, for obtaining captured video image data and sending it to the control unit;
a panoramic camera is provided in the panoramic camera unit; the panoramic camera cannot zoom or move, can perform panoramic shooting, and transmits the captured video image to the control unit in real time;
a pan-tilt camera unit, for obtaining tracking-target data and performing target-tracking shooting;
a pan-tilt camera is provided in the pan-tilt camera unit; the pan-tilt camera can zoom and move, is controlled by the control system through a control protocol, and performs target-tracking shooting;
a control unit, for obtaining the video image data and performing inter-frame differencing on the captured gray-scale image of the previous frame and the gray-scale image of the current frame; performing morphological erosion on the frame-difference image with the first kernel, and performing morphological dilation on the eroded image with the second kernel;
detecting all outer contours in the dilated frame-difference image, taking the center of the bounding rectangle of the largest contour, converting the bounding-rectangle center with the coordinate transfer matrix to obtain the required movement position, then converting the bounding-rectangle center with the zoom transfer matrix to obtain the zoom factor, and then replacing the gray-scale image of the previous frame with the gray-scale image of the current frame, while transmitting the detected target movement position and zoom factor to the pan-tilt camera unit. Specifically, the control unit transmits the detected tracking-target position and zoom factor to the pan-tilt camera unit through a communication protocol, so that the pan-tilt camera unit can track and shoot the target.
That is, the control unit receives the video image data of the panoramic camera unit, performs image tracking, and transmits the detected tracking-target position and zoom factor to the pan-tilt camera unit so that it performs target-tracking shooting.
Specifically, the present invention has the following beneficial effects:
With the method and system of the invention, there is no special requirement on the tracked image target, and tracking and positioning can be realized without the target wearing a positioning device. The image tracking algorithm of the invention has low computational complexity and high processing efficiency, and can meet the needs of real-time tracking and positioning even with the limited computational load of an embedded CPU. With the method and system of the invention, there is also no special requirement on the type and specification of the camera, which greatly improves the expansibility of the product.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of an image tracking and positioning method based on machine vision according to the present invention;
Fig. 2 is a calibration flow diagram of an image tracking and positioning method based on machine vision according to the present invention;
Fig. 3 is an image tracking flow diagram of an image tracking and positioning method based on machine vision according to the present invention;
Fig. 4 is a unit diagram of an image tracking and positioning system based on machine vision according to the present invention;
Fig. 5 is a structural diagram of an image tracking system of an image tracking and positioning system based on machine vision according to the present invention.
The realization of the object of the invention, its functional characteristics, and its advantages will be further described with reference to the accompanying drawings and the embodiments.
Specific embodiments
To make the purposes, technical schemes, and advantages of the present invention clearer and easier to understand, the invention is further described below with reference to the accompanying drawings and specific embodiments; those skilled in the art can easily understand further advantages and effects of the invention from the content disclosed in this specification.
The present invention can also be implemented or applied through other different specific examples, and the various details in this specification can likewise be modified and changed based on different viewpoints and applications without departing from the spirit of the invention.
It should be understood that if directional indications (such as up, down, left, right, front, back, ...) are involved in the embodiments of the present invention, they are only used to explain the relative positional relationship and motion of the components under a certain particular pose (as shown in the drawings); if the particular pose changes, the directional indication changes accordingly.
In addition, if descriptions such as "first" and "second" are involved in the embodiments of the present invention, they are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. Furthermore, the technical solutions of the embodiments may be combined with each other, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be understood not to exist and not to fall within the protection scope claimed by the present invention.
An image tracking and positioning method based on machine vision, the method specifically comprising the following steps:
Step S01: perform inter-frame differencing on the captured gray-scale image of the previous frame and the gray-scale image of the current frame to obtain a frame-difference image;
Step S02: perform morphological erosion on the frame-difference image with a first kernel to obtain an eroded image, and perform morphological dilation on the eroded image with a second kernel to obtain a dilated frame-difference image;
Step S03: detect all outer contours in the dilated frame-difference image to obtain a series of continuous contours, and take the largest contour as the detected moving target;
Step S04: take the center of the bounding rectangle of the largest contour, convert the bounding-rectangle center with the coordinate transfer matrix to obtain the required movement position, and convert the bounding-rectangle center with the zoom transfer matrix to obtain the zoom factor;
Step S05: replace the gray-scale image of the previous frame with the gray-scale image of the current frame, and repeat step S01.
Specifically, in step S01, the inter-frame differencing is specifically as follows:
I_d(x, y) = 255, if abs(I_c(x, y) - I_p(x, y)) > thr; I_d(x, y) = 0, otherwise (1)
where I_d(x, y) is the frame-difference image; thr is the difference threshold and abs denotes the absolute value, i.e., abs(·) represents taking the absolute value in a mathematical formula; I_p(x, y) is the gray-scale image of the previous frame; I_c(x, y) is the gray-scale image of the current frame. The difference threshold is used to control the sensitivity of the algorithm.
In step S02, the second kernel is larger than the first kernel; that is, the kernels are tuned for the actual scene, and the kernel size range is chosen according to the actual debugging effect.
The morphological erosion is specifically as follows:
I_dc(x, y) = min over (x', y') in e_c of I_d(x + x', y + y') (2)
The morphological dilation is specifically as follows:
I_dd(x, y) = max over (x', y') in e_d of I_dc(x + x', y + y') (3)
where I_dc(x, y) is the eroded image and I_dd(x, y) is the dilated frame-difference image;
step S03 also includes the following contour-detection steps:
Step S31: scan the dilated frame-difference image line by line until a non-zero point is found, and set that point as the boundary starting point;
Step S32: scan for an adjacent non-zero point in a counterclockwise direction, and take the newly found non-zero point as the new scan starting point;
Step S33: repeat step S32 until the boundary starting point is reached again, yielding one complete contour;
Step S34: set all pixels within the contour in the dilated frame-difference image to 0, and repeat step S31 until no non-zero point remains in the dilated frame-difference image.
In step S04, the coordinate transfer matrix is a 3*3 matrix. Let the coordinate transfer matrix be M = [m11 m12 m13; m21 m22 m23; m31 m32 m33]. Then the calculation formula for p_m(v, w) is specifically:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33) (5)
where v is the abscissa of p_m and w is the ordinate of p_m.
The zoom transfer matrix is a 1*3 matrix. Let the zoom transfer matrix be S = [s1 s2 s3]. Then the calculation formula for the zoom factor β is specifically:
β = s1*x + s2*y + s3 (6)
Preferably, in step S04, the coordinate transfer matrix M and the zoom transfer matrix S are generated by calibrating the panoramic camera and the pan-tilt camera, with the following specific steps:
Step S41: choose four vertices of the image tracking region of the panoramic camera, namely
p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4);
Step S42: adjust the shooting position of the pan-tilt camera so that the shooting focus is aligned with p_c1, p_c2, p_c3, p_c4 in turn, obtaining the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) corresponding to these four points, and the zoom factors β1, β2, β3, β4;
Step S43: substitute the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region and the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) of these four points into the following perspective transformation formula, for i = 1, ..., 4:
v_i * (m31*x_i + m32*y_i + m33) = m11*x_i + m12*y_i + m13
w_i * (m31*x_i + m32*y_i + m33) = m21*x_i + m22*y_i + m23, with m33 = 1 (7)
A system of linear equations is obtained, and solving it yields the value of the coordinate transfer matrix M = [m11 m12 m13; m21 m22 m23; m31 m32 m33];
substitute the zoom factors β1, β2, β3, β4 and the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region into the following transformation formula:
β_i = s1*x_i + s2*y_i + s3, i = 1, ..., 4 (8)
A system of linear equations is obtained, and solving it yields the value of the zoom transfer matrix S = [s1 s2 s3].
That is to say, before image tracking and positioning is performed, the coordinate transfer matrix M and the zoom transfer matrix S must first be obtained. As shown in Fig. 2, the calibration flow of the invention is as follows:
Step S010: select four vertices of the image tracking region of the panoramic camera, namely:
p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4)
Step S020: adjust the shooting position of the pan-tilt camera so that the shooting focus is aligned with p_c1, p_c2, p_c3, p_c4 in turn, obtaining the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) corresponding to these four points, and the zoom factors β1, β2, β3, β4.
Step S030: substitute the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region and the pan-tilt shooting positions p_m1(v1, w1), p_m2(v2, w2), p_m3(v3, w3), p_m4(v4, w4) of these four points into the following perspective transformation formula, for i = 1, ..., 4:
v_i * (m31*x_i + m32*y_i + m33) = m11*x_i + m12*y_i + m13
w_i * (m31*x_i + m32*y_i + m33) = m21*x_i + m22*y_i + m23, with m33 = 1 (7)
A system of linear equations is obtained, and solving it yields the coordinate transfer matrix M = [m11 m12 m13; m21 m22 m23; m31 m32 m33];
substitute the zoom factors β1, β2, β3, β4 and the four vertices p_c1(x1, y1), p_c2(x2, y2), p_c3(x3, y3), p_c4(x4, y4) of the image tracking region into the following zoom transfer formula:
β_i = s1*x_i + s2*y_i + s3, i = 1, ..., 4 (8)
A system of linear equations is obtained, and solving it yields the zoom transfer matrix S = [s1 s2 s3].
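Under the reconstruction of formulas (7) and (8) above, the calibration reduces to one 8x8 linear solve for M (assuming the usual normalization m33 = 1, which the patent does not state explicitly) and one least-squares fit for S, since the four calibration points give four equations for the three unknowns of S. The following numpy sketch uses illustrative function and variable names, not names from the patent:

```python
import numpy as np

def calibrate(pc, pm, beta):
    """Solve the coordinate transfer matrix M (formula (7), with m33 = 1
    assumed) and the zoom transfer vector S (formula (8)) from the four
    panorama vertices pc, pan-tilt positions pm, and zoom factors beta."""
    A = np.zeros((8, 8))
    b = np.zeros(8)
    for i, ((x, y), (v, w)) in enumerate(zip(pc, pm)):
        # v*(m31*x + m32*y + 1) = m11*x + m12*y + m13, rearranged so the
        # unknowns [m11,m12,m13,m21,m22,m23,m31,m32] appear linearly.
        A[2 * i] = [x, y, 1, 0, 0, 0, -v * x, -v * y]
        b[2 * i] = v
        A[2 * i + 1] = [0, 0, 0, x, y, 1, -w * x, -w * y]
        b[2 * i + 1] = w
    m = np.linalg.solve(A, b)
    M = np.append(m, 1.0).reshape(3, 3)
    # Zoom: four equations beta_i = s1*x_i + s2*y_i + s3, three unknowns,
    # fitted by least squares.
    Az = np.column_stack([np.array(pc, float), np.ones(4)])
    S, *_ = np.linalg.lstsq(Az, np.asarray(beta, float), rcond=None)
    return M, S
```

With exact synthetic measurements, this recovers the generating M and S; with noisy real calibration clicks, the least-squares step absorbs small inconsistencies in the zoom equations.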
The calibration flow described above need only be executed once at system initialization to obtain the transfer matrices. After camera calibration is completed, image tracking can be performed. As shown in Fig. 3, the image tracking flow of the invention is as follows:
Step S001: a panoramic gray-scale image (i.e., the gray-scale image of the current frame) I_c(x, y) is captured by the panoramic camera and sent to the control center.
Step S002: after receiving the panoramic gray-scale image I_c(x, y), the control center executes the image tracking algorithm on I_c(x, y), obtains the movement position p_m and the zoom factor β, and sends them to the pan-tilt camera.
Step S003: after receiving the movement position p_m and the zoom factor β, the pan-tilt camera performs tracking, positioning, and shooting.
Specifically, the specific steps of the image tracking algorithm in step S002 are as follows:
Step S0021: perform inter-frame differencing on the gray-scale image of the previous frame I_p(x, y) captured by the panoramic camera and the gray-scale image of the current frame I_c(x, y) to obtain the frame-difference image I_d(x, y). The inter-frame difference formula used is:
I_d(x, y) = 255, if abs(I_c(x, y) - I_p(x, y)) > thr; I_d(x, y) = 0, otherwise (1)
where thr is the difference threshold, used to control the sensitivity of the algorithm.
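The frame-difference step (1) reduces to a single vectorized threshold in numpy. This is a minimal sketch under the assumption that foreground pixels take the value 255 (the later steps only require them to be non-zero); the default `thr=25` is an arbitrary example value, not one from the patent:

```python
import numpy as np

def frame_difference(prev_gray, cur_gray, thr=25):
    """Formula (1): binary frame-difference image. Pixels whose absolute
    gray-level change exceeds thr become foreground (255), others 0."""
    # Widen to int16 so the subtraction of uint8 images cannot wrap around.
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return np.where(diff > thr, 255, 0).astype(np.uint8)
```

A larger `thr` makes the algorithm less sensitive to slow lighting changes and sensor noise, at the cost of missing low-contrast motion.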
Step S0022: according to the formula
I_dc(x, y) = min over (x', y') in e_c of I_d(x + x', y + y') (2)
perform morphological erosion on the frame-difference image I_d(x, y) with the first kernel e_c to obtain the eroded image I_dc(x, y); according to the formula
I_dd(x, y) = max over (x', y') in e_d of I_dc(x + x', y + y') (3)
perform morphological dilation on the eroded image I_dc(x, y) with the second kernel e_d to obtain the dilated frame-difference image I_dd(x, y). The second kernel e_d is required to be larger than the first kernel e_c; the specific size range is chosen according to the actual debugging effect.
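A minimal pure-numpy sketch of formulas (2) and (3) with flat rectangular kernels: erosion is a sliding minimum, dilation a sliding maximum. The kernel sizes (3 and 5) and the constant-padding border handling are illustrative assumptions; the patent only requires the second kernel to be larger than the first:

```python
import numpy as np

def _rect_filter(img, ksize, op):
    """Apply a min (erosion) or max (dilation) filter with a flat
    ksize x ksize rectangular kernel, ksize odd."""
    pad = ksize // 2
    # Pad with the neutral element so the border does not spuriously
    # erode or dilate: 255 for the min filter, 0 for the max filter.
    fill = 255 if op is np.min else 0
    p = np.pad(img, pad, constant_values=fill)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = op(p[i:i + ksize, j:j + ksize])
    return out

def erode(img, ksize=3):   # formula (2), first kernel e_c
    return _rect_filter(img, ksize, np.min)

def dilate(img, ksize=5):  # formula (3), second (larger) kernel e_d
    return _rect_filter(img, ksize, np.max)
```

Erosion with the small kernel removes isolated noise pixels from the frame-difference image; dilation with the larger kernel then reconnects and fills the surviving target region, which is why e_d must be larger than e_c.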
Step S0023: detect all outer contours in the dilated frame-difference image I_dd(x, y), obtaining a series of continuous contours. Take the largest contour c_max as the detected moving target. The contour-detection steps are:
Step S00231: scan the dilated frame-difference image I_dd(x, y) line by line until a non-zero point is found, and set that point as the boundary starting point.
Step S00232: scan for an adjacent non-zero point in a counterclockwise direction, and take the newly found non-zero point as the new scan starting point.
Step S00233: repeat step S00232 until the boundary starting point is reached again, yielding one complete contour c_i.
Step S00234: set all pixels within contour c_i in the dilated frame-difference image I_dd(x, y) to 0, and repeat step S00231 until no non-zero point remains in I_dd(x, y).
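A hedged sketch of what step S0023 delivers to step S0024. Instead of the counterclockwise border following of steps S00231-S00234, this stand-in extracts 8-connected components by flood fill and returns the bounding-rectangle center of the largest one, which is the quantity the next step consumes; the function name is illustrative, not from the patent:

```python
import numpy as np
from collections import deque

def largest_target_center(binary):
    """Return the bounding-rectangle center (x, y) of the largest
    8-connected non-zero region, or None if the image is empty.
    (Connected-component flood fill stands in for the patent's
    border-following contour detection.)"""
    visited = np.zeros(binary.shape, bool)
    best, best_size = None, 0
    h, w = binary.shape
    for sy, sx in zip(*np.nonzero(binary)):
        if visited[sy, sx]:
            continue
        # BFS flood fill of one component, tracking its bounding box.
        q = deque([(sy, sx)])
        visited[sy, sx] = True
        size, y0, y1, x0, x1 = 0, sy, sy, sx, sx
        while q:
            y, x = q.popleft()
            size += 1
            y0, y1 = min(y0, y), max(y1, y)
            x0, x1 = min(x0, x), max(x1, x)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        q.append((ny, nx))
        if size > best_size:
            best_size = size
            best = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
    return best
```

Picking the largest region implements the "take the largest contour c_max" rule: smaller regions left over after erosion and dilation are treated as residual noise, not as the tracked target.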
Step S0024: take the center p_c(x, y) of the bounding rectangle of the largest contour c_max, and convert p_c(x, y) with the coordinate transfer matrix M = [m11 m12 m13; m21 m22 m23; m31 m32 m33] to obtain the movement position p_m(v, w) of the pan-tilt camera. The calculation formula is:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33) (5)
where v is the abscissa of p_m and w is the ordinate of p_m;
convert p_c(x, y) with the zoom transfer matrix S = [s1 s2 s3] to obtain the zoom factor β. The calculation formula is:
β = s1*x + s2*y + s3 (6)
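Formulas (5) and (6), as reconstructed above, amount to a standard perspective mapping plus an affine zoom term. A small sketch (the function name is illustrative):

```python
import numpy as np

def to_pan_tilt(M, S, pc):
    """Map the bounding-rectangle center pc = (x, y) of the detected target
    to the pan-tilt position (v, w) via formula (5) and to the zoom factor
    beta via formula (6)."""
    x, y = pc
    denom = M[2, 0] * x + M[2, 1] * y + M[2, 2]  # shared perspective divisor
    v = (M[0, 0] * x + M[0, 1] * y + M[0, 2]) / denom
    w = (M[1, 0] * x + M[1, 1] * y + M[1, 2]) / denom
    beta = S[0] * x + S[1] * y + S[2]
    return (v, w), beta
```

With M the identity and S = [0, 0, c], the mapping degenerates to "point straight at the panorama coordinate with constant zoom c", which is a convenient sanity check after calibration.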
Step S0025: replace the gray-scale image of the previous frame I_p(x, y) with the gray-scale image of the current frame I_c(x, y), and repeat step S0021.
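The loop structure of steps S0021-S0025 can be sketched as follows; `process` is a caller-supplied stand-in for the detection and conversion steps S0022-S0024, and the foreground value 255 and default threshold are assumptions, as before:

```python
import numpy as np

def track_stream(frames, process, thr=25):
    """Skeleton of the S0021-S0025 loop: difference each frame against the
    stored previous frame, hand the binary image to `process` (morphology,
    contour detection, and coordinate/zoom conversion, steps S0022-S0024),
    then replace the previous frame with the current one (step S0025)."""
    frames = iter(frames)
    prev = next(frames).astype(np.int16)
    results = []
    for frame in frames:
        cur = frame.astype(np.int16)
        fd = np.where(np.abs(cur - prev) > thr, 255, 0).astype(np.uint8)  # (1)
        results.append(process(fd))
        prev = cur  # S0025: previous frame <- current frame
    return results
```

Because only the previous frame is stored and each iteration is a handful of whole-image passes, the per-frame cost stays constant, which is the property the invention relies on for embedded CPUs.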
To achieve the above object, as shown in Fig. 4, the present invention also provides an image tracking and positioning system based on machine vision, the system comprising:
a panoramic camera unit, for obtaining captured video image data and sending it to the control unit;
a panoramic camera is provided in the panoramic camera unit; the panoramic camera cannot zoom or move, can perform panoramic shooting, and transmits the captured video image to the control unit in real time;
a pan-tilt camera unit, for obtaining tracking-target data and performing target-tracking shooting;
a pan-tilt camera is provided in the pan-tilt camera unit; the pan-tilt camera can zoom and move, is controlled by the control system through a control protocol, and performs target-tracking shooting;
a control unit, for obtaining the video image data and performing inter-frame differencing on the captured gray-scale image of the previous frame and the gray-scale image of the current frame; performing morphological erosion on the frame-difference image with the first kernel, and performing morphological dilation on the eroded image with the second kernel;
detecting all outer contours in the dilated frame-difference image, taking the center of the bounding rectangle of the largest contour, converting the bounding-rectangle center with the coordinate transfer matrix to obtain the required movement position, then converting the bounding-rectangle center with the zoom transfer matrix to obtain the zoom factor, and then replacing the gray-scale image of the previous frame with the gray-scale image of the current frame, while transmitting the detected target movement position and zoom factor to the pan-tilt camera unit. Specifically, the control unit transmits the detected tracking-target position and zoom factor to the pan-tilt camera unit through a communication protocol, so that the pan-tilt camera unit can track and shoot the target.
That is, the control unit receives the video image data of the panoramic camera unit, performs image tracking, and transmits the detected tracking-target position and zoom factor to the pan-tilt camera unit so that it performs target-tracking shooting.
Specifically, Fig. 5 shows a structural diagram of an image tracking system of the invention:
the panoramic camera cannot zoom or move, can perform panoramic shooting, and transmits the captured video image to the control unit system;
the pan-tilt camera can zoom and move, is controlled by the control system through a control protocol, and performs target-tracking shooting;
the control module can receive the video image of the panoramic camera, perform image tracking, and transmit the detected tracking-target position and zoom factor to the pan-tilt camera for tracking shooting.
By the method for the invention and system, an image trace positioning system can be built, is realized to object real-time tracking The function of shooting, and to tracking image object without particular/special requirement, image object can be realized without wearing positioning device and chase after Track positioning;Image tracking algorithm computation complexity of the invention is low, and treatment effeciency is high, can be limited in embedded type CPU computational load In the case where meet real-time tracking positioning the needs of;By the method for the invention and system, do not have to the type and spec of camera yet There is particular/special requirement, substantially increases the expansibility of product.
The embodiments described above express only several embodiments of the present invention, and their description is relatively specific and detailed, but they are not therefore to be interpreted as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all belong to the protection scope of the invention. Therefore, the scope of protection of this patent shall be subject to the appended claims.

Claims (8)

1. An image tracking and positioning method based on machine vision, characterized in that the method comprises the following steps:
Step S01: perform inter-frame differencing between the grayscale image of the previous frame and the grayscale image of the current frame obtained by shooting, to obtain a frame difference image;
Step S02: perform morphological erosion on the frame difference image with a first kernel to obtain an eroded image, then perform morphological dilation on the eroded image with a second kernel to obtain a dilated frame difference image;
Step S03: detect all outer contours in the dilated frame difference image to obtain a series of continuous contours, and take the largest contour as the detected moving target;
Step S04: take the center of the bounding rectangle of the largest contour; convert the coordinates of the bounding rectangle center with the coordinate transfer matrix to obtain the required shift position, and convert the bounding rectangle center with the zoom transfer matrix to obtain the zoom factor;
Step S05: replace the grayscale image of the previous frame with the grayscale image of the current frame, and repeat from step S01.
2. The machine-vision-based image tracking and positioning method according to claim 1, characterized in that in step S01 the inter-frame differencing is performed as follows:
Id(x, y) = 255, if abs(Ic(x, y) − Ip(x, y)) > Thr; otherwise Id(x, y) = 0    (1)
where Id(x, y) is the frame difference image, Ic and Ip are the grayscale images of the current and previous frame, Thr is the difference threshold, and abs denotes the absolute value.
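A NumPy sketch of the thresholded inter-frame difference described in claim 2, assuming a 255/0 binarization (the function name is illustrative):

```python
import numpy as np

def frame_difference(prev, cur, thr):
    """Return a binary frame difference image: 255 where the absolute
    grayscale change exceeds thr, 0 elsewhere."""
    # Widen to a signed type first so the subtraction cannot wrap around
    diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
    return np.where(diff > thr, 255, 0).astype(np.uint8)
```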
3. The machine-vision-based image tracking and positioning method according to claim 1, characterized in that in step S02 the second kernel is larger than the first kernel;
the morphological erosion is performed as follows:
E(x, y) = min over (i, j) in K1 of Id(x + i, y + j)    (2)
and the morphological dilation as follows:
D(x, y) = max over (i, j) in K2 of E(x + i, y + j)    (3)
where K1 and K2 are the first and second kernels, E is the eroded image, and D is the dilated frame difference image.
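Erosion and dilation with square all-ones kernels can be written directly in NumPy as min/max filters; this is a didactic sketch (a real system would use an optimized library routine such as OpenCV's):

```python
import numpy as np

def erode(img, k):
    """Binary erosion with a k x k all-ones kernel: a pixel survives only
    if every pixel in its k x k neighbourhood is set."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.full_like(img, 255)
    for dy in range(k):
        for dx in range(k):
            # running minimum over all k*k shifted views
            out = np.minimum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def dilate(img, k):
    """Binary dilation with a k x k all-ones kernel: a pixel is set if any
    pixel in its k x k neighbourhood is set."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            # running maximum over all k*k shifted views
            out = np.maximum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out
```

Eroding first with a small kernel removes isolated noise pixels; dilating afterwards with a larger kernel merges the surviving fragments of the moving target back into one blob, which is why the claim requires the second kernel to be larger.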
4. The machine-vision-based image tracking and positioning method according to claim 1, characterized in that step S03 further comprises the following contour detection steps:
Step S31: progressively scan the dilated frame difference image until a non-zero point is found, and set that point as the boundary starting point;
Step S32: scan the adjacent non-zero points in a counter-clockwise direction, taking each new non-zero point as the next scan starting point;
Step S33: repeat step S32 until the boundary starting point is reached again, yielding one complete contour;
Step S34: set all pixels inside the contour in the dilated frame difference image to 0, and repeat from step S31 until no non-zero point remains in the dilated frame difference image.
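The scan-trace-clear loop of steps S31 to S34 can be approximated with a flood fill that collects whole connected regions instead of tracing only their boundaries; this is a simplified stand-in for the patent's border-following procedure, not the procedure itself:

```python
import numpy as np
from collections import deque

def largest_region(fd):
    """Simplified stand-in for steps S31-S34: scan for a non-zero seed,
    flood-fill its whole 8-connected region, zero it out, and repeat until
    the frame difference image is empty.  Returns the pixel coordinates
    of the largest region (the detected moving target)."""
    img = fd.copy()
    best = []
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x]:                          # S31: found a new seed point
                region, q = [], deque([(y, x)])
                img[y, x] = 0
                while q:                           # S32/S33: expand over 8-neighbours
                    cy, cx = q.popleft()
                    region.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and img[ny, nx]:
                                img[ny, nx] = 0    # S34: clear visited pixels
                                q.append((ny, nx))
                if len(region) > len(best):
                    best = region
    return best
```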
5. The machine-vision-based image tracking and positioning method according to claim 1, characterized in that in step S04 the coordinate transfer matrix is a 3*3 matrix; if the coordinate transfer matrix is M = [[m11 m12 m13], [m21 m22 m23], [m31 m32 m33]], then the shift position pm(v, w) is calculated as:
v = (m11*x + m12*y + m13) / (m31*x + m32*y + m33)    (4)
w = (m21*x + m22*y + m23) / (m31*x + m32*y + m33)    (5)
The zoom transfer matrix is a 1*3 matrix; if the zoom transfer matrix is S = [s1 s2 s3], then the zoom factor β is calculated as:
β = s1*x + s2*y + s3    (6)
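The perspective conversion and the zoom formula β = s1*x + s2*y + s3 amount to a matrix-vector product in homogeneous coordinates followed by a perspective division; a sketch (the function name is illustrative):

```python
import numpy as np

def to_pan_tilt(M, S, x, y):
    """Map a panorama-image point (x, y) to the pan-tilt position (v, w)
    via the 3x3 coordinate transfer matrix M, and to the zoom factor beta
    via the 1x3 zoom transfer matrix S."""
    p = M @ np.array([x, y, 1.0])       # homogeneous perspective transform
    v, w = p[0] / p[2], p[1] / p[2]     # perspective division
    beta = float(S @ np.array([x, y, 1.0]))  # zoom is a plain linear map
    return v, w, beta
```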
6. The machine-vision-based image tracking and positioning method according to claim 1, characterized in that in step S04 the coordinate transfer matrix M and the zoom transfer matrix S are generated by calibrating the panoramic camera and the pan-tilt camera, comprising the following steps:
Step S41: choose the four vertices of the image tracking region of the panoramic camera, namely pc1(x1, y1), pc2(x2, y2), pc3(x3, y3), pc4(x4, y4);
Step S42: adjust the shooting position of the pan-tilt camera so that its shooting focus is aimed at pc1, pc2, pc3, pc4 in turn, obtaining the corresponding pan-tilt shooting positions pm1(v1, w1), pm2(v2, w2), pm3(v3, w3), pm4(v4, w4) and zoom factors β1, β2, β3, β4;
Step S43: substitute the four vertices pc1(x1, y1), pc2(x2, y2), pc3(x3, y3), pc4(x4, y4) of the image tracking region and the pan-tilt shooting positions pm1(v1, w1), pm2(v2, w2), pm3(v3, w3), pm4(v4, w4) of these four points into the following perspective transform formulas:
vi = (m11*xi + m12*yi + m13) / (m31*xi + m32*yi + m33)
wi = (m21*xi + m22*yi + m23) / (m31*xi + m32*yi + m33),  i = 1, …, 4    (7)
A system of linear equations is obtained, whose solution gives the value of the coordinate transfer matrix M = [[m11 m12 m13], [m21 m22 m23], [m31 m32 m33]];
substitute the zoom factors β1, β2, β3, β4 and the four vertices pc1(x1, y1), pc2(x2, y2), pc3(x3, y3), pc4(x4, y4) of the image tracking region into the following transform formula:
βi = s1*xi + s2*yi + s3,  i = 1, …, 4    (8)
A system of linear equations is obtained, whose solution gives the value of the zoom transfer matrix S = [s1 s2 s3].
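The calibration of step S43 reduces to two linear solves; a sketch assuming NumPy, fixing m33 = 1 (the usual homography normalization, an assumption on my part since the patent does not state which scale it fixes):

```python
import numpy as np

def calibrate(pc, pm, betas):
    """Solve for the coordinate transfer matrix M and the zoom transfer
    matrix S from the four calibration pairs of steps S41/S42.
    pc: four panorama points (x, y); pm: the matching pan-tilt positions
    (v, w); betas: the matching zoom factors."""
    # M: with m33 fixed to 1, the perspective transform gives two linear
    # equations per point pair, i.e. an 8x8 system in m11..m32.
    A, b = [], []
    for (x, y), (v, w) in zip(pc, pm):
        A.append([x, y, 1, 0, 0, 0, -v * x, -v * y]); b.append(v)
        A.append([0, 0, 0, x, y, 1, -w * x, -w * y]); b.append(w)
    m = np.linalg.solve(np.array(A, float), np.array(b, float))
    M = np.append(m, 1.0).reshape(3, 3)
    # S: beta = s1*x + s2*y + s3 is linear, so the four points give an
    # overdetermined 4x3 system solved in the least-squares sense.
    B = np.array([[x, y, 1.0] for x, y in pc])
    S, *_ = np.linalg.lstsq(B, np.array(betas, float), rcond=None)
    return M, S
```

With four vertices in general position (no three collinear, which a rectangular tracking region satisfies), the 8x8 system has a unique solution.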
7. An image tracking and positioning system based on machine vision, characterized in that the system comprises:
a panoramic camera unit for obtaining captured video image data and sending it to the control unit;
a pan-tilt camera unit for receiving tracking target data and performing target tracking and shooting;
a control unit for obtaining the video image data and performing inter-frame differencing between the grayscale image of the previous frame and the grayscale image of the current frame obtained by shooting; performing morphological erosion on the frame difference image with a first kernel, then morphological dilation on the eroded image with a second kernel;
detecting all outer contours in the dilated frame difference image, taking the center of the bounding rectangle of the largest contour, converting its coordinates with the coordinate transfer matrix to obtain the required shift position, then converting the bounding rectangle center with the zoom transfer matrix to obtain the zoom factor; then replacing the grayscale image of the previous frame with the grayscale image of the current frame, while transmitting the detected tracking target shift position and zoom factor to the pan-tilt camera unit.
8. The machine-vision-based image tracking and positioning system according to claim 7, characterized in that
a panoramic camera is provided in the panoramic camera unit, the panoramic camera being incapable of zooming or moving and capable of panoramic shooting;
a pan-tilt camera is provided in the pan-tilt camera unit, the pan-tilt camera being capable of zooming and moving.
CN201811038421.8A 2018-09-06 2018-09-06 Image tracking and positioning method and system based on machine vision Active CN109146927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811038421.8A CN109146927B (en) 2018-09-06 2018-09-06 Image tracking and positioning method and system based on machine vision


Publications (2)

Publication Number Publication Date
CN109146927A true CN109146927A (en) 2019-01-04
CN109146927B CN109146927B (en) 2021-08-27

Family

ID=64827457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811038421.8A Active CN109146927B (en) 2018-09-06 2018-09-06 Image tracking and positioning method and system based on machine vision

Country Status (1)

Country Link
CN (1) CN109146927B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070263943A1 (en) * 2006-05-15 2007-11-15 Seiko Epson Corporation Defective image detection method and storage medium storing program
CN101626489A (en) * 2008-07-10 2010-01-13 苏国政 Method and system for intelligently identifying and automatically tracking objects under unattended condition
CN102148965A (en) * 2011-05-09 2011-08-10 上海芯启电子科技有限公司 Video monitoring system for multi-target tracking close-up shooting
CN102902945A (en) * 2012-09-28 2013-01-30 南京汇兴博业数字设备有限公司 Distortion correction method of outer contour based on quick response matrix code
CN103024276A (en) * 2012-12-17 2013-04-03 沈阳聚德视频技术有限公司 Positioning and focusing method of pan-tilt camera
CN104574359A (en) * 2014-11-03 2015-04-29 南京邮电大学 Student tracking and positioning method based on primary and secondary cameras


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462186A (en) * 2020-04-03 2020-07-28 天津理工大学 Infrared target detection and tracking integrated algorithm based on extension immunity
CN111462186B (en) * 2020-04-03 2022-04-15 天津理工大学 Infrared target detection and tracking integrated algorithm based on extension immunity
CN111525957A (en) * 2020-05-12 2020-08-11 浙江大学 Visible light communication automatic capturing, tracking and aiming system based on machine vision
CN111525957B (en) * 2020-05-12 2021-12-17 浙江大学 Machine vision-based visible light communication automatic capturing, tracking and aiming method and system
WO2022082711A1 (en) * 2020-10-23 2022-04-28 中科传启(苏州)科技有限公司 Myopia prevention method applicable to electronic device, myopia prevention electronic device, and myopia prevention tablet
CN114897762A (en) * 2022-02-18 2022-08-12 众信方智(苏州)智能技术有限公司 Automatic positioning method and device for coal mining machine on coal mine working face


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Zhao Dingjin; Zhu Zhenghui; Zhang Changhua
Inventor before: Zhao Dingjin; Zhu Zhenghui; Zhang Changhua; Ming De
GR01 Patent grant
CP03 Change of name, title or address
Address after: No. 56 Nanli East Road, Shiqi Town, Panyu District, Guangzhou City, Guangdong Province, 510000
Patentee after: Guangdong Baolun Electronics Co.,Ltd.
Address before: 510000 Building 1, industrial zone B, Zhongcun street, Panyu District, Guangzhou City, Guangdong Province
Patentee before: GUANGZHOU ITC ELECTRONIC TECHNOLOGY Co.,Ltd.