CN103826105A - Video tracking system and realizing method based on machine vision technology - Google Patents

Info

Publication number
CN103826105A
CN103826105A (application number CN201410094016.3A)
Authority
CN
China
Prior art keywords
target
image
cluster
machine vision
centroid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410094016.3A
Other languages
Chinese (zh)
Inventor
刘紫燕
冯亮
祁佳
罗超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou University
Original Assignee
Guizhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou University
Priority to CN201410094016.3A
Publication of CN103826105A
Legal status: Pending

Abstract

The invention discloses a video tracking system based on machine vision technology. The system comprises a camera, a pan-tilt motor, an SDRAM (synchronous dynamic random access memory) memory, a display, a pan-tilt driver module, and a field-programmable gate array (FPGA). The FPGA is connected by wires to the camera, the SDRAM memory, the display, and the pan-tilt driver module, and the pan-tilt driver module is connected by a wire to the pan-tilt motor. The system is low in cost, high in flexibility, and short in development cycle; it solves the problem that prior-art video tracking systems cannot achieve an effective balance among processing speed, efficiency, cost, development difficulty, energy consumption, and volume, and it makes video tracking more intelligent and convenient.

Description

Video tracking system based on machine vision technology and implementation method thereof
Technical field
The invention belongs to the technical field of visual monitoring, and in particular relates to a video tracking system based on machine vision technology and an implementation method thereof.
Background art
With the rapid development of computer technology, digital image technology has found wide application in fields such as industrial production, safety monitoring, consumer electronics, and intelligent transportation. Video tracking is a new technology that has grown up on this basis; it is used to realize machine vision for artificial intelligence and has broad application prospects in fields such as vision manipulators, assembly-line workpiece sorting, defective-product detection, vehicle tracking, and intelligent security. The main implementation platforms for current video tracking systems are the PC platform, the ARM+DSP platform, and the ASIC (application-specific integrated circuit) platform, which can be realized in pure software or in hardware. For simplicity and to save cost, most functions are generally implemented in software. However, each of these platforms has its own advantages and disadvantages. Video tracking is a hard problem because of its large data volume, complex algorithms, and high stability requirements, and existing video tracking systems find it difficult to reach an effective balance among processing speed, efficiency, cost, development difficulty, energy consumption, and volume.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video tracking system based on machine vision technology and an implementation method thereof, so as to solve the problem that prior-art video tracking systems are difficult to balance effectively among processing speed, efficiency, cost, development difficulty, energy consumption, and volume.
Technical solution of the present invention:
A video tracking system based on machine vision technology comprises a camera, a pan-tilt motor, an SDRAM memory, a display, a pan-tilt driver module, and a field-programmable gate array (FPGA). The FPGA is connected by wires to the camera, the SDRAM memory, the display, and the pan-tilt driver module, and the pan-tilt driver module is connected by a wire to the pan-tilt motor.
The camera is a CMOS image sensor.
The multiport SDRAM controller has four ports.
The pan-tilt driver module comprises a photoelectric coupling isolation circuit, which is connected by a wire to a drive amplification circuit.
The pan-tilt motor consists of two stepping motors responsible for horizontal and vertical rotation, respectively, and carries the camera.
The FPGA comprises an image capture module, a multiport SDRAM controller, a VGA image display module, and a video image processing module. The multiport SDRAM controller is connected to the image capture module, the VGA image display module, and the video image processing module by digital logic circuits, and is connected to the SDRAM memory by wires.
An implementation method for the video tracking system based on machine vision technology comprises the following steps:
Step 1, capture and store the video images from the camera;
Step 2, detect the captured video images using hierarchical cluster analysis and the frame-difference method, and extract the moving target;
Step 3, locate the detected target using the centroid method and calculate the coordinates of the target centroid;
Step 4, calculate the motion trajectory of the tracked target's centroid and determine the position where the tracked target will appear at the next moment;
Step 5, calculate in real time the offset of the target centroid relative to the center of the monitoring display, and drive the pan-tilt stepping motors so that the camera rotates with the tracked target, realizing real-time tracking.
The capture and storage of the camera's video images described in step 1 proceeds as follows: first, the I2C sensor configuration module drives the CMOS image sensor to capture video image data; then the CMOS sensor data acquisition module receives the video image data and converts it to RGB color-space format; finally, the multiport SDRAM controller saves the converted video image data to the SDRAM memory.
The extraction of the moving target described in step 2 proceeds as follows: read from the SDRAM memory two images of the video sequence separated by one or several frames and compute their difference, obtaining the absolute value of the luminance difference between the two frames; if this absolute value is greater than a threshold, it is determined that there is object motion in the image sequence. Once a moving target has been determined, the video image that was read is median-filtered and its gray values are subjected to hierarchical cluster analysis; finally, edge detection of the target is realized by analyzing the result of the hierarchical clustering with an adaptive threshold. The edge detection of the target in the video image by hierarchical cluster analysis comprises the following steps:
1. Obtain the grayscale image, apply median filtering, and set a template;
2. Compute the absolute difference d between the gray value of the template's center pixel and that of each surrounding pixel, and divide the pixels into clusters;
3. Compare d with a set threshold T, performing the first, divisive, level of hierarchical clustering:
1) if d ≥ T, set the pixel to 1 and assign it to cluster A;
2) if d < T, set the pixel to 0 and assign it to cluster B;
4. Merge the elements of cluster A and cluster B to obtain the agglomerative cluster C, whose elements are c1, c2, …;
5. Compute the mean m of the gray values of all pixels in cluster C;
6. Sort the elements of cluster C in ascending order to obtain a new sequence s1 ≤ s2 ≤ …;
7. Take the average of the gray values of the 3rd and 4th elements of the sorted sequence as the adaptive threshold Ta;
8. Compare the mean m of the gray values of all pixels in cluster C with the adaptive threshold Ta, performing the second, agglomerative, level of hierarchical clustering:
1) if m > Ta, the points in the cluster are judged to be edge points;
2) if m ≤ Ta, the points in the cluster are judged not to be edge points.
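The frame-difference test that opens step 2 can be sketched in software. This is an illustrative Python model, not the FPGA implementation; the 4 × 4 frames and the threshold value of 30 are arbitrary choices for the example.

```python
def motion_detected(frame_a, frame_b, threshold):
    """Frame differencing: flag the pixels where the absolute
    luminance difference between two frames exceeds a threshold,
    and report whether any motion was found at all."""
    mask = [[abs(a - b) > threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
    return any(any(row) for row in mask), mask

# Two 4x4 luminance frames one (or several) frames apart:
# a small bright object moves one pixel to the right.
prev = [[10, 10, 10, 10],
        [10, 200, 10, 10],
        [10, 10, 10, 10],
        [10, 10, 10, 10]]
curr = [[10, 10, 10, 10],
        [10, 10, 200, 10],
        [10, 10, 10, 10],
        [10, 10, 10, 10]]

moving, mask = motion_detected(prev, curr, threshold=30)
```

The difference mask marks both the position the object left and the position it entered, which is why the patent follows differencing with median filtering and clustering before edge detection.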
The location of the detected target by the centroid method described in step 3, calculating the coordinates of the target centroid, comprises the following steps: 1. use a counter N to count the number of pixels whose value is "1" in the binary image f(x, y) obtained after target detection; 2. use two registers Sx and Sy to store the accumulated coordinates of the pixels whose value is 1; 3. use two counters x and y to represent the coordinates of the current pixel; 4. after a frame has been scanned, obtain the centroid abscissa of the current image as xc = Sx / N and the centroid ordinate as yc = Sy / N.
The calculation of the motion trajectory of the tracked target's centroid described in step 4, determining the position where the tracked target will appear at the next moment, comprises the following steps: 1. initialize the state transition matrix A, the measurement matrix H, and the error covariance P, and compute from the target's initial position and velocity the initial state together with the noise covariances Q and R; 2. substitute the target's state variable at the previous moment and the observation variable at the current moment into the basic Kalman filter equations, updating the filter gain K, the error covariance P, and the predicted position; 3. use the predicted target position as the prior estimate for the next moment, update the Kalman filter state, and iterate in a loop.
The real-time calculation described in step 5 of the offset of the target centroid relative to the center of the monitoring display, driving the pan-tilt stepping motors so that the camera rotates with the tracked target and realizes real-time tracking, proceeds as follows: calculate in real time the motion vector (Δx, Δy) of the tracked target's centroid relative to the center of the monitoring display; when Δx or Δy exceeds the set range h or v, generate the PWM motor-control pulse signal for the X or Y direction respectively and send it to the motor driver, which delivers it through the drive circuit to each phase winding of the stepping motors, realizing horizontal and vertical rotation of the pan-tilt so that the tracked target always remains at the center of the monitored picture.
Beneficial effects of the present invention:
The implementation of the video tracking system of the present invention combines hierarchical clustering with the frame-difference method, improving the precision of target detection; the method of calculating the target centroid finds the tracked target's centroid quickly; and the Kalman filter's prediction of the target's potential position at the next moment solves the tracking of a moving target under occlusion. Compared with existing video tracking systems, the present invention is low in cost, high in flexibility, and short in development cycle, making up for shortcomings of conventional video tracking systems such as high price and large volume. Applying machine vision technology to the video tracking system overcomes the poor real-time performance and low efficiency caused by human factors in conventional systems, making tracking more intelligent and convenient, and solves the problem that prior-art video tracking systems are difficult to balance effectively among processing speed, efficiency, cost, development difficulty, energy consumption, and volume.
Brief description of the drawings:
Fig. 1 is a block diagram of the video tracking system of the present invention;
Fig. 2 is a schematic diagram of video image capture, storage, and display of the present invention;
Fig. 3 is a schematic block diagram of the two 4-port SDRAM controllers of the present invention;
Fig. 4 is a flowchart of the video image processing of the present invention;
Fig. 5 is a block diagram of the pan-tilt motor control principle of the present invention.
Detailed description of the embodiments:
A video tracking system based on machine vision technology comprises a camera, a pan-tilt motor, an SDRAM memory, a display, a pan-tilt driver module, and a field-programmable gate array (FPGA). The FPGA is connected by wires to the camera, the SDRAM memory, the display, and the pan-tilt driver module, and the pan-tilt driver module is connected by a wire to the pan-tilt motor.
The FPGA comprises an image capture module, a multiport SDRAM controller, a VGA image display module, and a video image processing module. The multiport SDRAM controller is connected to the image capture module, the VGA image display module, and the video image processing module by digital logic circuits, and is connected to the SDRAM memory by wires.
The multiport SDRAM controller adopts four ports, which suffice for the application of the present invention.
The camera is a CMOS image sensor.
The pan-tilt driver module comprises a photoelectric coupling isolation circuit, which is connected by a wire to a drive amplification circuit.
The pan-tilt motor consists of two stepping motors responsible for horizontal and vertical rotation, respectively, and carries the camera.
An implementation method for the video tracking system based on machine vision technology comprises the following steps:
Step 1, capture and store the video images from the camera;
Step 2, detect the captured video images using hierarchical cluster analysis and the frame-difference method, and extract the moving target;
Step 3, locate the detected target using the centroid method and calculate the coordinates of the target centroid;
Step 4, calculate the motion trajectory of the tracked target's centroid and determine the position where the tracked target will appear at the next moment;
Step 5, calculate in real time the offset of the target centroid relative to the center of the monitoring display, and drive the pan-tilt stepping motors so that the camera rotates with the tracked target, realizing real-time tracking.
The capture and storage of the camera's video images described in step 1 proceeds as follows: first, the I2C sensor configuration module drives the CMOS image sensor to capture image data; then the CMOS sensor data acquisition module receives the data and converts it to RGB color-space format; finally, the multiport SDRAM controller saves the converted image data to the SDRAM memory.
Specifically, as shown in Fig. 2, after the system powers up, the I2C sensor configuration module integrated in the FPGA initializes the CMOS image sensor; the CMOS sensor data acquisition module captures real-time image data from the CMOS image sensor; the Bayer-pattern-to-RGB module converts the captured Bayer-pattern image data to RGB color-space format; and the multiport SDRAM controller saves the converted RGB image data in the SDRAM memory. At the same time, one output port of the multiport SDRAM controller feeds the buffered image data to the VGA image display module, which displays the image on the display connected to it, while another output port feeds the video image processing module for image preprocessing and for the detection, tracking, and location of the moving target.
In this video tracking system, the camera adopts the TRDB-D5M camera provided by Terasic. The TRDB-D5M camera has 5 megapixels and can be connected to the FPGA mainboard through the general-purpose input/output port (GPIO). In addition, the TRDB-D5M supports manual focusing, which avoids large differences in image quality at different distances.
The multiport SDRAM controller uses multiple FIFOs (first in, first out) opened up from the FPGA's on-chip resources as read/write caches for the video image data, so that multiple caches can access image data in the SDRAM memory chips. The present invention completes video image display and video image processing with two SDRAM memory chips and two 4-port SDRAM controllers, as shown in Fig. 3. Specifically, the real-time video image data collected by the CMOS image sensor is divided into two parts: one part, the 10-bit red channel R together with the high 5 bits of the green channel G, is written into the SDRAM 1 memory chip through write FIFO1 of 4-port SDRAM controller u1; the other part, the low 5 bits of the green channel G together with the 10-bit blue channel B, is written into the SDRAM 2 memory chip through write FIFO1 of 4-port SDRAM controller u2. The VGA image display module reads through read FIFO1 of controller u1 and read FIFO1 of controller u2 to display the image, and the video image processing module completes real-time video image processing through read FIFO2 of controller u1 and read FIFO2 of controller u2.
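The split of a 30-bit RGB pixel across the two memory chips (10-bit R plus the high 5 bits of G into one chip, the low 5 bits of G plus 10-bit B into the other) can be modelled with a few bit operations. A Python sketch under the assumption of 10 bits per channel; the word layout is illustrative, not taken from the patent's HDL:

```python
def split_rgb30(r, g, b):
    """Pack 10-bit R, G, B channels into two 15-bit words, mirroring
    the dual 4-port SDRAM controller scheme:
    word1 = R and the high 5 bits of G; word2 = the low 5 bits of G and B."""
    assert 0 <= r < 1024 and 0 <= g < 1024 and 0 <= b < 1024
    word1 = (r << 5) | (g >> 5)       # 10 bits of R, then G[9:5]
    word2 = ((g & 0x1F) << 10) | b    # G[4:0], then 10 bits of B
    return word1, word2

def merge_rgb30(word1, word2):
    """Inverse operation: recover the original 10-bit channels."""
    r = word1 >> 5
    g = ((word1 & 0x1F) << 5) | (word2 >> 10)
    b = word2 & 0x3FF
    return r, g, b
```

A round trip `merge_rgb30(*split_rgb30(r, g, b))` returns the original channels, which is the property the display path relies on when it reassembles pixels from the two chips.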
The VGA image display module mainly generates the control signals for the display and outputs the display data in the display buffer to the display according to the requirements of the display timing. In the present invention, the VGA output uses a 16-pin D-SUB connector supporting VGA; the Cyclone II series FPGA chip from Altera directly provides the VGA synchronization signals, while the three 10-bit high-speed video DACs in the digital-to-analog conversion chip ADV7123 serve as generators of the analog data signals (R, G, and B). Combined, these circuits can support display at a resolution of 1600 × 1200 pixels.
The extraction of the moving target described in step 2 proceeds as follows: first read from the SDRAM memory two images of the video sequence separated by one or several frames and compute their difference, obtaining the absolute value of the luminance difference between the two frames; if this absolute value is greater than a threshold, it is determined that there is object motion in the image sequence. Once a moving target has been determined, the video image that was read is median-filtered and its gray values are subjected to hierarchical cluster analysis; finally, edge detection of the target in the video image is realized by analyzing the result of the hierarchical clustering with an adaptive threshold. The edge detection of the target in the video image by hierarchical cluster analysis comprises the following steps:
1. Obtain the grayscale image, apply median filtering, and set a template;
2. Compute the absolute difference d between the gray value of the template's center pixel and that of each surrounding pixel, and divide the pixels into clusters;
3. Compare d with a set threshold T, performing the first, divisive, level of hierarchical clustering:
1) if d ≥ T, set the pixel to 1 and assign it to cluster A;
2) if d < T, set the pixel to 0 and assign it to cluster B;
4. Merge the elements of cluster A and cluster B to obtain the agglomerative cluster C, whose elements are c1, c2, …;
5. Compute the mean m of the gray values of all pixels in cluster C;
6. Sort the elements of cluster C in ascending order to obtain a new sequence s1 ≤ s2 ≤ …;
7. Take the average of the gray values of the 3rd and 4th elements of the sorted sequence as the adaptive threshold Ta;
8. Compare the mean m of the gray values of all pixels in cluster C with the adaptive threshold Ta, performing the second, agglomerative, level of hierarchical clustering:
1) if m > Ta, the points in the cluster are judged to be edge points;
2) if m ≤ Ta, the points in the cluster are judged not to be edge points.
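The two-level clustering of steps 1-8 can be modelled per template window in a few lines. This is an illustrative Python sketch under several assumptions not fixed by the original text: a 3 × 3 template, an arbitrary first-level threshold T = 40, step 8 comparing the mean of the original gray values against the adaptive threshold, and a strict inequality for the edge decision.

```python
def is_edge_pixel(window):
    """Decide whether the center of a 3x3 grayscale window is an edge
    point, following the two-level hierarchical clustering of steps 1-8.
    `window` is a 3x3 list of gray values (median filtering assumed done)."""
    center = window[1][1]
    neighbors = [window[i][j] for i in range(3) for j in range(3)
                 if (i, j) != (1, 1)]
    T = 40  # first-level threshold (arbitrary value for this sketch)
    # First (divisive) level: cluster A gets diffs >= T, cluster B the rest.
    diffs = [abs(center - p) for p in neighbors]
    cluster_a = [p for p, d in zip(neighbors, diffs) if d >= T]
    cluster_b = [p for p, d in zip(neighbors, diffs) if d < T]
    # Second (agglomerative) level: merge A and B back into cluster C.
    cluster_c = cluster_a + cluster_b
    mean_c = sum(cluster_c) / len(cluster_c)
    # Adaptive threshold: average of the 3rd and 4th smallest gray values.
    s = sorted(cluster_c)
    t_adapt = (s[2] + s[3]) / 2
    return mean_c > t_adapt

# A window straddling a step edge is flagged; a flat window is not.
edge_window = [[0, 0, 255],
               [0, 0, 255],
               [0, 0, 255]]
flat_window = [[10, 10, 10],
               [10, 10, 10],
               [10, 10, 10]]
```

On `edge_window` the sorted neighborhood is dominated by low values, so the adaptive threshold falls well below the cluster mean and the center is judged an edge point; on `flat_window` mean and threshold coincide and no edge is reported.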
The location of the detected target by the centroid method described in step 3, calculating the coordinates of the target centroid, comprises the following steps: 1. use a counter N to count the number of pixels whose value is "1" in the binary image f(x, y) obtained after target detection; 2. use two registers Sx and Sy to store the accumulated coordinates of the pixels whose value is 1; 3. use two counters x and y to represent the coordinates of the current pixel; 4. after a frame has been scanned, obtain the centroid abscissa xc and the centroid ordinate yc of the current image. Specifically, for a two-dimensional image (length × width), the extraction of the moving target's centroid is realized by the following formulas (1) and (2):

xc = Σx,y x·f(x, y) / Σx,y f(x, y) (1)

yc = Σx,y y·f(x, y) / Σx,y f(x, y) (2)

where f(x, y) is the preprocessed binary image, whose value is 0 or 1, and x and y are the abscissa and ordinate of the pixel. When the FPGA computes the moving target's centroid, a counter N accumulates the number of pixels in f(x, y) whose value is "1", two registers Sx and Sy store the accumulated coordinates of the pixels whose value is 1, and two counters x and y represent the coordinates of the current pixel. Whenever the value read is 1, formulas (3), (4), and (5) are evaluated:

N = N + 1 (3)

Sx = Sx + x (4)

Sy = Sy + y (5)

After a frame of image data has been scanned, the centroid of the current image is obtained as shown in formulas (6) and (7):

xc = Sx / N (6)

yc = Sy / N (7)
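The counter/register scheme of formulas (3)-(7) can be checked with a short software model. This is an illustrative Python sketch of the scan, not the FPGA logic:

```python
def centroid(binary_image):
    """Scan a binary image the way the FPGA does: a counter n counts
    the '1' pixels, registers sx/sy accumulate their coordinates
    (formulas (3)-(5)), then one division per axis gives the centroid
    (formulas (6)-(7))."""
    n = sx = sy = 0
    for y, row in enumerate(binary_image):
        for x, value in enumerate(row):
            if value == 1:
                n += 1       # formula (3)
                sx += x      # formula (4)
                sy += y      # formula (5)
    return sx / n, sy / n    # formulas (6) and (7)

# A 2x2 block of '1' pixels whose centroid lies at (1.5, 1.5).
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
```

Because only an increment and two additions happen per pixel, the whole computation fits in one pass over the frame, which is what makes it attractive for streaming hardware.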
After the centroid (xc, yc) of the moving target has been determined, Kalman filtering is adopted to analyze the motion trajectory of the tracked target's centroid and predict the position coordinates where the tracked target's centroid may appear at the next moment, avoiding loss of the tracked target when it is occluded.
The process of analyzing the motion trajectory of the tracked target's centroid with the Kalman filter is as follows:
1) Initialize the state transition matrix A, the measurement matrix H, and the error covariance P, and compute from the target's initial position and velocity the initial state together with the noise covariances Q and R.
2) Substitute the target's state variable at the previous moment and the observation variable at the current moment into the basic Kalman filter equations, updating the filter gain K, the error covariance P, and the predicted position.
3) Use the predicted target position as the prior estimate for the next moment, update the Kalman filter state, and iterate in a loop.
The essence of the FPGA realization of Kalman filtering is to control the transfer and storage of data and to realize matrix operations such as addition, subtraction, multiplication, and inversion; the transfer control of the data is realized with a finite state machine (FSM). In this system the Kalman filter computation is completed entirely in the FPGA: the RAM and ROM use embedded hardware memory blocks, where the RAM temporarily stores the intermediate results of each step and the ROM stores the fixed coefficients of the filter, such as the observation matrix and the noise coefficient matrices, while the computation itself mainly uses resources such as the embedded hard multipliers.
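The predict/update cycle of steps 1)-3) can be modelled in software for one axis of the centroid using a constant-velocity state [position, velocity]. This is an illustrative Python sketch; the time step, noise values q and r, and the initial covariance are arbitrary choices for the example, not the coefficients stored in the FPGA's ROM.

```python
def kalman_track(measurements, dt=1.0, q=1e-4, r=1.0):
    """1-D constant-velocity Kalman filter over a list of position
    measurements. Returns, after each update, the predicted position
    for the next moment (the prior estimate of step 3)."""
    # A = [[1, dt], [0, 1]] (state transition), H = [1, 0] (measure position)
    x = [measurements[0], 0.0]    # initial position, zero initial velocity
    P = [[1.0, 0.0], [0.0, 1.0]]  # error covariance
    predictions = []
    for z in measurements[1:]:
        # Predict: x = A x, P = A P A^T + Q (Q = q on the diagonal)
        xp = [x[0] + dt * x[1], x[1]]
        Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
               P[0][1] + dt * P[1][1]],
              [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update: gain K = Pp H^T / (H Pp H^T + R)
        s = Pp[0][0] + r
        K = [Pp[0][0] / s, Pp[1][0] / s]
        innov = z - xp[0]                       # observation residual
        x = [xp[0] + K[0] * innov, xp[1] + K[1] * innov]
        P = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
        predictions.append(x[0] + dt * x[1])    # prior for the next moment
    return predictions

# A target moving at about 2 px per frame: successive next-position
# predictions converge toward the true trajectory.
preds = kalman_track([0.0, 2.0, 4.0, 6.0, 8.0])
```

The prediction made before each new measurement is what the tracker falls back on when the target is occluded, which is how step 4 avoids losing the target.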
In summary, the processing flow of the video image processing module (shown in Fig. 4) is: the video image processing module reads images from the SDRAM memory through the multiport SDRAM controller; after image preprocessing, target detection, target location, and centroid confirmation, target tracking is carried out, and finally pan-tilt control is realized by PWM control signals.
The pan-tilt control principle of the present invention is shown in Fig. 5. The pan-tilt is composed of two type-57 two-phase four-wire hybrid stepping motors, which rotate the camera in the horizontal and vertical directions respectively so that the tracked target always remains at the center of the monitored picture. The operation of a stepping motor requires pulse signals and a driver: according to the output control signals of the video image processing, the FPGA outputs PWM motor-control pulse signals through a PWM generation circuit to drive the driver, which sends them to each phase winding of the stepping motor, thus driving the motor. In this example the driver is designed around the THB7128 chip of Beijing Jiekelida Electromechanical Technology Co., Ltd. The THB7128 is a professional two-phase stepping-motor driver chip with integrated photoelectric coupling and drive amplification circuits; with a simple peripheral circuit it achieves high performance, fine microstepping, and large drive current, and is suitable for driving type-42 and type-57 two-phase and four-phase hybrid stepping motors.
The essence of the pan-tilt driving the camera's rotation is to calculate the motion vector (Δx, Δy) of the tracked target's centroid (xc, yc) relative to the center (x0, y0) of the monitoring display. When Δx or Δy exceeds the set range h or v, PWM motor-control pulse signals for the X or Y direction are generated respectively and given to the driver, which delivers them through the drive amplification circuit to the stepping motors, specifically:
when Δx > h, the pan-tilt stepping motor turns the camera to the right;
when Δx < -h, the pan-tilt stepping motor turns the camera to the left;
when Δy > v, the pan-tilt stepping motor turns the camera up;
when Δy < -v, the pan-tilt stepping motor turns the camera down.
In this way the camera tracks the target in real time and the target is monitored in real time.
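The four-way decision above amounts to a dead-band comparator on the centroid offset. An illustrative Python sketch; the sign conventions (positive Δy meaning "up") and the example thresholds are assumptions, since the original comparison expressions were lost:

```python
def pan_tilt_commands(centroid, center, h, v):
    """Decide which stepping-motor pulses to issue from the offset
    (dx, dy) of the target centroid relative to the display center.
    Returns the set of rotation directions; empty inside the dead band."""
    dx = centroid[0] - center[0]
    dy = centroid[1] - center[1]
    commands = set()
    if dx > h:
        commands.add("right")
    elif dx < -h:
        commands.add("left")
    if dy > v:
        commands.add("up")
    elif dy < -v:
        commands.add("down")
    return commands
```

The dead band (h, v) keeps the motors from chattering when the target centroid hovers near the picture center; only offsets beyond it produce PWM pulses.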

Claims (10)

1. A video tracking system based on machine vision technology, comprising a camera, a pan-tilt motor, an SDRAM memory, a display, a pan-tilt driver module, and a field-programmable gate array (FPGA), characterized in that: the FPGA is connected by wires to the camera, the SDRAM memory, the display, and the pan-tilt driver module, and the pan-tilt driver module is connected by a wire to the pan-tilt motor.
2. The video tracking system based on machine vision technology according to claim 1, characterized in that: the multiport SDRAM controller has four ports.
3. The video tracking system based on machine vision technology according to claim 1, characterized in that: the FPGA comprises an image capture module, a multiport SDRAM controller, a VGA image display module, and a video image processing module; the multiport SDRAM controller is connected to the image capture module, the VGA image display module, and the video image processing module by digital logic circuits, and is connected to the SDRAM memory by wires.
4. An implementation method for the video tracking system based on machine vision technology, comprising the following steps:
Step 1, capture and store the video images from the camera;
Step 2, detect the captured video images using hierarchical cluster analysis and the frame-difference method, and extract the moving target;
Step 3, locate the detected target using the centroid method and calculate the coordinates of the target centroid;
Step 4, calculate the motion trajectory of the tracked target's centroid and determine the position where the tracked target will appear at the next moment;
Step 5, calculate in real time the offset of the target centroid relative to the center of the monitoring display, and drive the pan-tilt stepping motors so that the camera rotates with the tracked target, realizing real-time tracking.
5. The implementation method of the video tracking system based on machine vision technology according to claim 4, characterized in that the capture and storage of the camera's video images described in step 1 proceeds as follows: first, the I2C sensor configuration module drives the CMOS image sensor to capture image data; then the CMOS sensor data acquisition module receives the image data and converts it to RGB color-space format; finally, the multiport SDRAM controller saves the converted image data to the SDRAM memory.
6. The implementation method of the video tracking system based on machine vision technology according to claim 4, characterized in that the method of extracting the moving target described in step 2 is: 1. read from the SDRAM memory two images of the video sequence separated by one or several frames and compute their difference, obtaining the absolute value of the luminance difference between the two frames; if this absolute value is greater than a threshold, it is determined that there is object motion in the image sequence; 2. once it has been determined that there is a moving target in the image sequence, the video image that was read is median-filtered and its gray values are subjected to hierarchical cluster analysis; finally, edge detection of the target is realized by analyzing the result of the hierarchical clustering with an adaptive threshold.
7. The implementation method of a video tracking system based on machine vision technology according to claim 6, characterized in that the target edge detection by hierarchical cluster analysis comprises the following steps (the symbols below stand in for the equation images of the original claim):
1, obtain the grey-level image, apply median filtering, and set up a template;
2, compute the absolute differences d_i between the grey value of the template's centre pixel and each of the surrounding pixels, and divide them into 8 clusters;
3, compare each d_i with the given threshold T and perform the first-division hierarchical clustering:
1) if d_i ≥ T, set b_i to 1 and classify it into cluster C1;
2) if d_i < T, set b_i to 0 and classify it into cluster C0;
4, merge the elements of clusters C1 and C0 to obtain the agglomerative cluster G, whose elements are g_i;
5, compute the mean value of the grey values g_i of all pixels in cluster G;
6, sort the elements of cluster G in ascending order, obtaining the new sequence S;
7, take the mean of the grey values of the 3rd and 4th pixels of S as the adaptive threshold T_a;
8, compare the grey values g_i of all pixels in cluster G with the adaptive threshold T_a and perform the second-division hierarchical clustering:
1) if g_i ≥ T_a, judge the points in this cluster to be edge points;
2) if g_i < T_a, judge the points in this cluster not to be edge points.
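One plausible reading of these steps, sketched in Python: per 3×3 template, the eight neighbour differences are binarised against the fixed threshold (first division), the merged cluster is sorted ascending, the adaptive threshold is taken as the mean of its 3rd and 4th elements, and the centre pixel is marked as an edge point when a difference clears both thresholds (second division). The cluster semantics are ambiguous in the machine-translated claim, so this is an interpretive sketch, not the patent's verified algorithm.

```python
def edge_map(img, t1):
    """Two-pass clustering edge detector (interpretive sketch of claim 7)."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            # step 2: absolute differences to the 8 surrounding pixels
            d = [abs(center - img[y + dy][x + dx])
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
            # steps 3-6: first division against t1, then merge and sort ascending
            s = sorted(d)
            # step 7: adaptive threshold = mean of the 3rd and 4th elements
            t2 = (s[2] + s[3]) / 2
            # step 8: second division; an edge if a difference clears both thresholds
            if any(v >= t1 and v >= t2 for v in d):
                edges[y][x] = 1
    return edges

# A vertical step edge between grey levels 0 and 200:
step = [[0, 0, 200, 200, 200] for _ in range(5)]
e = edge_map(step, t1=50)   # marks the two columns straddling the step
```

In flat regions every difference is 0, so neither threshold is cleared; at the step, the large differences dominate the sorted cluster and the pixel is marked.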
8. The implementation method of a video tracking system based on machine vision technology according to claim 4, characterized in that the centroid-based target localisation of step 3, which calculates the coordinates of the target's centre of form, comprises the following steps: 1, use a counter N to count the pixels whose value is "1" in the binary image f(x, y) produced by target detection; 2, use two registers S_x and S_y to accumulate the coordinates of the pixels whose value is 1; 3, use two counters x and y to hold the coordinates of the current pixel; 4, after a full frame has been scanned, the abscissa of the current image's centre of form is X_c = S_x / N and the ordinate is Y_c = S_y / N.
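The four steps above map directly onto a single-pass streaming accumulation. A sketch in Python (the variable names stand in for the counter N and registers S_x, S_y of the claim):

```python
def streaming_centroid(binary_frame):
    """Single-pass centre of form: count '1' pixels and accumulate their coordinates."""
    count = sum_x = sum_y = 0                # counter N and registers S_x, S_y
    for y, row in enumerate(binary_frame):   # y coordinate counter
        for x, v in enumerate(row):          # x coordinate counter
            if v == 1:
                count += 1
                sum_x += x
                sum_y += y
    if count == 0:
        return None                          # no target pixels in this frame
    return (sum_x / count, sum_y / count)    # (X_c, Y_c) after the frame scan
```

Because it needs only three accumulators updated per pixel, this formulation suits a hardware pipeline that processes the frame as it streams in, with the division performed once per frame.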
9. The implementation method of a video tracking system based on machine vision technology according to claim 4, characterized in that computing the motion trajectory of the tracked target's centre of form in step 4, and determining the position at which the tracked target will appear at the next moment, comprises the following steps: 1, initialise the state-transition matrix A, the measurement matrix H, the error covariance P, the target's initial position and velocity x_0, and the noise covariances Q and R; 2, substitute the target's state variable x_{k-1} from the previous moment and the current observation z_k into the fundamental Kalman filtering equations, updating the filter gain K, the error covariance P, and the predicted position; 3, use the predicted target position as the prior estimate for the next moment, update the Kalman filter state, and iterate in a loop.
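A per-axis constant-velocity Kalman filter matching the quantities named in this claim can be sketched as follows: state x = [position, velocity], A = [[1, 1], [0, 1]], H = [1, 0], with the 2×2 algebra written out by hand. The noise values q and r are illustrative defaults, not the patent's.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one image axis (sketch of claim 9)."""
    def __init__(self, pos=0.0, vel=0.0, p=1.0, q=1e-4, r=1.0):
        self.x = [pos, vel]              # state: position and velocity
        self.P = [[p, 0.0], [0.0, p]]    # error covariance P
        self.q, self.r = q, r            # process noise Q, measurement noise R

    def step(self, z):
        # predict: x' = A x, P' = A P A^T + Q  with A = [[1, 1], [0, 1]]
        x0 = self.x[0] + self.x[1]
        x1 = self.x[1]
        P = self.P
        p00 = P[0][0] + P[0][1] + P[1][0] + P[1][1] + self.q
        p01 = P[0][1] + P[1][1]
        p10 = P[1][0] + P[1][1]
        p11 = P[1][1] + self.q
        # update: K = P' H^T / (H P' H^T + R)  with H = [1, 0]
        k0 = p00 / (p00 + self.r)
        k1 = p10 / (p00 + self.r)
        y = z - x0                       # innovation from the new observation
        self.x = [x0 + k0 * y, x1 + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0] + self.x[1]     # predicted position at the next frame

kf = Kalman1D(pos=0.0, vel=0.0)
for z in range(10):                      # target moving one pixel per frame
    predicted = kf.step(float(z))
# predicted now approximates the next-frame position (close to 10)
```

Running one such filter per image axis, the predicted centre of form is fed back as the prior estimate for the next frame, exactly as the claim's loop iteration describes.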
10. The implementation method of a video tracking system based on machine vision technology according to claim 4, characterized in that step 5 calculates in real time the offset of the target's centre of form from the centre of the monitoring display and drives the pan-tilt stepping motor so that the camera rotates with the tracked target, achieving real-time tracking. The method is: calculate in real time the motion vector (Δx, Δy) of the tracked target's centre of form relative to the centre of the monitoring display; when Δx or Δy exceeds the set range (X_T, Y_T), generate PWM motor-control pulse signals for the X and Y directions respectively and send them to the motor driver, which, through the drive circuit, feeds each phase winding of the stepping motor; the pan-tilt head then rotates horizontally and vertically so that the tracked target remains at the centre of the monitored picture at all times.
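Per axis, this servo logic reduces to a sign decision once the offset leaves the dead zone. A sketch (PWM pulse generation itself is a hardware concern and is abstracted here to a signed step per axis; all names are illustrative):

```python
def gimbal_steps(centroid, screen_center, dead_zone):
    """Signed pan/tilt step per axis; zero while the offset stays inside the dead zone."""
    (cx, cy), (sx, sy) = centroid, screen_center
    tx, ty = dead_zone                        # the set range (X_T, Y_T)
    dx, dy = cx - sx, cy - sy                 # motion vector (dx, dy)
    step_x = 0 if abs(dx) <= tx else (1 if dx > 0 else -1)
    step_y = 0 if abs(dy) <= ty else (1 if dy > 0 else -1)
    return step_x, step_y
```

The dead zone keeps the head still for small jitter around the screen centre; only a genuine drift of the centroid produces pulses, which avoids constant hunting of the stepper motors.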
CN201410094016.3A 2014-03-14 2014-03-14 Video tracking system and realizing method based on machine vision technology Pending CN103826105A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410094016.3A CN103826105A (en) 2014-03-14 2014-03-14 Video tracking system and realizing method based on machine vision technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410094016.3A CN103826105A (en) 2014-03-14 2014-03-14 Video tracking system and realizing method based on machine vision technology

Publications (1)

Publication Number Publication Date
CN103826105A true CN103826105A (en) 2014-05-28

Family

ID=50760901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410094016.3A Pending CN103826105A (en) 2014-03-14 2014-03-14 Video tracking system and realizing method based on machine vision technology

Country Status (1)

Country Link
CN (1) CN103826105A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204069175U (en) * 2014-03-14 2014-12-31 贵州大学 Video tracking system based on machine vision technology

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU ZIYAN, QI JIA: "Real-time image edge detection based on a hierarchical clustering algorithm and its FPGA implementation", Infrared Technology *
YAO YUNCHENG: "FPGA-based embedded video surveillance and tracking system", China Masters' Theses Full-text Database *
ZHANG LIHONG, LING CHAODONG: "FPGA-based Sobel edge detection application", Embedded Technology *
HUANG LU, YANG XIUZENG: "Design of an FPGA-based two-phase stepping motor controller", Physics, Electronics, Machinery and Engineering Technology *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104048606A (en) * 2014-06-25 2014-09-17 燕山大学 Machine vision non-contact automatic precise measuring device for size of railway turnout steel rail part
CN105317310A (en) * 2014-07-14 2016-02-10 广州天赋人财光电科技有限公司 Platform screen door and train door anti-pinch ultralimit monitoring system and method based on machine vision
CN105872485A (en) * 2016-06-06 2016-08-17 贵州大学 Image compression and transmission device and method based on FPGA for intelligent transportation system
CN106101534A (en) * 2016-06-17 2016-11-09 上海理工大学 Curricula based on color characteristic records the control method from motion tracking photographic head and photographic head
CN106156764B (en) * 2016-08-25 2018-08-10 四川泰立科技股份有限公司 Realize the optical tracking system and its control method of high speed tracking
CN106156764A (en) * 2016-08-25 2016-11-23 四川泰立科技股份有限公司 Realize optical tracking system and the control method thereof followed the tracks of at a high speed
CN106982340A (en) * 2016-11-26 2017-07-25 顺德职业技术学院 A kind of method and system of the lower target video storage of machine vision tracking
CN106982340B (en) * 2016-11-26 2023-02-28 顺德职业技术学院 Method and system for storing target video under machine vision tracking
CN107105159A (en) * 2017-04-13 2017-08-29 山东万腾电子科技有限公司 The real-time detecting and tracking system and method for embedded moving target based on SoC
CN107105159B (en) * 2017-04-13 2020-01-07 山东万腾电子科技有限公司 Embedded moving target real-time detection tracking system and method based on SoC
CN107255641A (en) * 2017-06-06 2017-10-17 西安理工大学 A kind of method that Machine Vision Detection is carried out for GRIN Lens surface defect
CN107255641B (en) * 2017-06-06 2019-11-22 西安理工大学 A method of Machine Vision Detection is carried out for self-focusing lens surface defect
CN107341760A (en) * 2017-06-27 2017-11-10 北京计算机技术及应用研究所 A kind of low-altitude target tracking system based on FPGA
CN107516296A (en) * 2017-07-10 2017-12-26 昆明理工大学 A kind of moving object detection tracking system and method based on FPGA
CN107370909A (en) * 2017-07-11 2017-11-21 中国重型机械研究院股份公司 A kind of plate straightening roll autocontrol method based on machine vision technique
CN107992099A (en) * 2017-12-13 2018-05-04 福州大学 A kind of target sport video tracking and system based on improvement frame difference method
CN108122026A (en) * 2017-12-19 2018-06-05 中国人民解放军空军工程大学 The accurate tracking of attack vehicle holder
CN108372130A (en) * 2018-03-20 2018-08-07 华南理工大学 A kind of target locating, sorting system and its implementation based on FPGA image procossings
CN109191524A (en) * 2018-08-29 2019-01-11 成都森和电子科技有限公司 Infrared target real-time detecting system and detection method based on FPGA
CN109451284A (en) * 2019-01-08 2019-03-08 哈尔滨理工大学 A kind of vision Tracking monitoring system based on FPGA
CN109902588A (en) * 2019-01-29 2019-06-18 北京奇艺世纪科技有限公司 A kind of gesture identification method, device and computer readable storage medium
CN110253588A (en) * 2019-08-05 2019-09-20 江苏科技大学 A kind of New Type of Robot Arm dynamic grasping system
WO2021208259A1 (en) * 2020-04-15 2021-10-21 上海摩象网络科技有限公司 Gimbal driving method and device, and handheld camera
CN111935450A (en) * 2020-07-15 2020-11-13 长江大学 Intelligent suspect tracking method and system and computer readable storage medium
CN113194249A (en) * 2021-04-22 2021-07-30 中山大学 Moving object real-time tracking system and method based on camera
CN113378782A (en) * 2021-07-01 2021-09-10 应急管理部天津消防研究所 Vehicle-mounted fire identification and automatic tracking method
CN113808349A (en) * 2021-08-30 2021-12-17 国网山东省电力公司金乡县供电公司 Anti-theft alarm system and method for power supply cable of highway street lamp
CN115190238A (en) * 2022-06-20 2022-10-14 中国人民解放军战略支援部队航天工程大学 Ground demonstration verification system for detection and tracking of satellite-borne moving target
CN117392518A (en) * 2023-12-13 2024-01-12 南京耀宇视芯科技有限公司 Low-power-consumption visual positioning and mapping chip and method thereof
CN117392518B (en) * 2023-12-13 2024-04-09 南京耀宇视芯科技有限公司 Low-power-consumption visual positioning and mapping chip and method thereof

Similar Documents

Publication Publication Date Title
CN103826105A (en) Video tracking system and realizing method based on machine vision technology
US9041834B2 (en) Systems and methods for reducing noise in video streams
CN103049879B (en) Infrared image pre-processing method based on FPGA
CN103198477B (en) Apple fruitlet bagging robot visual positioning method
CN108171734B (en) ORB feature extraction and matching method and device
CN102938142A (en) Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN102982537B (en) Method and system for detecting scene changes
CN104978728A (en) Image matching system of optical flow method
CN105160703A (en) Optical flow computation method using time domain visual sensor
CN107133610B (en) Visual detection and counting method for traffic flow under complex road conditions
CN103942843A (en) Fairway and ship three-dimensional model dynamic presenting method based on video
CN110706291A (en) Visual measurement method suitable for three-dimensional trajectory of moving object in pool experiment
CN204069175U (en) Video tracking system based on machine vision technology
CN107238727A (en) Photoelectric tachometric transducer and detection method based on dynamic visual sensor chip
CN104700385A (en) Binocular vision positioning device based on FPGA
US8718402B2 (en) Depth generation method and apparatus using the same
CN110866906B (en) Three-dimensional culture human myocardial cell pulsation detection method based on image edge extraction
Xu et al. Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries
CN103707300A (en) Manipulator device
CN116429082A (en) Visual SLAM method based on ST-ORB feature extraction
CN112884803A (en) Real-time intelligent monitoring target detection method and device based on DSP
CN112634305A (en) Infrared vision odometer implementation method based on edge feature matching
CN115205793B (en) Electric power machine room smoke detection method and device based on deep learning secondary confirmation
Lv et al. Design and research on vision system of apple harvesting robot
CN204904010U (en) Multiaxis motor control system based on FPGA vision measuring technique

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140528

RJ01 Rejection of invention patent application after publication