CN109035306A - Moving-target automatic detection method and device - Google Patents
- Publication number: CN109035306A (application CN201811063853.4A)
- Authority: CN (China)
- Prior art keywords: image, registration, target, moving, benchmark
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications (all under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06T7/254—Analysis of motion involving subtraction of images (under G06T7/20—Analysis of motion; G06T7/00—Image analysis)
- G06T7/12—Edge-based segmentation (under G06T7/10—Segmentation; edge detection; G06T7/00—Image analysis)
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration (under G06T7/00—Image analysis)
- G06T2207/10032—Satellite or aerial image; remote sensing (under G06T2207/10—Image acquisition modality; G06T2207/00—Indexing scheme for image analysis or image enhancement)
Abstract
The present invention relates to the technical field of moving-target detection, and provides a moving-target automatic detection method and device. The method comprises: acquiring optical satellite images captured by an image acquisition device at at least two different moments; taking any one frame of the at least two frames of optical satellite images as a benchmark image, and preprocessing the remaining images to be registered to obtain corresponding registered images; performing pairwise difference operations in sequence on the benchmark image and all registered images to obtain at least two corresponding binary maps; and extracting object information from each binary map to determine the target objects in it, then removing false alarms from the target objects to obtain the moving targets in each binary map. By automating the preprocessing of the acquired images, the difference operations, and the false-alarm removal, the present invention effectively improves the real-time performance of automatic moving-target detection.
Description
Technical field
The present invention relates to the technical field of moving-target detection, and in particular to a moving-target automatic detection method and device.
Background technique
Moving-target detection separates a moving target from the background image. Because the ground features in optical satellite images are complex and varied, and the illumination conditions and shooting angles differ from one moment to the next, moving-target detection is very difficult. The prior art generally uses methods such as multispectral image algorithms and optical flow, but because the existing algorithms have a low degree of automation, their efficiency is low and their real-time performance is poor.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a moving-target automatic detection method and device that automate the preprocessing of the acquired images, the difference operations, and the false-alarm removal used to obtain moving targets, thereby effectively improving the real-time performance of automatic moving-target detection.
To achieve the above purpose, the technical solutions adopted by the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present invention provides a moving-target automatic detection method applied to an image processing device that is communicatively connected to an image acquisition device. The method comprises: acquiring optical satellite images captured by the image acquisition device at at least two different moments; taking any one frame of the at least two frames of optical satellite images as a benchmark image, and preprocessing the remaining images to be registered to obtain corresponding registered images; performing pairwise difference operations in sequence on the benchmark image and all registered images to obtain at least two corresponding binary maps; and extracting object information from each binary map to determine the target objects in it, then removing false alarms from the target objects to obtain the moving targets in each binary map.
In a second aspect, an embodiment of the present invention further provides a moving-target automatic detection device comprising an acquisition module, a preprocessing module, a difference module, and a false-alarm removal module. The acquisition module acquires the optical satellite images captured by the image acquisition device at at least two different moments; the preprocessing module takes any one frame of the at least two frames of optical satellite images as the benchmark image and preprocesses the remaining images to be registered to obtain corresponding registered images; the difference module performs pairwise difference operations in sequence on the benchmark image and all registered images to obtain at least two corresponding binary maps; and the false-alarm removal module extracts object information from each binary map to determine the target objects in it, then removes false alarms from the target objects to obtain the moving targets in each binary map.
Compared with the prior art, in the moving-target automatic detection method and device provided by the embodiments of the present invention, the image processing device first acquires the optical satellite images captured by the image acquisition device at at least two different moments; then takes any one frame of the at least two frames of optical satellite images as the benchmark image and preprocesses the remaining images to be registered to obtain corresponding registered images; next performs pairwise difference operations in sequence on the benchmark image and all registered images to obtain at least two corresponding binary maps; and finally extracts object information from each binary map to determine the target objects in it and removes false alarms to obtain the moving targets in each binary map. Because the preprocessing of the acquired images, the difference operations, and the false-alarm removal are all performed automatically, the real-time performance of automatic moving-target detection is effectively improved.
To make the above objects, features, and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope; those of ordinary skill in the art can derive other relevant drawings from them without creative effort.
Fig. 1 shows a block diagram of the image processing device provided by an embodiment of the present invention.
Fig. 2 shows a flow chart of the moving-target automatic detection method provided by an embodiment of the present invention.
Fig. 3 is a flow chart of the sub-steps of step S103 shown in Fig. 2.
Fig. 4 is a flow chart of the sub-steps of step S104 shown in Fig. 2.
Fig. 5 is a flow chart of the sub-steps of step S105 shown in Fig. 2.
Fig. 6 shows a block diagram of the moving-target automatic detection device provided by an embodiment of the present invention.
Reference numerals: 100 - image processing device; 101 - memory; 102 - communication interface; 103 - processor; 104 - bus; 200 - moving-target automatic detection device; 201 - acquisition module; 202 - grayscale transformation module; 203 - preprocessing module; 204 - difference module; 205 - false-alarm removal module.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of it. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Referring to Fig. 1, which shows a block diagram of the image processing device 100 provided by an embodiment of the present invention. The image processing device 100 may be, but is not limited to, a host, a virtual machine, a physical server, a virtual machine on a physical server, or any entity or virtual server capable of providing the same functions as such a server or virtual machine. The operating system of the image processing device 100 may be, but is not limited to, a Windows system, a Linux system, or the like. The image processing device 100 comprises a memory 101, a communication interface 102, a processor 103, and a bus 104; the memory 101, communication interface 102, and processor 103 are connected through the bus 104, and the processor 103 executes executable modules, such as computer programs, stored in the memory 101.
The memory 101 may comprise high-speed random access memory (RAM) and may also comprise non-volatile memory, for example at least one disk memory. The communication connection between the image processing device 100 and at least one other image processing device 100 or an external storage device is realized through at least one communication interface 102, which may be wired or wireless.
The bus 104 may be an ISA bus, a PCI bus, an EISA bus, or the like. Only one double-headed arrow is drawn in Fig. 1, but this does not mean there is only one bus or one type of bus.
The memory 101 is used to store a program, for example the moving-target automatic detection device 200 shown in Fig. 6. The moving-target automatic detection device 200 comprises at least one software function module that can be stored in the memory 101 in the form of software or firmware or solidified in the operating system (OS) of the image processing device 100. After receiving an execution instruction, the processor 103 executes the program to realize the moving-target automatic detection method disclosed by the above embodiments of the present invention.
First embodiment
Referring to Fig. 2, which shows a flow chart of the moving-target automatic detection method provided by an embodiment of the present invention. The method comprises the following steps:
Step S101: acquire the optical satellite images captured by the image acquisition device at at least two different moments.
In the embodiments of the present invention, at least two frames of optical satellite images captured at different moments are needed: on the one hand, at least two frames are required to calculate the threshold used when generating the binary maps; on the other hand, when removing false alarms, optical satellite images with adjacent acquisition times help to exclude false alarms, so that the most accurate moving targets are obtained.
Step S102: when an optical satellite image is a multispectral satellite image, perform a grayscale transformation on it, and then perform super-resolution reconstruction on the transformed image to obtain an image to be registered.
In the embodiments of the present invention, a multispectral satellite image contains many bands, for example a three-band color image together with a panchromatic image. To simplify the subsequent, unified preprocessing of the optical satellite images, a grayscale transformation is first applied to each multispectral satellite image: a grayscale transformation function converts the multispectral satellite image into a corresponding grayscale image. The grayscale image then undergoes super-resolution reconstruction, which recovers a high-resolution image from a low-resolution one. Because the other spectral bands of a multispectral satellite image have lower resolution than the panchromatic image, the grayscale image obtained from the transformation must be reconstructed at super-resolution to obtain a high-resolution image with the same resolution as the panchromatic image. Finally, the high-resolution image after super-resolution reconstruction is used as the image to be registered.
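The grayscale transformation and resolution matching described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the grayscale transform is taken as a plain per-pixel band mean, and the super-resolution step is stood in for by simple nearest-neighbour upsampling. The patent does not prescribe particular algorithms; a real pipeline would use sensor-specific band weights and a proper super-resolution method.

```python
import numpy as np

def to_gray(multispectral):
    """Grayscale transformation: collapse the (H, W, bands) array to one
    gray band. A plain mean is an illustrative stand-in for the patent's
    unspecified grayscale transformation function."""
    return multispectral.astype(np.float64).mean(axis=2)

def upsample(gray, factor):
    """Placeholder 'super-resolution reconstruction': nearest-neighbour
    upsampling by an integer factor so the low-resolution gray image
    matches the panchromatic image size."""
    return np.kron(gray, np.ones((factor, factor)))

# toy 3-band image, 2x2 pixels
img = np.arange(12, dtype=np.float64).reshape(2, 2, 3)
gray = to_gray(img)      # shape (2, 2); the image's single gray band
hi = upsample(gray, 2)   # shape (4, 4); the image to be registered
```
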
Step S103: take any one frame of the at least two frames of optical satellite images as the benchmark image, and preprocess the remaining images to be registered to obtain corresponding registered images.
In the embodiments of the present invention, the benchmark image can be any one frame of the at least two frames of optical satellite images, and the remaining optical satellite images are called images to be registered. The preprocessing performed on an image to be registered comprises gray correction and image registration, and the benchmark image serves as the standard for both. After the benchmark image is determined, gray correction is first performed on each image to be registered against the benchmark image, so that the gray-level response of the corrected image is consistent with that of the benchmark image; here "consistent" means the gray values are close, i.e. the difference between the gray values is within a preset range. Then image registration is performed on the corrected image, yielding a registered image whose gray-level response and coordinate system are both consistent with the benchmark image.
Referring to Fig. 3, step S103 can further comprise the following sub-steps:
Sub-step S1031: based on the gray values of the benchmark image, perform gray correction on the image to be registered to obtain a corrected image whose gray-level response is consistent with the benchmark image.
In the embodiments of the present invention, the gray values of the same ground feature in images acquired under the illumination conditions of different moments are inconsistent. To reduce the false-alarm rate and make the detected moving targets more accurate, gray correction is first performed on the collected optical satellite images to obtain corrected images with a consistent gray-level response.
In the embodiments of the present invention, as one implementation, the gray-correction process may be as follows:
First, calculate the average gray values avg_base and avg_toadjust of the benchmark image and the image to be corrected.
In the embodiments of the present invention, as one implementation, the average gray value of the benchmark image can be obtained by averaging the gray values of all pixels in the benchmark image, and the average gray value of the image to be corrected can likewise be obtained by averaging the gray values of all its pixels. The formula for the average gray value of the benchmark image is:
avg_base = sum(cell_g_base) / total_base,
where avg_base is the average gray value of the benchmark image, cell_g_base is the gray value of one pixel in the benchmark image, sum(cell_g_base) is the sum of the gray values of all pixels in the benchmark image, and total_base is the total number of pixels in the benchmark image.
The formula for the average gray value of the image to be corrected is:
avg_toadjust = sum(cell_g_toadjust) / total_toadjust,
where avg_toadjust is the average gray value of the image to be corrected, cell_g_toadjust is the gray value of one pixel in the image to be corrected, sum(cell_g_toadjust) is the sum of the gray values of all pixels in the image to be corrected, and total_toadjust is the total number of pixels in the image to be corrected.
Then, calculate the offset coefficient. In the embodiments of the present invention, the formula for the offset coefficient may be:
offset = avg_base - avg_toadjust,
where offset is the offset coefficient.
Finally, correct the image to be corrected according to the offset coefficient: the offset coefficient is added to the gray value of each pixel in the image to be corrected to obtain the gray value of the corresponding pixel in the corrected image. In the embodiments of the present invention, the calculation formula for the gray value of a pixel in the corrected image may be:
cell_g_adjust = cell_g_toadjust + offset,
where cell_g_toadjust is the gray value of a pixel in the image to be corrected and cell_g_adjust is the gray value of the corresponding pixel in the corrected image.
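Taken together, the three formulas above amount to a single mean-offset correction; a minimal NumPy sketch (the array names and example values are illustrative, not from the patent):

```python
import numpy as np

def gray_correct(base, to_adjust):
    # offset = avg_base - avg_toadjust, then added to every pixel of the
    # image to be corrected (cell_g_adjust = cell_g_toadjust + offset)
    offset = base.mean() - to_adjust.mean()
    return to_adjust + offset

base = np.array([[100., 110.], [120., 130.]])     # benchmark image
to_adjust = np.array([[80., 90.], [100., 110.]])  # image to be corrected
corrected = gray_correct(base, to_adjust)
# after correction, the two average gray values coincide
```
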
Sub-step S1032: perform image registration on the corrected image against the benchmark image to obtain the corresponding registered image.
In the embodiments of the present invention, a registered image is obtained by converting the corrected image, according to coordinate transform parameters, into an image that shares the coordinate system of the benchmark image; that is, the registered image and the benchmark image have the same coordinate system. As one implementation, the image registration process may be as follows:
First, extract the first feature points of the benchmark image and the second feature points of the corrected image.
In the embodiments of the present invention, the first feature points can be pixels that characterize specific singularities of the benchmark image, and the second feature points can be pixels that characterize specific singularities of the corrected image. The feature points used for image registration can be, but are not limited to, Harris, SURF, or SIFT features.
Second, perform feature matching on the first feature points and the second feature points to obtain the coordinate transform parameters between the benchmark image and the corrected image.
In the embodiments of the present invention, feature matching uses a feature-matching algorithm to measure the similarity between the first and second feature points, weed out erroneous matches, and then determine the coordinate transform parameters between the benchmark image and the corrected image. The coordinate transform parameters characterize the coordinate transform relation between the benchmark image and the corrected image, i.e. they map each first feature point to its corresponding second feature point. For example, suppose image A is the benchmark image, and image B is obtained by moving image A 2 pixels to the right, then 3 pixels up, then rotating it 60 degrees clockwise. If image B is taken as the corrected image to be registered, then registering image B is exactly the process of determining the three coordinate transform parameters 2, 3, and 60; with these three parameters, any pixel of image B can be mapped to its corresponding pixel in image A. In the embodiments of the present invention, the feature-matching algorithm can be, but is not limited to, the FLANN algorithm, the FREAK algorithm, etc.
Finally, with the benchmark image as the standard and according to the coordinate transform parameters, generate the registered image of the corrected image in the coordinate system of the benchmark image.
In the embodiments of the present invention, the registered image is the image, obtained according to the coordinate transform parameters, whose coordinate system is the same as that of the benchmark image. Although the registered image obtained by image registration has the same size as the corrected image, transformations such as translation and rotation have occurred; when generating the registered image, regions with no match to the first feature points can be filled with the value 0.
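To make the "same coordinate system, zero-filled elsewhere" behaviour concrete, here is a minimal NumPy sketch that applies already-estimated coordinate transform parameters (translation only, for brevity; the patent's example also includes rotation). Estimating the parameters themselves would use the feature detectors and matchers named above, e.g. SIFT keypoints matched with FLANN, as provided by libraries such as OpenCV; that part is not shown here.

```python
import numpy as np

def warp_to_base(img, tx, ty):
    """Resample `img` into the benchmark image's coordinate system given
    transform parameters (tx, ty) = shift right by tx, down by ty.
    Regions with no matched source pixel are filled with 0, as the
    text describes."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    src_y, src_x = ys - ty, xs - tx    # inverse mapping into the source
    valid = (0 <= src_y) & (src_y < h) & (0 <= src_x) & (src_x < w)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out

img = np.arange(9).reshape(3, 3)
shifted = warp_to_base(img, tx=1, ty=0)  # image moved right by 1 pixel
```
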
Step S104: perform pairwise difference operations in sequence on the benchmark image and all registered images to obtain at least two corresponding binary maps.
In the embodiments of the present invention, a binary map is an image in which each pixel has only two possible values or gray levels; for example, the gray value of each pixel in a binary map is either 0 or 1. A region whose pixel values are 1 in a binary map is a possible moving target: in the binary map corresponding to the benchmark image it is a possible moving target of the benchmark image, and in the binary map corresponding to a registered image it is a possible moving target of that registered image. To obtain one binary map: first, determine, from the benchmark image and all registered images, two images with adjacent acquisition times to participate in the difference operation; second, calculate the corresponding binary threshold from these two images; then perform the difference operation on the two images to obtain the difference result; finally, obtain the value of each pixel of the binary map from the difference result and the binary threshold.
In the embodiments of the present invention, there are as many binary maps as there are images among the benchmark image and all registered images. Each time, in order of acquisition time, two images with adjacent acquisition times are taken from the benchmark image and all registered images; the corresponding binary threshold is calculated from these two images; the difference operation is then performed on them; and the corresponding binary map is obtained from the binary threshold and the result of the difference operation. For example, suppose the benchmark image and all registered images comprise 3 images acquired at different moments, in order of acquisition time: image 1, image 2, image 3, where image 2 is the benchmark image and images 1 and 3 are registered images. First, image 1 and image 2 are selected from the 3 images, and the difference operation between them yields the binary map corresponding to image 1; then image 2 and image 3 are selected, and their difference operation yields the binary map corresponding to image 2; finally, the difference operation between image 3 and image 2 yields the binary map corresponding to image 3.
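The pairing order in this example generalises to any number of images; a short sketch (the helper name is ours, not the patent's):

```python
def difference_pairs(n):
    # (minuend, subtrahend) index pairs: consecutive images (i, i+1),
    # plus (n-1, n-2) for the last image, so each of the n images
    # gets its own binary map
    pairs = [(i, i + 1) for i in range(n - 1)]
    pairs.append((n - 1, n - 2))
    return pairs

# 3 images (indices 0, 1, 2, i.e. image 1, image 2, image 3):
pairs = difference_pairs(3)
```
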
Referring to Fig. 4, step S104 can further comprise the following sub-steps:
Sub-step S1041: according to preset rules, obtain a first image and a second image with adjacent acquisition times from the benchmark image and all registered images.
In the embodiments of the present invention, the first image and the second image are images with adjacent acquisition times; when the difference operation is performed, the first image is the minuend and the second image is the subtrahend. Because the method used to obtain the binary maps and preliminarily detect moving targets can be, but is not limited to, the frame-difference method or the background-subtraction method, the method of obtaining the temporally adjacent first and second images can be, but is not limited to, one of the following two:
(1) Corresponding to the frame-difference method, the temporally adjacent first and second images may be obtained as follows:
First, in order of acquisition time, successively obtain two images with adjacent acquisition times from the benchmark image and all registered images, taking the earlier-acquired of the two as the first image and the later-acquired as the second image.
In the embodiments of the present invention, two images with adjacent acquisition times are successively taken from the benchmark image and all registered images in order of acquisition time. For example, suppose the benchmark image and all registered images comprise 3 images acquired at different moments, in order of acquisition time: image 1, image 2, image 3, where image 2 is the benchmark image and images 1 and 3 are registered images. First, image 1 and image 2 are selected from the 3 images; since image 1 was acquired earlier than image 2, image 1 is the first image and image 2 is the second image. Next, image 2 and image 3 are selected; since image 2 was acquired earlier than image 3, image 2 is the first image and image 3 is the second image.
Second, take the latest-acquired image among the benchmark image and all registered images as the first image, and the image whose acquisition time is adjacent to it as the second image.
In the embodiments of the present invention, when the latest-acquired image among the benchmark image and all registered images is reached, there is no image acquired later than it, yet the embodiments of the present invention require each of the benchmark image and all registered images to have its own binary map. Therefore the latest-acquired image is taken as the first image and the image with the adjacent acquisition time as the second image, and the difference operation then yields the binary map corresponding to the latest-acquired image. For example, suppose the benchmark image and all registered images comprise 3 images acquired at different moments, in order of acquisition time: image 1, image 2, image 3, where image 2 is the benchmark image and images 1 and 3 are registered images. The difference operation on images 1 and 2 yields the binary map of image 1, and the difference operation on images 2 and 3 yields the binary map of image 2; at this point only images 1 and 2 have binary maps, and the binary map of image 3 is obtained by performing the difference operation on images 3 and 2.
(2) Corresponding to the background-subtraction method, the first and second images may be obtained as follows:
First, perform background reconstruction on the benchmark image and all registered images to obtain a background image.
In the embodiments of the present invention, background reconstruction uses a background-modeling algorithm to rebuild the background image contained in the benchmark image and all registered images; the background-modeling algorithm can be, but is not limited to, a statistical background-modeling algorithm, a Gaussian background-modeling algorithm, etc.
Second, successively take the benchmark image and each registered image as the first image, with the background image as the second image.
In the embodiments of the present invention, one image is taken each time from the benchmark image and all registered images as the first image, the background image is taken as the second image, and the difference operation on the first and second images yields the binary map corresponding to the first image.
Sub-step S1042: calculate the binary threshold according to the gray values of the second image and the first image.
In the embodiments of the present invention, the gray values comprise a first gray value and a second gray value: the first gray value is the average gray value of the pixels of the second image that have changed relative to the first image, and the second gray value is the average gray value of the pixels of the second image that are unchanged relative to the first image. The binary threshold is the absolute value of the difference between the first gray value and the second gray value.
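A minimal NumPy sketch of this threshold; note that the text does not say how "changed" is decided before thresholding, so here we assume any nonzero gray difference counts as changed:

```python
import numpy as np

def binary_threshold(first, second):
    # |mean gray of changed pixels - mean gray of unchanged pixels|,
    # both means taken over the second image
    changed = second != first
    if changed.all() or not changed.any():
        return 0.0  # degenerate case, not covered by the text
    return abs(second[changed].mean() - second[~changed].mean())

first = np.array([[10., 10.], [10., 10.]])
second = np.array([[10., 10.], [50., 60.]])
t = binary_threshold(first, second)  # |55 - 10| = 45.0
```
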
Sub-step S1043: perform the difference operation between the pixels of the first image and the pixels of the second image to obtain the difference result.
In the embodiments of the present invention, the difference operation subtracts the corresponding pixel of the second image from each pixel of the first image; the difference result is the difference between the two.
Sub-step S1044: when the difference result is greater than or equal to the binary threshold, set the corresponding pixel in the binary map corresponding to the first image to a first preset value.
In the embodiments of the present invention, the first preset value can be 1, representing one of the two states of a pixel in the binary map.
Sub-step S1045: when the difference result is less than the binary threshold, set the corresponding pixel in the binary map corresponding to the first image to a second preset value.
In the embodiments of the present invention, the second preset value can be 0, representing the other of the two states of a pixel in the binary map.
In the embodiment of the present invention, as one implementation, when the frame difference method is used, pairs of images adjacent in acquisition time are obtained in turn from the benchmark image and the registration images in order of acquisition time. The first image may be the earlier-acquired image of the adjacent pair and the second image the later-acquired one; alternatively, the first image may be the latest-acquired image among the benchmark image and all registration images, with the second image being the image adjacent to it in acquisition time. The difference operation between the pixels of the first image and the pixels of the second image may use the following formula:
D_n(x, y) = 1, if |F_{n+1}(x, y) - F_n(x, y)| >= T_a; D_n(x, y) = 0, otherwise
where D_n(x, y) represents the pixel value in the binary map corresponding to F_n(x, y) in the first image, F_n(x, y) represents the pixel value at coordinates (x, y) in the first image, F_{n+1}(x, y) represents the pixel value in the second image corresponding to F_n(x, y), and T_a represents the binarization threshold of the frame difference method.
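The frame-difference binarization is a single vectorized comparison in NumPy; this minimal sketch mirrors the formula above (the threshold T_a is assumed to be supplied by the threshold-calculation step):

```python
import numpy as np

def frame_difference(f_n, f_n1, t_a):
    """D_n(x, y) = 1 where |F_{n+1}(x, y) - F_n(x, y)| >= T_a, else 0."""
    diff = np.abs(f_n1.astype(np.float64) - f_n.astype(np.float64))
    return (diff >= t_a).astype(np.uint8)
```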
In the embodiment of the present invention, as one implementation, when background subtraction is used, the first image is taken in turn from the benchmark image and the registration images, and the second image is the background image. The difference operation between the pixels of the first image and the pixels of the second image may use the following formula:
D_n(x, y) = 1, if |F_n(x, y) - B(x, y)| >= T_b; D_n(x, y) = 0, otherwise
where D_n(x, y) represents the pixel value in the binary map corresponding to F_n(x, y) in the first image, F_n(x, y) represents the pixel value at coordinates (x, y) in the first image, B(x, y) represents the pixel value in the background image corresponding to F_n(x, y), and T_b represents the binarization threshold of background subtraction.
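A sketch of the background-subtraction branch follows. The patent does not fix the background-reconstruction method, so the per-pixel temporal median used here is an assumption (a common choice for registered frame stacks); the binarization itself mirrors the formula above:

```python
import numpy as np

def reconstruct_background(frames):
    """Per-pixel temporal median over the registered frames; an assumed
    stand-in for the patent's unspecified background reconstruction."""
    return np.median(np.stack(frames).astype(np.float64), axis=0)

def background_subtraction(f_n, background, t_b):
    """D_n(x, y) = 1 where |F_n(x, y) - B(x, y)| >= T_b, else 0."""
    diff = np.abs(f_n.astype(np.float64) - background)
    return (diff >= t_b).astype(np.uint8)
```

Because a moving-target occupies any given pixel in only a minority of frames, the median suppresses it and retains the static scene.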
It should be noted that the method of performing pairwise difference operations on the benchmark image and all registration images to obtain at least two corresponding binary maps may also be the following: pairs of images adjacent in acquisition time are obtained in turn in order of acquisition time; for each pair, a difference operation is performed from the earlier image against the later image, and another from the later image against the earlier image, yielding two binary maps per pair. Then, excluding the first and the last binary map, the remaining binary maps are intersected pairwise to obtain the final binary map of each corresponding image. For example, with images 1, 2, 3 and 4 in order of acquisition time: differencing image 1 against image 2 gives binary map 1, image 2 against image 1 gives binary map 2, image 2 against image 3 gives binary map 3, image 3 against image 2 gives binary map 4, image 3 against image 4 gives binary map 5, and image 4 against image 3 gives binary map 6. The first binary map (binary map 1) is the binary map corresponding to image 1, and the last (binary map 6) is the binary map corresponding to image 4; the intersection of binary maps 2 and 3 gives the binary map corresponding to image 2, and the intersection of binary maps 4 and 5 gives the binary map corresponding to image 3.
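The pairwise-intersection scheme above can be sketched as follows. For a symmetric absolute difference, the forward and backward maps of a pair are identical, so the interesting effect is the AND between a middle frame's maps against its previous and next neighbors, which suppresses changes present on only one side (ghost positions of the target in the neighboring frames):

```python
import numpy as np

def binarize(a, b, t):
    """Absolute-difference binarization with threshold t."""
    diff = np.abs(a.astype(np.float64) - b.astype(np.float64))
    return (diff >= t).astype(np.uint8)

def pairwise_intersection_maps(images, t):
    """First/last frames keep their single map; each middle frame keeps
    the intersection of its maps against the previous and next frame."""
    n = len(images)
    maps = []
    for i in range(n):
        if i == 0:
            maps.append(binarize(images[0], images[1], t))
        elif i == n - 1:
            maps.append(binarize(images[-1], images[-2], t))
        else:
            back = binarize(images[i], images[i - 1], t)
            fwd = binarize(images[i], images[i + 1], t)
            maps.append(back & fwd)
    return maps
```

With a target moving one pixel per frame, the middle frames' maps retain only the target's current position, not its positions in the neighboring frames.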
Step S105: object information extraction is performed on each binary map to determine the target objects in each binary map, and false-alarm removal is performed on the target objects to obtain the moving-targets in each binary map.
In the embodiment of the present invention, information extraction may identify objects with a connected-component algorithm and compute, for each object, its centroid coordinates, area, bounding rectangle and similar information. A target object in a binary map is the region of the binary map corresponding to the extracted information. A false alarm is a non-moving-target mistaken for a moving-target; removing non-moving-targets from the target objects improves the accuracy of moving-target detection. The false-alarm removal process may be: first, static objects are screened out of the target objects according to preset screening conditions; secondly, among the remaining target objects, those that do not also appear in the binary map adjacent in acquisition time are removed, yielding the moving-targets.
Referring to Fig. 6, step S105 may further include the following sub-steps:
Sub-step S1051: according to preset screening-out conditions, static objects in the target objects are screened out to obtain primary moving-target candidates.
In the embodiment of the present invention, the preset screening-out conditions may use an aspect-ratio constraint to reduce linear false alarms produced by environmental changes, such as houses and road edges, and an area constraint to remove blob false alarms outside the normal moving-target size range; objects that satisfy the preset screening-out conditions are usually static objects.
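The information-extraction and screening steps can be sketched as below. The connected-component pass computes the centroid, area and bounding rectangle named in the text; the numeric screening bounds (min/max area, maximum aspect ratio) are illustrative assumptions, since the patent does not specify values:

```python
import numpy as np

def extract_objects(binary):
    """4-connected component extraction: centroid, area and bounding
    box for every foreground object in a binary map."""
    visited = np.zeros(binary.shape, dtype=bool)
    h, w = binary.shape
    objects = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                stack, pixels = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:                      # flood-fill one component
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                objects.append({
                    "centroid": (sum(ys) / len(ys), sum(xs) / len(xs)),
                    "area": len(pixels),
                    "bbox": (min(ys), min(xs), max(ys), max(xs)),
                })
    return objects

def screen_static(objects, min_area=2, max_area=50, max_aspect=4.0):
    """Drop line-like objects (aspect-ratio constraint) and off-size
    blobs (area constraint); bounds here are hypothetical examples."""
    kept = []
    for o in objects:
        y0, x0, y1, x1 = o["bbox"]
        hh, ww = y1 - y0 + 1, x1 - x0 + 1
        aspect = max(hh, ww) / min(hh, ww)
        if min_area <= o["area"] <= max_area and aspect <= max_aspect:
            kept.append(o)
    return kept
```

A production pipeline would typically use a library labeling routine instead of the hand-rolled flood fill, but the screening logic is the same.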
Sub-step S1052: primary moving-target candidates that do not also appear in the binary map adjacent in acquisition time are removed as false alarms, finally obtaining the moving-targets in the corresponding binary map.
In the embodiment of the present invention, because a moving-target does not travel far over a short time and generally does not leave the imaging region, the same target object should appear in both of two adjacent-moment images. Target similarity measures such as area difference, gray-value difference and centroid distance are considered together to match the moving-targets of two adjacent moments in turn; a moving-target for which no match is found is treated as a false alarm and removed, finally obtaining the moving-targets. For example, with images 1 and 2 in order of acquisition time, if the primary moving-target candidates in image 1 are targets 1, 2 and 3 and those in image 2 are targets 1, 3 and 4, then target 2 in image 1 is a false alarm and the moving-targets in image 1 are targets 1 and 3; target 4 in image 2 is a false alarm and the moving-targets in image 2 are targets 1 and 3.
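The adjacent-moment matching can be sketched as below. The specific similarity measures and bounds (centroid distance, relative area difference) are illustrative stand-ins for the patent's combined area/gray/centroid criteria, whose exact form and weights are not given:

```python
def match_targets(objs_a, objs_b, max_dist=10.0, max_area_diff=0.5):
    """Keep only targets that find a similar counterpart in the adjacent
    frame; unmatched targets are treated as false alarms and dropped.
    Objects are dicts with 'centroid' (y, x) and 'area' keys."""
    def similar(a, b):
        dy = a["centroid"][0] - b["centroid"][0]
        dx = a["centroid"][1] - b["centroid"][1]
        dist = (dy * dy + dx * dx) ** 0.5
        area_diff = abs(a["area"] - b["area"]) / max(a["area"], b["area"])
        return dist <= max_dist and area_diff <= max_area_diff
    kept_a = [a for a in objs_a if any(similar(a, b) for b in objs_b)]
    kept_b = [b for b in objs_b if any(similar(b, a) for a in objs_a)]
    return kept_a, kept_b
```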
In the embodiment of the present invention, the entire process, from preprocessing the acquired images through the difference operation and false-alarm removal to the final moving-targets, is carried out automatically, which has the following advantages over the prior art:
First, the preprocessing of the acquired images, the difference operation, and the final target detection are all completed automatically before moving-target detection, without manual intervention, effectively improving the real-time performance of automatic moving-target detection.
Second, each difference operation calculates its binarization threshold from the two acquired images participating in the operation, so that the generated binary map reflects the moving-targets in it more accurately, improving the detection rate of moving-target detection.
Third, using the characteristics of moving-targets themselves and the paired appearance of moving-targets at adjacent moments, false alarms are removed from moving-targets that do not also appear in the binary map of the adjacent acquisition time, significantly reducing the false-alarm rate.
Second embodiment
Referring to Fig. 6, Fig. 6 shows a block schematic diagram of a moving-target automatic detection device 200 provided by an embodiment of the present invention. The moving-target automatic detection device 200 is applied to the image processing equipment 100 and comprises an acquisition module 201, a grayscale transformation module 202, a preprocessing module 203, a difference module 204 and a false-alarm removal module 205.
The acquisition module 201 is used to obtain the optical satellite images acquired by the image collector at at least two different moments.
In the embodiment of the present invention, the acquisition module 201 is used to execute step S101.
The grayscale transformation module 202 is used, when the optical satellite images are multi-spectral satellite images, to perform a grayscale transformation on the optical satellite images and then perform super-resolution reconstruction on the transformed images to obtain the images to be registered.
In the embodiment of the present invention, the grayscale transformation module 202 is used to execute step S102.
The preprocessing module 203 is used to take any one frame of the at least two frames of optical satellite images as the benchmark image, and to preprocess the remaining images to be registered, other than the benchmark image, to obtain the corresponding registration images.
In the embodiment of the present invention, the preprocessing module 203 is used to execute step S103 and its sub-steps S1031-S1032.
In the embodiment of the present invention, the preprocessing module 203 is specifically also used to: extract the first feature points of the benchmark image and the second feature points of the correction image; perform feature matching on the first feature points and second feature points to obtain the coordinate transformation parameters between the benchmark image and the correction image; and, taking the benchmark image as the reference, generate from the correction image, according to the coordinate transformation parameters, the registration image in the coordinate system of the benchmark image.
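The coordinate transformation parameters can, for instance, be estimated by least squares from the matched feature points. The sketch below assumes an affine model, which the patent does not mandate; feature extraction and matching themselves (e.g. a SIFT-like detector) are outside this sketch, and the inputs are taken to be already-matched point pairs:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping matched feature points of
    the correction image (src) onto the benchmark image (dst).
    Solves x' = a*x + b*y + tx, y' = c*x + d*y + ty for the 6 params."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)                # interleaved x'0, y'0, x'1, y'1, ...
    A[0::2, 0:2] = src                 # x'-equations
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src                 # y'-equations
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)        # [[a, b, tx], [c, d, ty]]
```

With the estimated parameters, the correction image is resampled into the benchmark image's coordinate system to produce the registration image.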
The difference module 204 is used to perform pairwise difference operations on the benchmark image and all registration images in turn to obtain at least two corresponding binary maps.
In the embodiment of the present invention, the difference module 204 is used to execute step S104 and its sub-steps S1041-S1045.
In the embodiment of the present invention, the difference module 204 is specifically also used to:
obtain, in order of acquisition time, pairs of images adjacent in acquisition time from the benchmark image and all registration images in turn, taking the earlier-acquired image of each adjacent pair as the first image and the later-acquired image as the second image;
or take the latest-acquired image among the benchmark image and all registration images as the first image and the image adjacent to it in acquisition time as the second image;
or perform background reconstruction on the benchmark image and all registration images to obtain a background image, and take the benchmark image and each registration image in turn as the first image with the background image as the second image.
The false-alarm removal module 205 is used to perform object information extraction on each binary map, determine the target objects in each binary map, and perform false-alarm removal on the target objects to obtain the moving-targets in each binary map.
In the embodiment of the present invention, the false-alarm removal module 205 is used to execute step S105 and its sub-steps S1051-S1052.
In conclusion, the present invention provides a moving-target automatic detection method and device. The moving-target automatic detection method is applied to image processing equipment, and the image processing equipment is communicatively connected with an image collector. The method comprises: obtaining optical satellite images acquired by the image collector at at least two different moments; taking any one frame of the at least two frames of optical satellite images as the benchmark image, and preprocessing the remaining images to be registered, other than the benchmark image, to obtain the corresponding registration images; performing pairwise difference operations on the benchmark image and all registration images in turn to obtain at least two corresponding binary maps; performing object information extraction on each binary map, determining the target objects in each binary map, and performing false-alarm removal on the target objects to obtain the moving-targets in each binary map. Compared with the prior art, the embodiment of the present invention automates the whole process of preprocessing the acquired images, performing the difference operation and removing false alarms to finally obtain the moving-targets, effectively improving the real-time performance of automatic moving-target detection.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may also be realized in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show the possible architectures, functions and operations of the devices, methods and computer program products of multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be realized by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form one independent part, each module may exist separately, or two or more modules may be integrated into one independent part.
If the functions are realized in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may be modified and varied in various ways. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should also be noted that similar labels and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
Claims (10)
1. A moving-target automatic detection method, characterized in that it is applied to image processing equipment, the image processing equipment being communicatively connected with an image collector, the method comprising:
obtaining optical satellite images acquired by the image collector at at least two different moments;
taking any one frame of the at least two frames of optical satellite images as a benchmark image, and preprocessing the remaining images to be registered, other than the benchmark image, to obtain corresponding registration images;
performing pairwise difference operations on the benchmark image and all the registration images in turn to obtain at least two corresponding binary maps;
performing object information extraction on each binary map, determining the target objects in each binary map, and performing false-alarm removal on the target objects to obtain the moving-targets in each binary map.
2. The moving-target automatic detection method according to claim 1, characterized in that the step of preprocessing the remaining images to be registered, other than the benchmark image, to obtain corresponding registration images comprises:
performing grayscale correction on the images to be registered according to the gray values of the benchmark image, obtaining correction images whose grayscale response is consistent with the benchmark image;
performing image registration on the correction images with respect to the benchmark image to obtain the corresponding registration images.
3. The moving-target automatic detection method according to claim 2, characterized in that the step of performing image registration on the correction image with respect to the benchmark image to obtain the corresponding registration image comprises:
extracting the first feature points of the benchmark image and the second feature points of the correction image;
performing feature matching on the first feature points and second feature points to obtain the coordinate transformation parameters between the benchmark image and the correction image;
taking the benchmark image as the reference, generating from the correction image, according to the coordinate transformation parameters, the registration image in the coordinate system of the benchmark image.
4. The moving-target automatic detection method according to claim 2, characterized in that, before the step of performing grayscale correction on the images to be registered according to the gray values of the benchmark image, the method further comprises:
when the optical satellite images are multi-spectral satellite images, performing a grayscale transformation on the optical satellite images, and performing super-resolution reconstruction on the transformed images to obtain the images to be registered.
5. The moving-target automatic detection method according to claim 1, characterized in that the step of performing pairwise difference operations on the benchmark image and all the registration images in turn to obtain at least two corresponding binary maps comprises:
obtaining, according to preset rules, a first image and a second image adjacent in acquisition time from the benchmark image and all the registration images;
calculating a binarization threshold according to the gray values of the second image and the first image;
performing a difference operation between the pixels of the first image and the pixels of the second image to obtain a difference result;
when the difference result is greater than or equal to the binarization threshold, setting the corresponding pixel in the binary map corresponding to the first image to a first preset value;
when the difference result is less than the binarization threshold, setting the corresponding pixel in the binary map corresponding to the first image to a second preset value.
6. The moving-target automatic detection method according to claim 5, characterized in that the step of obtaining, according to preset rules, a first image and a second image adjacent in acquisition time from the benchmark image and all the registration images comprises:
obtaining, in order of acquisition time, pairs of images adjacent in acquisition time from the benchmark image and all the registration images in turn, taking the earlier-acquired image of each adjacent pair as the first image and the later-acquired image as the second image; or
taking the latest-acquired image among the benchmark image and all the registration images as the first image, and the image adjacent to the first image in acquisition time as the second image.
7. The moving-target automatic detection method according to claim 6, characterized in that there are at least two registration images, and the method further comprises:
performing background reconstruction according to the benchmark image and all the registration images to obtain a background image;
taking the benchmark image and each registration image in turn as the first image, with the background image as the second image.
8. The moving-target automatic detection method according to claim 1, characterized in that the step of performing false-alarm removal on the target objects to obtain the moving-targets in each binary map comprises:
screening out the static objects in the target objects according to preset screening conditions to obtain primary moving-target candidates;
removing, as false alarms, the primary moving-target candidates that do not also appear in the binary map adjacent in acquisition time, finally obtaining the moving-targets in the corresponding binary map.
9. A moving-target automatic detection device, characterized in that it is applied to image processing equipment, the image processing equipment being communicatively connected with an image collector, the device comprising:
an acquisition module, used to obtain optical satellite images acquired by the image collector at at least two different moments;
a preprocessing module, used to take any one frame of the at least two frames of optical satellite images as a benchmark image, and preprocess the remaining images to be registered, other than the benchmark image, to obtain corresponding registration images;
a difference module, used to perform pairwise difference operations on the benchmark image and all the registration images in turn to obtain at least two corresponding binary maps;
a false-alarm removal module, used to perform object information extraction on each binary map, determine the target objects in each binary map, and perform false-alarm removal on the target objects to obtain the moving-targets in each binary map.
10. The moving-target automatic detection device according to claim 9, characterized in that the preprocessing module is specifically used to:
perform grayscale correction on the images to be registered according to the gray values of the benchmark image, obtaining correction images whose grayscale response is consistent with the benchmark image;
perform image registration on the correction images with respect to the benchmark image to obtain the corresponding registration images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811063853.4A CN109035306B (en) | 2018-09-12 | 2018-09-12 | Moving target automatic detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109035306A true CN109035306A (en) | 2018-12-18 |
CN109035306B CN109035306B (en) | 2020-12-15 |