CN101599175B - Detection method for determining alteration of shooting background and image processing device - Google Patents

Detection method for determining alteration of shooting background and image processing device

Info

Publication number
CN101599175B
CN101599175B CN200910086964.1A CN200910086964A
Authority
CN
China
Prior art keywords
background
image
identified areas
current frame
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200910086964.1A
Other languages
Chinese (zh)
Other versions
CN101599175A (en)
Inventor
邓亚峰 (Deng Yafeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mid Star Technology Co., Ltd.
Original Assignee
Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vimicro Corp filed Critical Vimicro Corp
Priority to CN200910086964.1A priority Critical patent/CN101599175B/en
Publication of CN101599175A publication Critical patent/CN101599175A/en
Application granted granted Critical
Publication of CN101599175B publication Critical patent/CN101599175B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a detection method for determining alteration of a shooting background and an image processing device. The method comprises the following steps: determining a background identification region in a background reference image, the background identification region identifying the imaging region of a feature object of the source background in the background reference image; obtaining a contrast image of the background reference image by using the current frame image; comparing the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result; and detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if so, determining that the shooting background has not changed; otherwise, determining that the shooting background has changed. The technical solution provided by the embodiments of the invention can effectively reduce the probability of falsely detecting whether the shooting background has changed.

Description

Detection method for determining alteration of shooting background and image processing device
Technical field
The present invention relates to the technical field of image processing, and in particular to a detection method for determining alteration of a shooting background and an image processing device.
Background art
In practical applications, to facilitate the processing of images captured by an image capture device such as a camera, a captured image is conventionally divided into two parts: a foreground image and a background image. The background image mainly embodies the background of the shooting scene, which may include stationary objects such as a static wall, an ornament placed at a fixed position, and the like. The foreground image mainly embodies dynamic objects, such as a person walking in front of the camera, a moving vehicle, and the like.
To facilitate image processing, a background image model is usually established on the basis of the captured background image. However, if the background of the scene changes, for example an object in the background is added, or the focal length or mounting position of the camera is changed, the collected background image changes accordingly, and the background image model needs to be updated on the basis of the updated background image.
In other situations, interference from the foreground image often causes false detections: the background has not actually changed, but because of the interference of the foreground the device processor falsely detects that the background image has changed, and consequently has to re-execute the operation of establishing the background image model. This imposes an unnecessary burden on the device processor, increases the energy consumption of the device, and is unfavorable for energy saving.
The prior art takes a corresponding measure to reduce this misjudgment probability, which mainly comprises:
performing a difference operation between a reference background frame and the current frame; if the difference result shows that the current frame differs greatly from the reference background frame, it is determined that the background has changed; otherwise, if the difference result shows that the current frame differs little from the reference background frame, it is determined that the background has not changed.
The prior art can reduce the misjudgment probability to a certain extent, but in the course of realizing the present invention the inventor found that the prior art still has at least the following problem:
when the difference between the reference background frame and the current frame is large but the background actually captured by the image capture device has not changed, the prior art produces a false detection.
Therefore, it is difficult for the prior art to effectively reduce the probability of falsely detecting whether the shooting background has changed.
Summary of the invention
Embodiments of the present invention provide a detection method for determining alteration of a shooting background and an image processing device, so as to solve the technical problem in the prior art that it is difficult to effectively reduce the misjudgment probability of whether the shooting background has changed.
To solve the above technical problem, an embodiment of the present invention provides a detection method for determining alteration of a shooting background, comprising:
determining a background identification region in a background reference image, the background identification region identifying the imaging region, in the background reference image, of a feature object of the source background;
obtaining a contrast image of the background reference image by using a current frame image;
comparing the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result;
detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if it conforms, determining that the shooting background has not changed; otherwise, determining that the shooting background has changed.
Preferably, determining the background identification region in the background reference image comprises:
obtaining, by means of an edge detection technique, a plurality of pixels whose edge intensity values in the background reference image meet a preset requirement;
calculating all connected regions that the plurality of pixels can form;
selecting the background identification region from all the connected regions.
Preferably, extracting from the background reference image, by means of the edge detection technique, the plurality of pixels whose edge intensity values meet the preset condition comprises:
calculating the edge intensity value of each pixel of the background reference image;
filtering the edge intensity value of each pixel to obtain an updated edge intensity value of each pixel;
selecting the pixels whose updated edge intensity values are greater than a preset first threshold as the plurality of pixels.
Preferably, filtering the edge intensity value of each pixel to obtain the updated edge intensity value of each pixel comprises:
updating, according to a preset rule, the edge intensity value of the pixel with the edge intensity value of a specified pixel chosen within a preset filter window, so as to obtain the updated edge intensity value;
the filter window being centred on the pixel and having a preset size.
Preferably, selecting the background identification region from all the connected regions comprises:
selecting a connected region whose area is greater than a preset second threshold as the background identification region.
Preferably, obtaining the contrast image of the background reference image by using the current frame image comprises:
performing a difference operation on the current frame image and an adjacent frame image of the current frame image to obtain a difference image, the difference image comprising a pixel variation region formed by all pixels whose pixel values differ between the current frame image and the adjacent frame image;
using the difference image as the contrast image.
Preferably, comparing the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image comprises:
calculating the intersection region of the region occupied by the background identification region in the background reference image and the region occupied by the pixel variation region in the contrast image;
detecting whether the area occupied by the intersection region meets a preset requirement, the preset requirement comprising: the ratio between the area occupied by the intersection region and the area occupied by the background identification region does not exceed a preset third threshold.
Preferably, the comparison result shows that the ratio does not exceed the third threshold, and detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background comprises: detecting that the shooting background conforms to the source background; or,
the comparison result shows that the ratio exceeds the third threshold, and detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background comprises: detecting that the shooting background does not conform to the source background.
Preferably, detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background comprises:
if the comparison result is that the ratio exceeds the third threshold, comparing the background reference image with the current frame image by means of a block-matching technique, and finding out from the current frame image a matching image that matches the image within the background identification region, wherein the image within the background identification region comprises a plurality of blocks to be matched and the matching image comprises a plurality of matching blocks;
obtaining displacement indication information between the position of each block to be matched in the image within the background identification region and the position of the corresponding matching block in the matching image;
counting, according to the displacement indication information, the number of matching blocks in the matching image whose position deviation from the corresponding block to be matched exceeds a preset fourth threshold, and, if the number of such matching blocks exceeds a preset fifth threshold, determining that the shooting background in the current frame image does not conform to the source background.
To solve the above technical problem, an embodiment of the present invention further provides an image processing device, comprising:
a processing unit, configured to determine a background identification region in a background reference image, the background identification region identifying the imaging region, in the background reference image, of a feature object of the source background;
an acquiring unit, configured to obtain a contrast image of the background reference image by using a current frame image;
a comparison and analysis unit, configured to compare the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result, and to detect, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if it conforms, to determine that the shooting background has not changed; otherwise, to determine that the shooting background has changed.
Preferably, the processing unit comprises:
a computing unit, configured to obtain, by means of an edge detection technique, a plurality of pixels whose edge intensity values in the background reference image meet a preset requirement, and to calculate all connected regions that the plurality of pixels can form;
a selection unit, configured to select the background identification region from all the connected regions.
Preferably, the acquiring unit comprises:
a choosing unit, configured to choose, from captured images, a current frame image and an adjacent frame image of the current frame image;
a difference operation unit, configured to perform a difference operation on the current frame image and the adjacent frame image to calculate a difference image used as the contrast image, the difference image comprising a pixel variation region formed by all pixels whose pixel values differ between the current frame image and the adjacent frame image.
Preferably, the comparison and analysis unit comprises:
an intersection computing unit, configured to calculate the intersection region of the region occupied by the background identification region in the background reference image and the region occupied by the pixel variation region in the contrast image;
a detecting unit, configured to detect whether the area occupied by the intersection region meets a preset requirement, the preset requirement comprising: the ratio between the area occupied by the intersection region and the area occupied by the background identification region does not exceed a preset third threshold;
an analysis processing unit, configured to analyse, according to the comparison result obtained by the detecting unit, whether the shooting background has changed.
Preferably, the analysis processing unit comprises:
a matching unit, configured to, when the comparison result is that the ratio exceeds the third threshold, compare the background reference image with the current frame image by means of a block-matching technique and find out from the current frame image a matching image that matches the image within the background identification region, wherein the image within the background identification region comprises a plurality of blocks to be matched and the matching image comprises a plurality of matching blocks;
a displacement computing unit, configured to obtain displacement indication information between the position of each block to be matched in the image within the background identification region and the position of the corresponding matching block in the matching image;
a statistical analysis unit, configured to count, according to the displacement indication information, the number of matching blocks in the matching image whose position deviation from the corresponding block to be matched exceeds a preset fourth threshold, and, if the number of such matching blocks exceeds a preset fifth threshold, to determine that the shooting background in the current frame image does not conform to the source background.
In the technical solution provided by the embodiments of the present invention, a background identification region identifying the imaging region of a feature object of the source background is determined, and the image within the background identification region is compared with the image in the corresponding region of a contrast image obtained on the basis of the current frame image, so as to detect whether the shooting background embodied in the captured image conforms to the source background embodied in the background reference image. The embodiments of the present invention thus adopt a targeted comparison: only the images within specific regions of the different images are compared, namely the image within the background identification region of the background reference image and the image in the corresponding region of the contrast image. Compared with the prior art, this improves the precision of the comparison range in the image comparison processing, and can therefore effectively reduce the misjudgment probability and improve the detection accuracy.
Further, the terminal processor can determine, on the basis of the correct detection result, whether the background image model needs to be re-established, which effectively reduces the number of repeated operations performed by the terminal processor, stabilizes the performance of the terminal processor, and effectively reduces energy consumption.
Brief description of the drawings
Fig. 1 is a flowchart of a detection method for determining alteration of a shooting background in an embodiment of the present invention;
Fig. 2 is a schematic diagram of a background identification region in an embodiment of the present invention;
Fig. 3 is a flowchart of determining a background identification region in an embodiment of the present invention;
Fig. 4 is a schematic diagram of connected regions in an embodiment of the present invention;
Fig. 5 is a flowchart of determining edge points in an embodiment of the present invention;
Fig. 6 is a schematic diagram of a filter window in an embodiment of the present invention;
Fig. 7 is a flowchart of determining whether a shooting background has changed in an embodiment of the present invention;
Fig. 8 is a schematic diagram of the region occupied by a pixel variation region in a contrast image in an embodiment of the present invention;
Fig. 9 is a schematic diagram of an intersection region in an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an image processing device in an embodiment of the present invention;
Fig. 11 is another schematic structural diagram of an image processing device in an embodiment of the present invention.
Detailed description of the embodiments
The prior art adopts the technical means of directly comparing the reference background frame with the current frame to detect whether the background shot by the image capture device has changed. Because this technical means is simple, the probability that the detection result is wrong is relatively high.
In the embodiments of the present invention, in order to detect whether the shot background has changed, the reference background frame is not merely compared with the current frame; instead, a background identification region in the background reference image is determined, and whether the background has changed is determined on the basis of this background identification region and the background reference image.
The technical solutions of the embodiments of the present invention are described in detail below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 1, Fig. 1 is a flowchart of a detection method for determining alteration of a shooting background in an embodiment of the present invention. The flow may comprise the following steps:
Step 101: determine a background identification region in a background reference image, the background identification region identifying the imaging region, in the background reference image, of a feature object of the source background.
Specifically, in the embodiments of the present invention, the image of the imaging region embodies the image information of a region surrounded by pixels whose pixel values change greatly compared with neighbouring pixels in the background reference image. For example, if the shot source background is a wall within a certain space and a potted flower placed in front of the wall, the pixel values of the edge pixels of the imaging regions of the potted flower and of its shadow will generally show an abrupt change compared with the pixel values of pixels in other regions of the background reference image. The potted flower and its shadow can be regarded as feature objects of the source background, and the imaging regions of the potted flower and the shadow can serve as the background identification region.
Referring to Fig. 2, Fig. 2 is a schematic diagram of a background identification region in an embodiment of the present invention. In Fig. 2, the background identification region comprises three sub-regions, indicated by A, B and C respectively.
Step 102: obtain a contrast image of the background reference image by using a current frame image.
Step 103: compare the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result.
Step 104: detect, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if it conforms, determine that the shooting background has not changed; otherwise, determine that the shooting background has changed.
In practical applications, if the detection result is that the shooting background has changed, a prompt that the background model needs to be updated can be given as required.
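The overall flow of Fig. 1 can be sketched as follows. This is a minimal illustration in which find_background_id_regions, build_contrast_image and compare_regions are hypothetical helper names standing for steps 101 to 104; possible realizations of these steps are sketched later in this description.

```python
def background_changed(background_ref, prev_frame, cur_frame) -> bool:
    """Return True if the shooting background in cur_frame no longer
    conforms to the source background embodied by background_ref."""
    # Step 101: determine the background identification region(s)
    id_region = find_background_id_regions(background_ref)        # hypothetical helper
    # Step 102: obtain the contrast image of the background reference image
    contrast = build_contrast_image(cur_frame, prev_frame)        # hypothetical helper
    # Steps 103-104: compare the identification region with the corresponding
    # region of the contrast image and decide whether the background conforms
    return not compare_regions(background_ref, contrast, id_region)  # hypothetical helper
```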
Referring to Fig. 3, Fig. 3 is a flowchart of determining a background identification region in an embodiment of the present invention. The flow may comprise the following steps:
Step 301: determine, by means of an edge detection technique, a plurality of pixels in the background reference image whose edge intensity values meet a preset requirement; these pixels are called edge points.
In the embodiments of the present invention, the background identification region is determined by means of an edge detection technique, so that the image located within the background identification region can subsequently be analysed on different frame images. In the prior art, edge detection techniques are generally used to extract image edges, such as the edge of a portrait in a digital photograph, so that the extracted portrait can be further processed.
In the embodiments of the present invention, to determine the edge points, an edge detection technique may be adopted to calculate the edge intensity value of each pixel of the background reference image. This comprises: applying an edge operator to extract the horizontal edge and the vertical edge at each pixel of the background reference image, and calculating the sum of the squares of the horizontal edge and the vertical edge; this sum of squares can be used as the edge intensity value of the pixel. The edge operator can be defined in many ways; the Prewitt operator, the Sobel operator or the Canny operator may be adopted. In general, the computational load of the Prewitt operator is relatively small, while that of the Sobel and Canny operators is relatively large.
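The edge-intensity computation of step 301 can be sketched as follows. This is a minimal illustrative sketch assuming OpenCV and NumPy, with the Sobel operator chosen as one of the operators named above; the function name and kernel size are illustrative only.

```python
import cv2
import numpy as np

def edge_intensity(background_ref_gray: np.ndarray) -> np.ndarray:
    """Edge intensity of every pixel of the background reference image:
    squared horizontal edge plus squared vertical edge (step 301)."""
    gx = cv2.Sobel(background_ref_gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal edge
    gy = cv2.Sobel(background_ref_gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical edge
    return gx * gx + gy * gy
```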
Then, on the basis of the calculated edge intensity value of each pixel, the edge points whose edge intensity values meet the preset requirement can be selected.
Step 302: obtain all connected regions that the edge points can form.
Because the edge points are discrete, in order to finally determine the background identification region, the discrete edge points need to be connected so as to outline connected regions one by one. Specifically, in the embodiments of the present invention, a connected-component analysis technique may be adopted to obtain the connected regions on the basis of the discrete edge points.
Referring to Fig. 4, Fig. 4 is a schematic diagram of connected regions in an embodiment of the present invention. In Fig. 4, the edge points form four connected regions, indicated by A, B, C and D respectively.
Step 303: select the background identification region from all the connected regions.
In the embodiments of the present invention, it is taken into account that the connected regions calculated from the edge points may not all be imaging regions of feature objects, and some may arise from calculation errors. Therefore, in order to reduce errors as far as possible, not all connected regions are directly used as the background identification region; instead, the connected regions meeting a preset requirement are selected from the plurality of connected regions as the background identification region. For example, the connected regions whose area is greater than a preset threshold T1 are selected as the background identification region.
As shown in Fig. 4, in the embodiments of the present invention, the three connected regions A, B and C whose areas meet the requirement, i.e. the three sub-regions A, B and C in Fig. 2, are selected as the effective background identification region, while the region D, whose area is small, is not used as part of the background identification region.
In practical applications, suitable connected regions can also be chosen as the background identification region according to the actual situation.
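A minimal sketch of steps 302 and 303 follows, assuming OpenCV connected-component analysis on a binary edge-point mask; the area threshold T1 is the one named above, and its default value here is only a placeholder.

```python
import cv2
import numpy as np

def select_id_regions(edge_point_mask: np.ndarray, area_threshold_t1: int = 200) -> np.ndarray:
    """Group the discrete edge points into connected regions (step 302) and
    keep only those whose area exceeds T1 (step 303); returns a binary mask
    of the selected background identification region."""
    mask = (edge_point_mask > 0).astype(np.uint8)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    keep = np.zeros_like(mask)
    for label in range(1, num):                      # label 0 is the image background
        if stats[label, cv2.CC_STAT_AREA] > area_threshold_t1:
            keep[labels == label] = 1
    return keep
```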
An embodiment of determining the edge points in the above step 301 is enumerated below. Referring to Fig. 5, Fig. 5 is a flowchart of determining edge points in an embodiment of the present invention. The flow may comprise the following steps:
Step 501: calculate the edge intensity value of each pixel of the background reference image; the data comprising the edge intensity values of all pixels is called an edge intensity image.
Step 502: filter the edge intensity image by using a preset filter window, updating the edge intensity value of each pixel to obtain an updated edge intensity value of each pixel.
In practical applications, the pixels whose edge intensity values lie within a preset range could be selected directly as edge points, for example the pixels whose edge intensity values exceed a preset threshold T2. However, the edge points selected in this way may not be well representative.
To make the selected edge points more suitable, in the embodiments of the present invention the edge intensity image is further filtered so as to update the edge intensity value of each pixel. Specifically, a rank filter (rank-filter) is adopted, and the filtering comprises:
presetting a filter window, whose size may be set to RW*RH;
taking each pixel in turn as the centre of a filter window (see Fig. 6, which is a schematic diagram of the filter window in an embodiment of the present invention), and selecting, according to a preset rule, the edge intensity value of a specified pixel within the filter window to update the edge intensity value of the pixel at the centre of the window. Concretely, the rank-filter filtering may comprise:
sorting the edge intensity values of the pixels contained in a filter window;
selecting the specified pixel according to the value of a preselected filtering parameter r, where the parameter r embodies the preset rule and its value may lie between 0 and 1. For example, taking r=0.8 means that the selected edge intensity value is greater than 80% of the edge intensity values in the filter window. For instance, if a filter window contains the edge intensity values of five pixels, sorted in ascending order as 1, 1.1, 2, 4, 4.3, and r is set to 0.8, the edge intensity value 4 is selected as the updated edge intensity value of the central pixel.
The above rank-filter filtering is applied to update the edge intensity value of each pixel.
Based on the above rank-filter processing, the scatter of the pixels whose pixel values change abruptly at the edges of the imaging regions of feature objects can be made relatively concentrated, so that edge points with relatively concentrated positions can subsequently be selected. In practical applications, other filtering techniques can also be adopted to update the edge intensity value of each pixel.
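A minimal sketch of the rank filtering of step 502 follows, assuming a rectangular RW*RH window and using scipy.ndimage.rank_filter; the rank index is chosen so that the five-value example above (r=0.8 selects the value 4) is reproduced, and the default window size is only a placeholder.

```python
import numpy as np
from scipy.ndimage import rank_filter

def rank_filtered_edges(edge_intensity_img: np.ndarray,
                        rw: int = 5, rh: int = 5, r: float = 0.8) -> np.ndarray:
    """Replace each pixel's edge intensity with an order statistic of the
    RW*RH filter window centred on it (step 502)."""
    window_size = rw * rh
    # 0-based index of the chosen order statistic within the sorted window
    rank = min(max(int(r * window_size) - 1, 0), window_size - 1)
    return rank_filter(edge_intensity_img, rank=rank, size=(rh, rw))
```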
Step 503: on the basis of the updated edge intensity value of each pixel, select the pixels whose updated edge intensity values lie within a preset range as edge points.
In this step 503, the preset range comprises: the range of values greater than a preset threshold T3.
The flow shown in Fig. 5 then ends. The Fig. 5 flow enumerates one specific embodiment of choosing the edge points; in practical applications, other ways can also be adopted to select the edge points that delimit the imaging regions of feature objects from the other regions.
On the basis of the determined background identification region, the embodiments of the present invention further analyse whether the target background embodied by other frame images conforms to the source background embodied by the background reference image.
Referring to Fig. 7, Fig. 7 is a flowchart of determining whether the shooting background has changed in an embodiment of the present invention. The flow may comprise the following steps:
Step 701: using the captured images, obtain a contrast image to be compared against the source image in the background identification region, wherein the captured images taken are the current image and its adjacent frame image.
In this step 701, specifically, a difference operation can be performed on the current image and the adjacent frame image to calculate the difference image of the two frame images, and this difference image is used as the contrast image. The contrast image can embody the pixel variation region formed by all pixels whose pixel values differ between the two frame images.
The concrete difference operation may comprise: calculating the absolute value of the difference of the luminance components of corresponding pixels in the two adjacent frame images and retaining the pixels whose absolute value exceeds a preset value; or calculating the absolute values of the differences of the three RGB (red, green, blue) colour components of corresponding pixels in the two adjacent frame images and retaining the pixels whose absolute values exceed a preset value. In practical applications, other ways can also be adopted to calculate the contrast image.
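A minimal sketch of the luminance-based difference operation of step 701 follows, assuming grayscale (luminance) frames as input; the difference threshold is a placeholder.

```python
import cv2
import numpy as np

def contrast_image(cur_frame_gray: np.ndarray, prev_frame_gray: np.ndarray,
                   diff_threshold: int = 25) -> np.ndarray:
    """Binary pixel variation region (step 701): pixels whose absolute
    luminance difference between the current frame and its adjacent frame
    exceeds the preset value."""
    diff = cv2.absdiff(cur_frame_gray, prev_frame_gray)
    return (diff > diff_threshold).astype(np.uint8)
```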
Referring to Fig. 8, Fig. 8 is a schematic diagram of the region occupied by the pixel variation region in the contrast image in an embodiment of the present invention. In Fig. 8, the pixel variation region is indicated by E.
Step 702: obtain the intersection region of the pixel variation region in the contrast image and the background identification region in the background reference image.
In practical applications, the specific way of obtaining the intersection region may comprise:
binarizing the background reference image in advance, setting the binary value of pixels within the background identification region to "1" and the binary value of pixels outside the background identification region to "0", to obtain a binarized background reference image;
after the contrast image is obtained, binarizing the contrast image, setting the binary value of pixels within the pixel variation region to "1" and the binary value of pixels outside the pixel variation region to "0", to obtain a binarized contrast image;
performing a logical operation on the binarized contrast image and the binarized background reference image; in the embodiments of the present invention, given the above setting of the binary values, the logical operation is specifically an AND operation;
the region whose resulting binary value is "1" is the above-mentioned intersection.
In practical applications, the binary value of each pixel can also be set as required, together with a corresponding logical operation algorithm, to calculate the above-mentioned intersection.
It should also be noted that, in the embodiments of the present invention, the comparison between the background identification region and the corresponding region of the contrast image is realized by taking the intersection of the pixel variation region and the background identification region, which is more convenient to calculate. In practical applications, the image within the background identification region can also be compared directly with the image in the corresponding region of the contrast image.
Referring to Fig. 9, Fig. 9 is a schematic diagram of the intersection region in an embodiment of the present invention. In Fig. 9, the intersection of the binarized contrast image and the binarized background reference image comprises two regions, indicated by ae and ce respectively.
Step 703: detect whether the area occupied by the intersection region meets a preset requirement; if it does, determine that the shooting background has not changed and execute step 704; otherwise, execute step 705.
In the embodiments of the present invention, the preset requirement comprises: the ratio of the intersection area to the area occupied by the background identification region does not exceed a preset threshold T4.
The sum of the areas occupied by the calculated regions ae and ce is divided by the area occupied by the background identification region, and the resulting ratio is compared with T4; if it does not exceed T4, it can be determined that the shooting background has not changed.
In practical applications, the preset requirement can also be set as: comparing the intersection area directly with a preset threshold T5, and determining that the shooting background has not changed if it does not exceed T5.
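Steps 702 and 703 can be sketched as follows, assuming both inputs are binary masks as described above ("1" inside the background identification region and inside the pixel variation region respectively); T4 is the ratio threshold named above and its default value is only a placeholder.

```python
import numpy as np

def background_unchanged(id_region_mask: np.ndarray, variation_mask: np.ndarray,
                         ratio_threshold_t4: float = 0.3) -> bool:
    """AND the binarized background reference image with the binarized
    contrast image to obtain the intersection (step 702), then compare the
    ratio of intersection area to identification-region area with T4 (step 703)."""
    intersection = np.logical_and(id_region_mask > 0, variation_mask > 0)
    ratio = intersection.sum() / max(int((id_region_mask > 0).sum()), 1)
    return ratio <= ratio_threshold_t4   # True: the shooting background has not changed
```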
Step 704: continue to monitor the next frame image, and return to execute step 701.
Step 705: compare the background reference image with the current image by means of a block-matching technique, and find out from the current image a matching image that matches the source image in the background reference image, wherein the source image comprises a plurality of blocks to be matched and the matching image comprises a plurality of matching blocks.
In practical applications, it would be possible to determine directly that the shot background has changed after judging in the above step 703 that the intersection area does not meet the preset requirement. However, it may happen that the sum of the areas of ae and ce is large while the real background has not changed, for example when a foreground object is close to the camera and the area of the background region covered by the foreground image is large. In such a situation, judging whether the background has changed directly on the basis of the intersection area may lead to the misjudgment that the background has changed when in fact it has not.
In the embodiments of the present invention, in order to reduce the misjudgment probability as far as possible, a further innovative measure is proposed: an optical-flow analysis technique is further utilized to analyse whether the large intersection area is caused by a change of the shooting background or by the interference of the foreground image.
The source image is the imaging of the background identification region in the background reference image. In this step 705, specifically, on the basis of a preset block-matching strategy, the image to be matched in the background reference image and the current image can be divided into blocks of size RW*RH, and, on the basis of the existing block-matching technique, the matching block in the current image that matches each block to be matched in the background reference image is found. It should be noted that, in the embodiments of the present invention, "matching" does not mean that the pixel value of each pixel in the matching block is identical to that of the corresponding block to be matched, but that, compared with the pixel values of the other pixels in the matching window, the pixel values of the pixels in the matching block differ the least from those of the pixels in the block to be matched.
Step 706: obtain the displacement indication information between the position of each block to be matched in the source image and the position of the corresponding matching block in the matching image.
In this step 706, specifically, for each block to be matched in the source image, a displacement vector can be drawn with the position of the block to be matched as the starting point and the position of the corresponding matching block in the matching image as the end point; this displacement vector can be used to represent the above-mentioned displacement indication information.
In this way the displacement vector of each block to be matched in the source image is obtained.
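A minimal sketch of steps 705 and 706 follows, assuming OpenCV template matching as one possible block-matching technique and restricting the search to a neighbourhood around each block; the block list, block size and search radius are illustrative assumptions rather than values prescribed by the text.

```python
import cv2
import numpy as np

def block_displacements(background_ref_gray: np.ndarray, cur_frame_gray: np.ndarray,
                        blocks, search: int = 16):
    """For each block (x, y, w, h) taken from the background identification
    region of the background reference image, find the best-matching block in
    the current frame (step 705) and return its displacement vector (step 706)."""
    vectors = []
    frame_h, frame_w = cur_frame_gray.shape
    for (x, y, w, h) in blocks:
        template = background_ref_gray[y:y + h, x:x + w]
        # search window around the original block position
        x0, y0 = max(x - search, 0), max(y - search, 0)
        x1, y1 = min(x + w + search, frame_w), min(y + h + search, frame_h)
        scores = cv2.matchTemplate(cur_frame_gray[y0:y1, x0:x1], template, cv2.TM_SQDIFF)
        _, _, min_loc, _ = cv2.minMaxLoc(scores)           # best match minimizes SQDIFF
        matched_x, matched_y = x0 + min_loc[0], y0 + min_loc[1]
        vectors.append((matched_x - x, matched_y - y))      # displacement vector
    return vectors
```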
Step 707: determine, according to the obtained displacement indication information, whether the shooting background has changed; if it has not changed, return to execute step 704; if it has changed, execute step 708.
In this step 707, the operation of determining whether the shooting background has changed may specifically comprise:
for each sub-region of the background identification region, finding the blocks to be matched in the sub-region whose displacement magnitude exceeds a preset threshold T6; for convenience of statement, a block to be matched in a sub-region whose displacement magnitude exceeds T6 is denoted G. If the number of G blocks is large enough to meet a preset requirement, for example the number of G blocks exceeds a preset threshold T7, or the ratio of the number of G blocks to the total number SM of blocks to be matched in the sub-region exceeds a preset threshold T8, it can be determined that the image change within the sub-region is large; otherwise, if the number of G blocks is small, i.e. it does not exceed T7, or its ratio to SM does not exceed T8, it can be determined that the image change within the sub-region is small;
further, counting the number SN of sub-regions whose image change is large: if SN is large enough to meet a preset condition, for example SN exceeds a preset threshold T9, or the ratio of SN to the total number SUM of sub-regions in the background identification region exceeds a preset threshold T10, and the variance of the horizontal components of the displacement vectors within the regions is less than a threshold and the variance of the vertical components is less than a threshold, it can be determined that the image in the background identification region has changed, in other words that the background has changed; otherwise, if SN is small, i.e. it does not exceed T9, or its ratio to SUM does not exceed T10, it can be determined that the background has not changed.
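A minimal sketch of the statistical decision of step 707 follows, assuming the displacement vectors are grouped per sub-region of the background identification region; only the ratio-based alternatives (T8 and T10) are shown, T6 and the variance bound are the thresholds named above, and all default values are placeholders.

```python
import numpy as np

def background_changed_by_flow(vectors_per_subregion,
                               t6: float = 4.0, t8: float = 0.5,
                               t10: float = 0.5, var_bound: float = 2.0) -> bool:
    """Per sub-region, count blocks whose displacement magnitude exceeds T6;
    a sub-region has changed greatly if their ratio exceeds T8; the background
    is deemed changed if the ratio of greatly changed sub-regions exceeds T10
    and the displacement components have small variance (step 707)."""
    changed, kept_vectors = 0, []
    for vectors in vectors_per_subregion:
        v = np.asarray(vectors, dtype=float)
        if len(v) == 0:
            continue
        large = np.hypot(v[:, 0], v[:, 1]) > t6
        if large.sum() / len(v) > t8:
            changed += 1
            kept_vectors.append(v[large])
    if not kept_vectors or changed / len(vectors_per_subregion) <= t10:
        return False
    v = np.vstack(kept_vectors)
    return v[:, 0].var() < var_bound and v[:, 1].var() < var_bound
```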
Step 708: issue warning information to warn that the shooting background has changed.
In practical applications, it can be arranged that after the warning information is issued, a background model updating function is called to perform the operation of updating the background model. The concrete background model updating operation can be realized on the basis of kernel density estimation, which is not repeated in this application.
In addition, in practical applications, a captured frame image can also be used directly as the contrast image and compared with the background reference image, in which case the comparison focuses on the source image in the background identification region and the corresponding region occupied by the source image in the contrast image, and whether the shooting background has changed is determined according to the comparison result.
Referring to Fig. 10, Fig. 10 is a schematic structural diagram of an image processing device in an embodiment of the present invention. In Fig. 10, the device 1000 comprises:
a processing unit 1001, configured to determine a background identification region in a background reference image, the background identification region identifying the imaging region, in the background reference image, of a feature object of the source background;
an acquiring unit 1002, configured to obtain a contrast image of the background reference image by using a current frame image;
a comparison and analysis unit 1003, configured to compare the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result, and to detect, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if it conforms, to determine that the shooting background has not changed; otherwise, to determine that the shooting background has changed.
Referring to Fig. 11, Fig. 11 is another schematic structural diagram of an image processing device in an embodiment of the present invention. In Fig. 11, the processing unit 1001 comprises:
a computing unit 1101, configured to obtain, by means of an edge detection technique, a plurality of pixels whose edge intensity values in the background reference image meet a preset requirement, and to calculate all connected regions that the plurality of pixels can form;
a selection unit 1102, configured to select the background identification region from all the connected regions.
The acquiring unit 1002 comprises:
a choosing unit 1103, configured to choose, from captured images, a current frame image and an adjacent frame image of the current frame image;
a difference operation unit 1104, configured to perform a difference operation on the current frame image and the adjacent frame image to calculate a difference image used as the contrast image, the difference image comprising a pixel variation region formed by all pixels whose pixel values differ between the current frame image and the adjacent frame image.
The comparison and analysis unit 1003 comprises:
an intersection computing unit 1105, configured to calculate the intersection region of the region occupied by the background identification region in the background reference image and the region occupied by the pixel variation region in the contrast image;
a detecting unit 1106, configured to detect whether the area occupied by the intersection region meets a preset requirement, the preset requirement comprising: the ratio between the area occupied by the intersection region and the area occupied by the background identification region does not exceed a preset third threshold;
an analysis processing unit 1107, configured to analyse, according to the comparison result obtained by the detecting unit, whether the shooting background has changed.
The analysis processing unit 1107 may comprise (not shown in Fig. 11):
a matching unit, configured to, when the comparison result is that the ratio exceeds the third threshold, compare the background reference image with the current frame image by means of a block-matching technique and find out from the current frame image a matching image that matches the image within the background identification region, wherein the image within the background identification region comprises a plurality of blocks to be matched and the matching image comprises a plurality of matching blocks;
a displacement computing unit, configured to obtain displacement indication information between the position of each block to be matched in the image within the background identification region and the position of the corresponding matching block in the matching image;
a statistical analysis unit, configured to count, according to the displacement indication information, the number of matching blocks in the matching image whose position deviation from the corresponding block to be matched exceeds a preset fourth threshold, and, if the number of such matching blocks exceeds a preset fifth threshold, to determine that the shooting background in the current frame image does not conform to the source background.
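The unit structure of Figs. 10-11 can be sketched as a composition of the per-step helpers illustrated earlier in this description; the class name, parameter names and default threshold are purely illustrative assumptions.

```python
class ImageProcessingDevice:
    """Composes the processing unit 1001, acquiring unit 1002 and
    comparison and analysis unit 1003 of Fig. 10."""

    def __init__(self, background_ref_gray, edge_threshold_t3: float = 1000.0):
        # processing unit 1001 (computing unit 1101 + selection unit 1102)
        edges = rank_filtered_edges(edge_intensity(background_ref_gray))
        self.id_region_mask = select_id_regions(edges > edge_threshold_t3)

    def detect(self, prev_frame_gray, cur_frame_gray) -> bool:
        """Return True if the shooting background is detected as changed."""
        # acquiring unit 1002 (choosing unit 1103 + difference operation unit 1104)
        variation = contrast_image(cur_frame_gray, prev_frame_gray)
        # comparison and analysis unit 1003 (units 1105-1107)
        return not background_unchanged(self.id_region_mask, variation)
```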
The above are merely preferred embodiments of the present invention. It should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (14)

1. A detection method for determining alteration of a shooting background, characterized by comprising:
determining a background identification region in a background reference image, the background identification region identifying the imaging region, in the background reference image, of a feature object of the source background;
obtaining a contrast image of the background reference image by using a current frame image;
comparing the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result;
detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if it conforms, determining that the shooting background has not changed; otherwise, determining that the shooting background has changed.
2. The method according to claim 1, characterized in that determining the background identification region in the background reference image comprises:
obtaining, by means of an edge detection technique, a plurality of pixels whose edge intensity values in the background reference image meet a preset requirement;
calculating all connected regions that the plurality of pixels can form;
selecting the background identification region from all the connected regions.
3. The method according to claim 2, characterized in that extracting from the background reference image, by means of the edge detection technique, the plurality of pixels whose edge intensity values meet the preset condition comprises:
calculating the edge intensity value of each pixel of the background reference image;
filtering the edge intensity value of each pixel to obtain an updated edge intensity value of each pixel;
selecting the pixels whose updated edge intensity values are greater than a preset first threshold as the plurality of pixels.
4. The method according to claim 3, characterized in that filtering the edge intensity value of each pixel to obtain the updated edge intensity value of each pixel comprises:
updating, according to a preset rule, the edge intensity value of the pixel with the edge intensity value of a specified pixel chosen within a preset filter window, so as to obtain the updated edge intensity value;
the filter window being centred on the pixel and having a preset size.
5. The method according to claim 2, characterized in that selecting the background identification region from all the connected regions comprises:
selecting a connected region whose area is greater than a preset second threshold as the background identification region.
6. The method according to claim 1, characterized in that obtaining the contrast image of the background reference image by using the current frame image comprises:
performing a difference operation on the current frame image and an adjacent frame image of the current frame image to obtain a difference image, the difference image comprising a pixel variation region formed by all pixels whose pixel values differ between the current frame image and the adjacent frame image;
using the difference image as the contrast image.
7. The method according to claim 6, characterized in that comparing the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image comprises:
calculating the intersection region of the region occupied by the background identification region in the background reference image and the region occupied by the pixel variation region in the contrast image;
detecting whether the area occupied by the intersection region meets a preset requirement, the preset requirement comprising: the ratio between the area occupied by the intersection region and the area occupied by the background identification region does not exceed a preset third threshold.
8. The method according to claim 7, characterized in that
the comparison result shows that the ratio does not exceed the third threshold, and detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background comprises: detecting that the shooting background conforms to the source background; or,
the comparison result shows that the ratio exceeds the third threshold, and detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background comprises: detecting that the shooting background does not conform to the source background.
9. The method according to claim 7, characterized in that detecting, according to the comparison result, whether the shooting background in the current frame image conforms to the source background comprises:
if the comparison result is that the ratio exceeds the third threshold, comparing the background reference image with the current frame image by means of a block-matching technique, and finding out from the current frame image a matching image that matches the image within the background identification region, wherein the image within the background identification region comprises a plurality of blocks to be matched and the matching image comprises a plurality of matching blocks;
obtaining displacement indication information between the position of each block to be matched in the image within the background identification region and the position of the corresponding matching block in the matching image;
counting, according to the displacement indication information, the number of matching blocks in the matching image whose position deviation from the corresponding block to be matched exceeds a preset fourth threshold, and, if the number of such matching blocks exceeds a preset fifth threshold, determining that the shooting background in the current frame image does not conform to the source background.
10. An image processing device, characterized by comprising:
a processing unit, configured to determine a background identification region in a background reference image, the background identification region identifying the imaging region, in the background reference image, of a feature object of the source background;
an acquiring unit, configured to obtain a contrast image of the background reference image by using a current frame image;
a comparison and analysis unit, configured to compare the image within the background identification region of the background reference image with the image in the corresponding region of the contrast image to obtain a comparison result, and to detect, according to the comparison result, whether the shooting background in the current frame image conforms to the source background; if it conforms, to determine that the shooting background has not changed; otherwise, to determine that the shooting background has changed.
11. The device according to claim 10, characterized in that the processing unit comprises:
a computing unit, configured to obtain, by means of an edge detection technique, a plurality of pixels whose edge intensity values in the background reference image meet a preset requirement, and to calculate all connected regions that the plurality of pixels can form;
a selection unit, configured to select the background identification region from all the connected regions.
12. The device according to claim 10, characterized in that the acquiring unit comprises:
a choosing unit, configured to choose, from captured images, a current frame image and an adjacent frame image of the current frame image;
a difference operation unit, configured to perform a difference operation on the current frame image and the adjacent frame image to calculate a difference image used as the contrast image, the difference image comprising a pixel variation region formed by all pixels whose pixel values differ between the current frame image and the adjacent frame image.
13. The device according to claim 12, characterized in that the comparison and analysis unit comprises:
an intersection computing unit, configured to calculate the intersection region of the region occupied by the background identification region in the background reference image and the region occupied by the pixel variation region in the contrast image;
a detecting unit, configured to detect whether the area occupied by the intersection region meets a preset requirement, the preset requirement comprising: the ratio between the area occupied by the intersection region and the area occupied by the background identification region does not exceed a preset third threshold;
an analysis processing unit, configured to analyse, according to the comparison result obtained by the detecting unit, whether the shooting background has changed.
14. The device according to claim 13, characterized in that the analysis processing unit comprises:
a matching unit, configured to, when the comparison result is that the ratio exceeds the third threshold, compare the background reference image with the current frame image by means of a block-matching technique and find out from the current frame image a matching image that matches the image within the background identification region, wherein the image within the background identification region comprises a plurality of blocks to be matched and the matching image comprises a plurality of matching blocks;
a displacement computing unit, configured to obtain displacement indication information between the position of each block to be matched in the image within the background identification region and the position of the corresponding matching block in the matching image;
a statistical analysis unit, configured to count, according to the displacement indication information, the number of matching blocks in the matching image whose position deviation from the corresponding block to be matched exceeds a preset fourth threshold, and, if the number of such matching blocks exceeds a preset fifth threshold, to determine that the shooting background in the current frame image does not conform to the source background.
CN200910086964.1A 2009-06-11 2009-06-11 Detection method for determining alteration of shooting background and image processing device Active CN101599175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910086964.1A CN101599175B (en) 2009-06-11 2009-06-11 Detection method for determining alteration of shooting background and image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910086964.1A CN101599175B (en) 2009-06-11 2009-06-11 Detection method for determining alteration of shooting background and image processing device

Publications (2)

Publication Number Publication Date
CN101599175A CN101599175A (en) 2009-12-09
CN101599175B true CN101599175B (en) 2014-04-23

Family

ID=41420608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910086964.1A Active CN101599175B (en) 2009-06-11 2009-06-11 Detection method for determining alteration of shooting background and image processing device

Country Status (1)

Country Link
CN (1) CN101599175B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254396B (en) * 2011-07-06 2014-06-04 通号通信信息集团有限公司 Intrusion detection method and device based on video
CN103090796B (en) * 2011-11-01 2017-03-15 北京航天发射技术研究所 Rocket beat, the measuring system of sedimentation and method
CN107992198B (en) * 2013-02-06 2021-01-05 原相科技股份有限公司 Optical pointing system
CN103327249B (en) * 2013-06-19 2016-12-28 小米科技有限责任公司 Photographic method, device and equipment
CN104475344B (en) * 2014-11-04 2017-02-15 上海维宏电子科技股份有限公司 Method for realizing sorting of textile bobbins based on machine vision
CN104751444A (en) * 2014-12-24 2015-07-01 张�林 Method and device for screening significant differences from similar images
CN104657997B (en) * 2015-02-28 2018-01-09 北京格灵深瞳信息技术有限公司 A kind of lens shift detection method and device
CN107784844B (en) * 2016-08-31 2021-05-14 百度在线网络技术(北京)有限公司 Intelligent traffic signal lamp system and road environment detection method thereof
CN107404419B (en) * 2017-08-01 2020-09-01 南京华苏科技有限公司 Network coverage performance test anti-false test method and device based on picture or video
CN110674676B (en) * 2019-08-02 2022-03-29 杭州电子科技大学 Road confidence estimation fuzzy frame method based on semantic segmentation
CN111783771B (en) * 2020-06-12 2024-03-19 北京达佳互联信息技术有限公司 Text detection method, text detection device, electronic equipment and storage medium
CN112070113B (en) * 2020-07-28 2024-03-26 北京旷视科技有限公司 Imaging scene change judging method and device, electronic equipment and readable storage medium
CN113344496A (en) * 2021-06-16 2021-09-03 国家珠宝检测中心(广东)有限责任公司 Method and system for multi-strategy jewelry identification
CN113936242B (en) * 2021-12-14 2022-03-11 苏州浪潮智能科技有限公司 Video image interference detection method, system, device and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1976394A (en) * 2006-12-07 2007-06-06 浙江大学 Scene change real-time detecting method based on compression field
CN101196996A (en) * 2007-12-29 2008-06-11 北京中星微电子有限公司 Image detection method and device
CN101369346A (en) * 2007-08-13 2009-02-18 北京航空航天大学 Tracing method for video movement objective self-adapting window

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100481538B1 (en) * 2003-05-19 2005-04-07 삼성전자주식회사 Apparatus and method for detecting change in background area
US7280673B2 (en) * 2003-10-10 2007-10-09 Intellivid Corporation System and method for searching for changes in surveillance video

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1976394A (en) * 2006-12-07 2007-06-06 浙江大学 Scene change real-time detecting method based on compression field
CN101369346A (en) * 2007-08-13 2009-02-18 北京航空航天大学 Tracing method for video movement objective self-adapting window
CN101196996A (en) * 2007-12-29 2008-06-11 北京中星微电子有限公司 Image detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Wenbo et al., "Image mosaic algorithm based on wavelet packet", Laser & Infrared, Vol. 36, No. 3, 31 March 2006, pp. 238-240 *

Also Published As

Publication number Publication date
CN101599175A (en) 2009-12-09

Similar Documents

Publication Publication Date Title
CN101599175B (en) Detection method for determining alteration of shooting background and image processing device
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN107506760B (en) Traffic signal detection method and system based on GPS positioning and visual image processing
EP3358298B1 (en) Building height calculation method and apparatus, and storage medium
CA2867365C (en) Method, system and computer storage medium for face detection
CN105608455B (en) A kind of license plate sloped correcting method and device
EP2154630A1 (en) Image identification method and imaging apparatus
CN111179232A (en) Steel bar size detection system and method based on image processing
CN109413411B (en) Black screen identification method and device of monitoring line and server
EP2124194A1 (en) Method of detecting objects
CN112863194B (en) Image processing method, device, terminal and medium
TWI498830B (en) A method and system for license plate recognition under non-uniform illumination
JP3860540B2 (en) Entropy filter and region extraction method using the filter
KR101381580B1 (en) Method and system for detecting position of vehicle in image of influenced various illumination environment
CN116310889A (en) Unmanned aerial vehicle environment perception data processing method, control terminal and storage medium
JP2007219899A (en) Personal identification device, personal identification method, and personal identification program
CN106920398A (en) A kind of intelligent vehicle license plate recognition system
CN114359166A (en) Screen color detection method, device, equipment and storage medium
JP2004208209A (en) Device and method for monitoring moving body
Bachtiar et al. Parking management by means of computer vision
JP2001126192A (en) Method for judging parked or vacancy state in parking space of parking lot
CN117456371B (en) Group string hot spot detection method, device, equipment and medium
JP2000023142A (en) Picture monitoring device
JP3198258B2 (en) Parked vehicle detection method and device
CN103886554A (en) Positioning method for automatically recognizing codes in system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20171219

Address after: 519031 Guangdong city of Zhuhai province Hengqin Baohua Road No. 6, room 105, -23898 (central office)

Patentee after: Zhongxing Technology Co., Ltd.

Address before: 100083, Haidian District, Xueyuan Road, Beijing No. 35, Nanjing Ning building, 15 Floor

Patentee before: Beijing Vimicro Corporation

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 519031 -23898, 105 room 6, Baohua Road, Hengqin New District, Zhuhai, Guangdong (centralized office area)

Patentee after: Mid Star Technology Co., Ltd.

Address before: 519031 -23898, 105 room 6, Baohua Road, Hengqin New District, Zhuhai, Guangdong (centralized office area)

Patentee before: Zhongxing Technology Co., Ltd.