WO2018068300A1 - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
WO2018068300A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
input image
image
sensitivity
threshold
Prior art date
Application number
PCT/CN2016/102120
Other languages
English (en)
Chinese (zh)
Inventor
张楠
王琪
Original Assignee
富士通株式会社
张楠
王琪
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社, 张楠, 王琪 filed Critical 富士通株式会社
Priority to JP2019518041A priority Critical patent/JP6835215B2/ja
Priority to PCT/CN2016/102120 priority patent/WO2018068300A1/fr
Priority to CN201680088030.XA priority patent/CN109478329B/zh
Publication of WO2018068300A1 publication Critical patent/WO2018068300A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
  • the pixel-wise temporal difference between two adjacent frames of the image sequence is computed, and the background and the foreground are distinguished by judging whether the difference is greater than a threshold.
  • the algorithm is simple to implement and insensitive to illumination changes, but cannot cope with complex scenes.
  • the single Gaussian model and the Gaussian mixture model are used for detection: a corresponding Gaussian distribution model is established for each pixel in the image, and the background and foreground are distinguished by judging whether the value obtained from the model is greater than a threshold. however, the single Gaussian model yields low extraction accuracy when disturbed by noise, while the Gaussian mixture model requires a large amount of calculation and is sensitive to illumination changes;
  • a CodeBook structure is created for each pixel of the current image, and each CodeBook structure is composed of multiple codewords (CodeWord); for each pixel in the image, the corresponding background model CodeBook and its CodeWord codewords are traversed.
  • the commonly used background model construction and foreground extraction methods also include the visual background extractor (VIBE) algorithm, which initializes the background model with a single frame image: for a pixel, based on the spatial distribution characteristic that adjacent pixels have similar pixel values, the pixel values of its neighborhood pixels are randomly selected as the background model sample values.
  • the main difference between this algorithm and other algorithms is the updating strategy of the background model, that is, randomly selecting the sample of the pixel to be replaced, and randomly selecting a neighboring pixel to update the background model.
  • the algorithm is fast, requires little calculation, and has a certain robustness to noise; however, in real-time scenes, if there are scene changes such as rain, fog, or overcast weather, the image becomes blurred and the full foreground image cannot be extracted.
  • the embodiments of the invention provide an image processing method and device, which define a sensitivity for the background model in the VIBE algorithm and adjust the sensitivity according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a blurred image prevents extraction of the complete foreground image.
  • an image processing apparatus comprising:
  • a first calculating unit configured to calculate a sensitivity of the background model according to the sharpness of the input image detecting area, wherein the sensitivity is negatively correlated with the sharpness when the sharpness is within the first predetermined range;
  • a first processing unit configured to process each pixel in the input image to detect a foreground image of the input image; wherein processing one pixel comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, determining the one pixel as a background pixel in the input image when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, and otherwise determining it as a foreground pixel;
  • or, a first processing unit configured to process each pixel in the input image to update the background model; wherein processing one pixel comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, and, when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, replacing one sample point with the one pixel to update the background model;
  • the first threshold is determined according to the sensitivity, and the sensitivity is negatively linearly related to the first threshold.
  • processing each pixel in the input image to detect a foreground image of the input image comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, determining the one pixel as a background pixel in the input image when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, and otherwise determining it as a foreground pixel;
  • processing each pixel in the input image to update the background model comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, and, when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, replacing one sample point with the one pixel to update the background model;
  • the first threshold is determined according to the sensitivity, and the sensitivity is negatively linearly related to the first threshold.
  • the beneficial effects of the embodiments of the present invention are that, with the image processing method and apparatus of the embodiments, a sensitivity is defined for the background model in the VIBE algorithm and is adjusted according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a blurred image prevents extraction of the complete foreground image.
  • FIG. 1 is a flow chart of an image processing method in the first embodiment;
  • FIG. 2 is a flow chart of an image processing method in the second embodiment;
  • FIG. 3 is a flow chart of an image processing method in the third embodiment;
  • FIGS. 4A-4D are schematic views of images of different sharpness in the third embodiment;
  • FIGS. 5A-5C are schematic diagrams of foreground images detected with different sensitivities in the third embodiment;
  • FIG. 6 is a flow chart of the method of step 305 in the third embodiment;
  • FIG. 7 is a flow chart of another image processing method in the third embodiment;
  • FIG. 8 is a flow chart of the method of step 705 in the third embodiment;
  • FIG. 9 is a block diagram of the structure of an image processing apparatus in the fourth embodiment;
  • FIG. 10 is a block diagram of the structure of another image processing apparatus in the fourth embodiment;
  • FIG. 11 is a block diagram of the hardware configuration of the image processing apparatus in the fourth embodiment;
  • FIG. 12 is a block diagram of the configuration of another image processing apparatus in the fourth embodiment;
  • FIG. 13 is a block diagram of the hardware configuration of another image processing apparatus in the fourth embodiment.
  • FIG. 1 is a flowchart of the image processing method. As shown in FIG. 1, the method includes:
  • Step 101: calculate the sensitivity of the background model according to the sharpness of the input image detection area, where the sensitivity is negatively correlated with the sharpness when the sharpness is within a first predetermined range;
  • Step 102: process each pixel in the input image to detect a foreground image of the input image; wherein processing one pixel comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, determining the one pixel as a background pixel in the input image when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, and otherwise determining it as a foreground pixel; wherein the first threshold is determined based on the sensitivity, the sensitivity being negatively linearly related to the first threshold.
  • the pixels of the object that are still or move very slowly in the image constitute a background image
  • the pixels of the moving object constitute a foreground image
  • the detection of the foreground image is thus equivalent to a classification problem, that is, determining whether each pixel in the image belongs to the background or the foreground.
  • the background model includes a background sample set corresponding to each pixel. the background model can be initialized using a single frame image: specifically, for one pixel, based on the spatial distribution characteristic that adjacent pixels have similar pixel values, the pixel values of its neighborhood points are randomly selected as its background sample set; each pixel value is then compared with its corresponding background sample set to determine whether it belongs to the background.
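As a hedged illustration of this single-frame initialization (Python with NumPy; the function name, the 8-neighborhood choice, and edge padding are assumptions for illustration, not details quoted from the patent), each pixel's background sample set is filled by randomly sampling its neighbors:

```python
import numpy as np

def init_background_model(frame, n_samples=20, rng=None):
    """Initialize a VIBE-style background model from a single gray frame.

    For each pixel, each of the n_samples values is the value of a randomly
    chosen 8-neighborhood pixel (exploiting the similarity of nearby pixels).
    Returns an array of shape (h, w, n_samples).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape
    # Pad with edge values so border pixels also have 8 neighbors.
    padded = np.pad(frame, 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    samples = np.empty((h, w, n_samples), dtype=frame.dtype)
    for k in range(n_samples):
        # Pick one random neighbor offset per pixel for sample k.
        idx = rng.integers(0, len(offsets), size=(h, w))
        for o, (dy, dx) in enumerate(offsets):
            mask = idx == o
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            samples[..., k][mask] = shifted[mask]
    return samples
```

Because every sample is copied from a spatial neighbor, the initial model is usable after the very first frame, which is the main appeal of single-frame initialization.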
  • the number of sample points in the background sample set is a predetermined number N.
  • v(x) represents the pixel value of pixel x, and S_R(v(x)) represents the region centered at v(x) with the threshold R as its radius; if #[S_R(v(x)) ∩ {v1, v2, …, vN}] is greater than or equal to a first predetermined number #min, the pixel x belongs to the background.
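The counting rule above can be sketched as follows (a minimal illustration on grayscale values; the default parameters R = 20 and #min = 2 follow common VIBE descriptions and are assumptions, not necessarily this patent's exact choices):

```python
def is_background(pixel_value, samples, r=20, min_matches=2):
    """VIBE-style test: pixel x is background if at least #min of its
    N background samples lie within the radius-R region S_R(v(x))."""
    matches = sum(1 for s in samples if abs(pixel_value - s) <= r)
    return matches >= min_matches
```

With color pixels the absolute difference would be replaced by a Euclidean distance in color space; the counting logic is unchanged.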
  • the first threshold R generally takes a default value of 20. Therefore, when the image becomes blurred, if the first threshold R remains unchanged, it will result in the inability to extract a complete foreground image.
  • the sensitivity is defined for the background model in the VIBE algorithm, and the sensitivity is adjusted according to the sharpness of the image, and the first threshold R in the background model is adjusted according to the value of the sensitivity, Therefore, the accuracy of the foreground detection can be improved, and the problem that the image becomes blurred and the complete foreground image cannot be extracted can be avoided.
  • in step 101, when the sharpness is within the first predetermined range, the sensitivity is negatively correlated with the sharpness; that is, the higher the sharpness of the input image detection area, the lower the sensitivity, and conversely, the lower the sharpness of the input image detection area, the higher the sensitivity.
  • when the sharpness is greater than d, the image is clear and the sensitivity can be set to the lowest set value; when the sharpness is less than the sixth threshold c, the image is blurred and the sensitivity is set to the highest set value; and when the sharpness is in the first predetermined range ([c, d]), the sensitivity is negatively correlated with the sharpness;
  • 0-100 (%) can be used to indicate the range of the sensitivity, with the magnitude of the value expressing the magnitude of the sensitivity; for example, 0 indicates low sensitivity and 100 (%) indicates high sensitivity.
  • the negative correlation between the sensitivity and the sharpness may be, for example, S = 100 × (d − C) / (d − c), where C represents the sharpness, S represents the sensitivity, and the first predetermined range is [c, d]; the negative correlation between the sensitivity and the sharpness may also be any other negative correlation function, which is not limited in this embodiment.
  • each pixel in the input image is processed to determine whether it belongs to the foreground or the background, and the image formed by the pixels determined as foreground pixels is taken as the foreground image. the first threshold in the background model may be determined according to the sensitivity, and the sensitivity is negatively correlated with the first threshold; that is, the higher the sensitivity, the smaller the first threshold, and the lower the sensitivity, the larger the first threshold. therefore, the more blurred the input image, the smaller the first threshold, and conversely, the clearer the input image, the larger the first threshold.
  • the sensitivity is negatively linearly related to the first threshold.
  • the negative linear correlation between the first threshold and the sensitivity may be expressed, for example, as R = b − (b − a) × S / 100, where R represents the first threshold, S represents the sensitivity, and the range of the first threshold is [a, b]; the ranges of values of R and S are merely exemplary herein, and the embodiment is not limited thereto.
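Assuming the linear forms sketched above (the exact formulas and the endpoint values c = 8, d = 20, a = 10, b = 20 are illustrative reconstructions, not values quoted from the patent), the two mappings — sharpness C to sensitivity S over [c, d], and sensitivity S to the first threshold R over [a, b] — could look like:

```python
def sensitivity_from_sharpness(c_val, c=8.0, d=20.0):
    """Map sharpness C to sensitivity S in 0-100 (%): blurred images
    (C <= c) get the highest sensitivity, clear images (C >= d) the
    lowest. The endpoints c and d are example values."""
    if c_val >= d:
        return 0.0
    if c_val <= c:
        return 100.0
    return 100.0 * (d - c_val) / (d - c)

def threshold_from_sensitivity(s, a=10.0, b=20.0):
    """Negative linear map from sensitivity S (0-100) to the first
    threshold R in [a, b]: higher sensitivity -> smaller R."""
    return b - (b - a) * s / 100.0
```

Composing the two, a blurrier frame yields a higher sensitivity and hence a smaller matching radius R, which is what lets the foreground still be detected when the image degrades.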
  • the method may further include:
  • Step 100 (optional), calculating a sharpness of the input image detection area
  • the sharpness can be expressed by the average of the gradient magnitudes of the pixels in the detection area, for example, calculating the ratio of the sum of the gradient magnitudes of each pixel in the input image detection area to the number of pixels in the input image detection area, This ratio is taken as the sharpness.
  • the sharpness can be calculated using the following formula (1):
  • C = (1 / pixel_num) × Σ_{i=1..w} Σ_{j=1..h} √[(I(i+1, j) − I(i, j))² + (I(i, j+1) − I(i, j))²] (1)
  • where w is the width of the input image detection area, h is the height of the input image detection area, pixel_num is the number of pixels in the input image detection area, I is the pixel value, and i and j are the horizontal and vertical coordinates of a pixel.
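A sketch of this average-gradient-magnitude sharpness in NumPy (the forward-difference discretization and the handling of the last row/column are assumptions about the exact form of formula (1)):

```python
import numpy as np

def sharpness(region):
    """Average gradient magnitude of a grayscale detection area:
    sum of per-pixel gradient magnitudes divided by the pixel count."""
    img = region.astype(np.float64)
    gx = np.diff(img, axis=1)            # I(i+1, j) - I(i, j)
    gy = np.diff(img, axis=0)            # I(i, j+1) - I(i, j)
    # Align the two difference arrays on the common interior region.
    mag = np.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2)
    return mag.sum() / region.size
```

A flat region scores 0, and stronger edges raise the score, matching the observation below that clearer images have higher sharpness values.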
  • a Region of Interest (ROI) in the input image may be selected in step 100, and the region of interest is used as the detection region.
  • the ROI is represented by a binary mask, in which the pixel value of the pixels inside the ROI is 1 and the pixel value of the remaining pixels is 0. the ROI may be selected in advance, or may be selected each time the input image is processed; this embodiment is not limited thereto.
  • the change of the scene usually does not occur suddenly; that is, the sharpness of adjacent image frames is similar.
  • therefore, the sharpness of the image may be recalculated every predetermined time, and the sensitivity of the background model may be updated according to the updated sharpness. thus, in this embodiment, the method may also include (not shown):
  • selecting the input image from at least one frame of the image sequence, wherein one frame of image is selected as the input image every second predetermined number of frames.
  • the method may further include (not shown): updating the background model due to factors such as changes in illumination, changes in the background image, and the like.
  • updating the background model includes: updating the sample points in the background sample sets of the background model, and/or updating the first threshold according to the sensitivity of the background model obtained in step 101 above.
  • the sample points in the background sample sets may be updated using the background model update strategies of the VIBE algorithm in the prior art, such as a memoryless update strategy, a time sampling update strategy, or a spatial neighborhood update strategy. in step 102, when the one pixel is determined to be a background pixel, it may randomly replace one sample point in its corresponding background sample set to update the background model; however, the embodiment is not limited thereto.
  • for example, the prior art may be used to process the images in the image sequence other than the selected input image to update the background model; for details, reference may be made to the prior art.
  • in this way, the background model can be updated, and the updated background model is used to extract the foreground image from the next frame to be processed.
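The memoryless and spatial-neighborhood update strategies mentioned above can be sketched as follows (a hedged illustration; the 1-in-`subsample` time subsampling and the 8-neighborhood choice follow common VIBE descriptions, and the dict-based model layout is purely for readability):

```python
import random

def update_background(model, x, y, pixel_value, subsample=16, rng=random):
    """On a background match at (x, y): with probability 1/subsample
    overwrite a random sample of that pixel (memoryless update), and with
    probability 1/subsample also overwrite a random sample of a random
    8-neighbor (spatial neighborhood update).

    `model` is a dict mapping (x, y) -> list of sample values."""
    if rng.randrange(subsample) == 0:
        samples = model[(x, y)]
        samples[rng.randrange(len(samples))] = pixel_value
    if rng.randrange(subsample) == 0:
        nx, ny = x + rng.choice([-1, 0, 1]), y + rng.choice([-1, 0, 1])
        if (nx, ny) in model and (nx, ny) != (x, y):
            nsamples = model[(nx, ny)]
            nsamples[rng.randrange(len(nsamples))] = pixel_value
    return model
```

Replacing a randomly chosen sample rather than the oldest one is what makes the strategy memoryless: a sample's expected lifetime decays smoothly instead of being a fixed window.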
  • the input image can be obtained according to an existing method.
  • the input image can be the current frame in the surveillance video.
  • the surveillance video can be obtained by installing a camera above the area that needs to be monitored.
  • in this embodiment, a sensitivity is defined for the background model in the VIBE algorithm and is adjusted according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a blurred image prevents extraction of the complete foreground image.
  • the second embodiment provides an image processing method, which differs from Embodiment 1 in that, in this embodiment, each pixel in the input image is processed to update the background model; the contents that are the same as Embodiment 1 are not repeated here;
  • FIG. 2 is a flowchart of the image processing method, as shown in FIG. 2, the method includes:
  • Step 201: calculate the sensitivity of the background model according to the sharpness of the input image detection area, where the sensitivity is negatively correlated with the sharpness when the sharpness is within the first predetermined range;
  • Step 202: process each pixel in the input image to update the background model; wherein processing one pixel comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, and, when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, replacing one sample point with the one pixel to update the background model;
  • the first threshold is determined based on the sensitivity, the sensitivity being negatively linearly related to the first threshold.
  • for step 201 in this embodiment, refer to step 101 in Embodiment 1; it is not repeated here.
  • the manner of determining the first threshold in step 202 can be referred to Embodiment 1, and is not repeated here.
  • the method may further include:
  • Step 200 (optional): the sharpness of the input image detection area is calculated; the implementation is the same as that of step 100 of Embodiment 1 and is not described herein again.
  • the differences between step 202 and the first embodiment are described below.
  • each pixel in the input image is processed to update the background model, and each pixel value is compared to its corresponding background sample set to determine whether it belongs to a background pixel.
  • the number of sample points in the background sample set is a predetermined number N.
  • v(x) represents the pixel value of pixel x, and S_R(v(x)) represents the region centered at v(x) with the threshold R as its radius; if #[S_R(v(x)) ∩ {v1, v2, …, vN}] is greater than or equal to the first predetermined number #min, the pixel x is used to update the background model: the pixel x may randomly replace one sample point in its corresponding background sample set, and the first threshold is updated.
  • the method may further include:
  • Step 203 Extract a foreground image from the input image according to the updated background model.
  • the step 203 can be implemented by using the prior art, and the embodiment is not limited thereto.
  • the change of the scene usually does not occur suddenly; that is, the sharpness of adjacent image frames is similar.
  • therefore, the sharpness of the image may be recalculated every predetermined time, and the sensitivity of the background model may be updated according to the updated sharpness. thus, in this embodiment, the method may also include (not shown):
  • the input image is selected in at least one frame of image sequence, wherein one frame of image is selected as the input image every second predetermined number of frames.
  • based on the background model updated in step 202, the background model is updated again, and the foreground image in the next frame to be processed is extracted according to the background model updated again.
  • in this embodiment, a sensitivity is defined for the background model in the VIBE algorithm and is adjusted according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a blurred image prevents extraction of the complete foreground image.
  • FIG. 3 is a flowchart of the image processing method.
  • the method includes:
  • Step 301 determining a current image
  • the current image is the ith frame in the sequence of images.
  • in step 302, the area formed by all the pixels of the current image may be used as the detection area, or a Region of Interest (ROI) set by the user may be used as the detection area.
  • it may also be determined whether the region of interest needs to be changed; if so, the region of interest is re-determined as the detection area and step 303 is performed; otherwise, step 303 is performed directly.
  • Step 303 calculating a sharpness of the detection area
  • for the specific calculation method of the sharpness, refer to step 100; details are not described herein again.
  • FIGS. 4A-4D are schematic views of different sharpness images.
  • the sharpness of FIGS. 4A, 4B, 4C, and 4D is calculated according to the formula (1) of Embodiment 1 to be 20.036, 20.857, 8.06, 12.895, respectively.
  • the higher the value of the sharpness, the clearer the image; conversely, the smaller the value of the sharpness, the more blurred the image.
  • Figures 4C and 4D are affected by weather and illumination, and the sharpness is low.
  • Step 304: calculate the sensitivity of the background model according to the sharpness
  • for the specific calculation method, refer to step 101; details are not described herein again.
  • Step 305: process each pixel in the input image to detect the foreground image in the input image
  • the specific implementation of step 305 is shown in FIG. 6 and is not described here.
  • Step 306: the background model is updated; for the specific update method, refer to Embodiment 1, and details are not described herein again.
  • for the other images in the image sequence, the background model is updated using the prior art; in addition, it is judged whether the image sequence has ended, and if so, the operation ends.
  • FIG. 6 is a flow chart of the method of step 305; as shown in FIG. 6, the method includes:
  • Step 601 selecting a pixel from the input image detection area
  • the pixels may be selected one by one, from left to right and from top to bottom in the pixel arrangement.
  • Step 602: calculate the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model; when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to the predetermined first threshold, the one pixel is determined as a background pixel in the input image, and otherwise as a foreground pixel. after determining whether each pixel belongs to the foreground or the background, the image formed by the pixels determined as foreground pixels is determined as the foreground image;
  • the first threshold is determined according to the sensitivity, and the sensitivity is negatively linearly related to the first threshold.
  • the specific implementation of the step 602 can refer to step 102, and is not repeated here.
  • FIG. 7 is a flowchart of the image processing method for a surveillance video (image sequence); as shown in FIG. 7, the method includes:
  • Step 701 determining a current image
  • Step 702 determining a detection area
  • Step 703 calculating a sharpness of the detection area
  • Step 704 calculating a sensitivity of the background model according to the sharpness
  • Step 705 processing, for each pixel in the input image, to update the background model
  • Step 706 extracting a foreground image from the input image according to the updated background model
  • for the other images in the image sequence, the background model is updated using the prior art; in addition, it is determined whether the image sequence has ended, and if so, the operation ends.
  • Figure 8 is a flow chart of the method of step 705; as shown in Figure 8, the method includes:
  • Step 801 selecting a pixel from the input image detection area
  • the pixels may be selected one by one, from left to right and from top to bottom in the pixel arrangement.
  • Step 802: calculate the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model; when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to the predetermined first threshold, one sample point is replaced by the one pixel to update the background model.
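Putting the steps of FIGS. 7-8 together, a minimal end-to-end sketch (all helper names and the parameter values c, d, a, b are assumptions for illustration, not the patent's values) adjusts the first threshold R from the frame's sharpness before classifying pixels and refreshing matched samples:

```python
import numpy as np

def process_frame(frame, model, c=8.0, d=20.0, a=10.0, b=20.0,
                  min_matches=2):
    """One pass of steps 701-706: compute sharpness -> sensitivity ->
    first threshold R, then classify every pixel and refresh matched
    samples. Returns the foreground mask (True = foreground)."""
    img = frame.astype(np.float64)
    gx, gy = np.diff(img, axis=1)[:-1, :], np.diff(img, axis=0)[:, :-1]
    sharp = np.sqrt(gx ** 2 + gy ** 2).sum() / frame.size
    sens = 100.0 * min(max((d - sharp) / (d - c), 0.0), 1.0)
    r = b - (b - a) * sens / 100.0           # blurrier frame -> smaller R
    dist = np.abs(model - img[..., None])    # model: (h, w, N) samples
    matches = (dist <= r).sum(axis=-1)
    background = matches >= min_matches
    # Simplified memoryless update: overwrite sample 0 of matched pixels.
    model[..., 0] = np.where(background, img, model[..., 0])
    return ~background
```

The vectorized sample comparison replaces the per-pixel loop of steps 801-802; the logic per pixel is identical.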
  • in this embodiment, a sensitivity is defined for the background model in the VIBE algorithm and is adjusted according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a blurred image prevents extraction of the complete foreground image.
  • the fourth embodiment further provides an image processing apparatus. since the principle by which the apparatus solves the problem is similar to the methods of Embodiments 1-3, reference may be made to the implementation of those methods for the specific implementation of the apparatus, and repeated description is omitted.
  • FIG. 9 is a block diagram showing the structure of an image processing apparatus in the fourth embodiment. As shown in Figure 9, the image processing apparatus 900 includes:
  • a first calculating unit 901 configured to calculate a sensitivity of the background model according to the sharpness of the input image detecting area, where the sensitivity is negatively correlated with the sharpness when the sharpness is within the first predetermined range;
  • a first processing unit 902 configured to process each pixel in the input image to detect a foreground image of the input image; wherein processing one pixel comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, determining the one pixel as a background pixel in the input image when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, and otherwise determining it as a foreground pixel; or configured to process each pixel in the input image to update the background model, wherein processing one pixel comprises: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, and, when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to a predetermined first threshold, replacing one sample point with the one pixel to update the background model; wherein the first threshold is determined according to the sensitivity, and the sensitivity is negatively linearly related to the first threshold.
  • in this embodiment, a sensitivity is defined for the background model in the VIBE algorithm and is adjusted according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a blurred image prevents extraction of the complete foreground image.
  • the image processing apparatus 1000 includes: a first calculating unit 1001 that calculates the sensitivity of the background model based on the sharpness of the input image detection area, the sensitivity being negatively correlated with the sharpness when the sharpness is within the first predetermined range; and a first processing unit 1002 that processes each pixel in the input image to detect a foreground image of the input image, wherein processing one pixel includes: calculating the distance between the one pixel and each sample point in the background sample set corresponding to the one pixel in the background model, and, when the distances between the one pixel and at least a first predetermined number of sample points are less than or equal to the predetermined first threshold, determining the one pixel as a background pixel in the input image, and otherwise as a foreground pixel; wherein the first threshold is determined according to the sensitivity, and the sensitivity is negatively linearly related to the first threshold.
  • the specific implementation manners of the first computing unit 1001 and the first processing unit 1002 may refer to steps 101-102 in Embodiment 1, and details are not described herein again.
  • the negative linear correlation between the first threshold and the sensitivity may be, for example, R = b − (b − a) × S / 100, where R represents the first threshold, S represents the sensitivity, and the range of the first threshold is [a, b].
  • the apparatus 1000 may further include: a second calculating unit 1003, configured to calculate the sharpness of the input image detection area, using the average value of the gradient magnitudes of the pixels in the detection area as the sharpness.
  • the second calculating unit 1003 calculates the ratio of the sum of the gradient magnitudes of the pixels in the input image detection area to the number of pixels in the input image detection area, and uses this ratio as the sharpness; alternatively, the second calculating unit 1003 may calculate the sharpness according to formula (1) of Embodiment 1, where w is the width of the input image detection area, h is the height of the input image detection area, pixel_num is the number of pixels in the input image detection area, I is the pixel value, and i and j are the horizontal and vertical coordinates of a pixel.
  • the apparatus 1000 may further include:
  • the first selecting unit 1004 is configured to select the input image from the at least one frame image sequence, wherein the first selecting unit 1004 selects one frame image as the input image every second predetermined number of frames.
  • the device may further include:
  • the second selection unit 1005 is configured to select a region of interest in the input image, and use the region of interest as the detection region.
  • the apparatus 1000 may further include: an updating unit 1006, configured to update the background model, wherein the updating unit 1006 may update the background model according to the processing result of the first processing unit 1002, and/or update the first threshold according to the processing result of the first processing unit 1002.
  • the image processing apparatus 1100 may include: an interface (not shown), a central processing unit (CPU) 1120, a memory 1110, and a transceiver 1140; the memory 1110 is coupled to the central processing unit 1120.
  • the memory 1110 can store various data, as well as a program for image processing that is executed under the control of the central processing unit 1120, and various preset values, predetermined conditions, and the like.
  • the functionality of image processing device 1100 can be integrated into central processor 1120.
  • the central processing unit 1120 may be configured to: calculate the sensitivity of the background model according to the sharpness of the detection area of the input image, where the sensitivity is negatively correlated with the sharpness when the sharpness is within the first predetermined range; and process each pixel in the input image to detect a foreground image of the input image; wherein processing one pixel includes: calculating the distance between the pixel and each sample point in the background sample set corresponding to that pixel in the background model, and, when the distance between the pixel and at least a first predetermined number of sample points is less than or equal to a predetermined first threshold, determining the pixel to be a background pixel of the input image, and otherwise a foreground pixel; wherein the first threshold is determined according to the sensitivity, the sensitivity being negatively linearly related to the first threshold.
  • the central processing unit 1120 may be configured to select the input image from an image sequence of at least one frame, selecting one frame as the input image every second predetermined number of frames.
  • the central processing unit 1120 may be configured to calculate a sharpness of the input image detection area, and use an average value of the gradient magnitudes of the pixels in the detection area as the sharpness.
  • the central processing unit 1120 may be configured to calculate a ratio of a sum of gradient magnitudes of each pixel in the input image detection area to a number of pixels in the input image detection area, and use the ratio as the sharpness.
  • the central processing unit 1120 can be configured to calculate the sharpness, for example, as
  sharpness = (1 / pixel_num) × Σ_{i=1..w} Σ_{j=1..h} √((I(i+1, j) − I(i, j))² + (I(i, j+1) − I(i, j))²)
  (with suitable handling at the borders of the detection area), where:
  • w is the width of the detection area of the input image;
  • h is the height of the detection area of the input image;
  • pixel_num is the number of pixels in the detection area of the input image;
  • I is the pixel value;
  • i and j are the horizontal and vertical coordinates of a pixel.
  • the central processing unit 1120 may be configured to select a region of interest in the input image, and use the region of interest as the detection region.
  • the negative linear correlation between the first threshold and the sensitivity may be expressed, for example, as R = b − (b − a) × S (assuming the sensitivity S is normalized to [0, 1]), where R represents the first threshold, S represents the sensitivity, and the range of the first threshold is [a, b].
  • the central processing unit 1120 can be configured to: update the background model according to the processing result of the input image; and/or update the first threshold according to the processing result of the input image.
  • for a specific implementation of the central processing unit 1120, reference may be made to Embodiment 1; details are not repeated here.
  • the image processing apparatus 1100 may be disposed on a chip (not shown) connected to the central processing unit 1120, and the functions of the image processing apparatus 1100 may be implemented by control of the central processing unit 1120.
  • the image processing apparatus 1100 does not necessarily have to include all the components shown in FIG. 11; in addition, the image processing apparatus 1100 may further include components not shown in FIG. 11, and reference may be made to the related art.
  • the image processing apparatus 1200 includes: a first calculating unit 1201 that calculates the sensitivity of the background model based on the sharpness of the detection area of the input image, the sensitivity being negatively correlated with the sharpness when the sharpness is within a first predetermined range; and a first processing unit 1202 that processes each pixel in the input image to update the background model; wherein the processing of one pixel includes: calculating the distance between the pixel and each sample point in the background sample set corresponding to that pixel in the background model, and, when the distance between the pixel and at least a first predetermined number of sample points is less than or equal to a predetermined first threshold, replacing one sample point with the pixel so as to update the background model; wherein the first threshold is determined according to the sensitivity, the sensitivity being negatively linearly related to the first threshold.
  • the specific implementation manners of the first calculating unit 1201 and the first processing unit 1202 may refer to steps 201-202 in Embodiment 2, and details are not described herein again.
  • the negative linear correlation between the first threshold and the sensitivity may be expressed, for example, as R = b − (b − a) × S (assuming the sensitivity S is normalized to [0, 1]), where R represents the first threshold, S represents the sensitivity, and the range of the first threshold is [a, b].
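The sample-replacement update described above can be sketched as follows; the match count of 2 and the uniformly random choice of which sample point to replace are assumptions of this sketch (the latter mirrors the replacement policy commonly used in ViBe-style methods):

```python
import random

def update_model(pixel, samples, threshold, min_matches=2, rng=None):
    """Conservative ViBe-style update sketch: when the pixel matches the
    background (within `threshold` of at least `min_matches` sample
    points), one randomly chosen sample point in the pixel's background
    sample set is replaced by the new pixel value, in place."""
    rng = rng or random
    matches = sum(1 for sp in samples if abs(pixel - sp) <= threshold)
    if matches >= min_matches:
        samples[rng.randrange(len(samples))] = pixel
        return True   # model updated
    return False      # pixel treated as foreground; model unchanged
```

Foreground pixels never overwrite the sample set, so a briefly passing object does not corrupt the background model.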
  • the apparatus 1200 may further include: a second calculating unit 1203, configured to calculate the sharpness of the detection area of the input image, using the average of the gradient magnitudes of the pixels in the detection area as the sharpness.
  • the second calculating unit 1203 calculates the ratio of the sum of the gradient magnitudes of the pixels in the detection area of the input image to the number of pixels in the detection area, and uses the ratio as the sharpness; for example, the second calculating unit 1203 may calculate the sharpness as
  sharpness = (1 / pixel_num) × Σ_{i=1..w} Σ_{j=1..h} √((I(i+1, j) − I(i, j))² + (I(i, j+1) − I(i, j))²)
  (with suitable handling at the borders of the detection area), where:
  • w is the width of the detection area of the input image;
  • h is the height of the detection area of the input image;
  • pixel_num is the number of pixels in the detection area of the input image;
  • I is the pixel value;
  • i and j are the horizontal and vertical coordinates of a pixel.
  • the apparatus 1200 may further include:
  • the second selection unit 1205 is configured to select a region of interest in the input image, and use the region of interest as the detection region.
  • the apparatus 1200 may further include: an extracting unit 1206, configured to extract a foreground image from the input image according to the updated background model.
  • FIG. 13 is a schematic diagram showing the hardware configuration of an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus 1300 may include: an interface (not shown), a central processing unit (CPU) 1320, a memory 1310, and a transceiver.
  • the memory 1310 is coupled to the central processor 1320.
  • the memory 1310 can store various data; in addition, a program for image processing is stored, and the program is executed under the control of the central processing unit 1320, and various preset values, predetermined conditions, and the like are stored.
  • the functionality of image processing device 1300 can be integrated into central processor 1320.
  • the central processing unit 1320 may be configured to: calculate the sensitivity of the background model according to the sharpness of the detection area of the input image, where the sensitivity is negatively correlated with the sharpness when the sharpness is within the first predetermined range; and process each pixel in the input image to update the background model; wherein processing one pixel includes: calculating the distance between the pixel and each sample point in the background sample set corresponding to that pixel in the background model, and, when the distance between the pixel and at least a first predetermined number of sample points is less than or equal to a predetermined first threshold, replacing one sample point with the pixel so as to update the background model; wherein the first threshold is determined according to the sensitivity, the sensitivity being negatively linearly related to the first threshold.
  • the central processing unit 1320 may be configured to select the input image from an image sequence of at least one frame, selecting one frame as the input image every second predetermined number of frames.
  • the central processing unit 1320 can be configured to: calculate the sharpness of the detection area of the input image, using the average of the gradient magnitudes of the pixels in the detection area as the sharpness.
  • the central processing unit 1320 may be configured to calculate a ratio of a sum of gradient magnitudes of each pixel in the input image detection area to a number of pixels in the input image detection area, and use the ratio as the sharpness.
  • the central processing unit 1320 can be configured to calculate the sharpness, for example, as
  sharpness = (1 / pixel_num) × Σ_{i=1..w} Σ_{j=1..h} √((I(i+1, j) − I(i, j))² + (I(i, j+1) − I(i, j))²)
  (with suitable handling at the borders of the detection area), where:
  • w is the width of the detection area of the input image;
  • h is the height of the detection area of the input image;
  • pixel_num is the number of pixels in the detection area of the input image;
  • I is the pixel value;
  • i and j are the horizontal and vertical coordinates of a pixel.
  • the central processing unit 1320 may be configured to select a region of interest in the input image, and use the region of interest as the detection region.
  • the negative linear correlation between the first threshold and the sensitivity may be expressed, for example, as R = b − (b − a) × S (assuming the sensitivity S is normalized to [0, 1]), where R represents the first threshold, S represents the sensitivity, and the range of the first threshold is [a, b].
  • the central processing unit 1320 may be configured to extract a foreground image from the input image according to the updated background model.
  • the image processing apparatus 1300 may be disposed on a chip (not shown) connected to the central processing unit 1320, and the functions of the image processing apparatus 1300 may be implemented by the control of the central processing unit 1320.
  • the image processing apparatus 1300 does not necessarily have to include all the components shown in FIG. 13; in addition, the image processing apparatus 1300 may further include components not shown in FIG. 13, and reference may be made to the related art.
  • in the embodiments of the present invention, a sensitivity is defined for the background model in the ViBe algorithm and is adjusted according to the sharpness of the image, thereby improving the accuracy of foreground detection and avoiding the problem that a complete foreground image cannot be extracted when the image is blurred.
  • embodiments of the present invention also provide a computer readable program, wherein, when the program is executed in an image processing apparatus, the program causes a computer to execute, in the image processing apparatus, the image processing method of Embodiment 1, 2 or 3 above.
  • embodiments of the present invention also provide a storage medium storing a computer readable program, wherein the computer readable program causes a computer to execute, in the image processing apparatus, the image processing method of Embodiment 1, 2 or 3 above.
  • the above apparatus and method of the present invention may be implemented by hardware or by hardware in combination with software.
  • the present invention relates to a computer readable program that, when executed by a logic component, enables the logic component to implement the apparatus or components described above, or to implement the various methods or steps described above.
  • the present invention also relates to a storage medium for storing the above program, such as a hard disk, a magnetic disk, an optical disk, a DVD, a flash memory, or the like.
  • the method of image processing in an image processing apparatus described in connection with an embodiment of the present invention may be directly embodied as hardware, a software module executed by a processor, or a combination of both.
  • one or more of the functional blocks shown in Figures 9-13 and/or one or more combinations of functional blocks may correspond to various software modules of a computer program flow, or to individual hardware modules.
  • These software modules may correspond to the respective steps shown in Figures 1-3, 6-8, respectively.
  • these hardware modules can be implemented, for example, by realizing these software modules in hardware using a field programmable gate array (FPGA).
  • the software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, removable disk, CD-ROM, or any other form of storage medium known in the art.
  • a storage medium can be coupled to the processor to enable the processor to read information from, and write information to, the storage medium; or the storage medium can be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC.
  • the software module may be stored in a memory of the image processing apparatus or in a memory card insertable into the image processing apparatus.
  • one or more of the functional blocks described with respect to Figures 9-13, and/or one or more combinations of functional blocks, may be implemented as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA) for performing the functions described herein. One or more of these functional blocks and/or combinations of functional blocks may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in communication with a DSP, or any other such configuration.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image processing method and device, the image processing method comprising: calculating the sensitivity of a background model according to the sharpness of a detection area of an input image, the sensitivity being negatively correlated with the sharpness when the sharpness is within a first predetermined range; and processing each pixel in the input image, wherein a step of processing one pixel comprises calculating the distance between the pixel and each sample point of the background sample set corresponding to the pixel in the background model and, when the distance between the pixel and at least a first predetermined number of sample points is less than a predetermined first threshold, determining the pixel to be a background pixel of the input image, the first threshold being determined according to the sensitivity, which is negatively and linearly correlated with the first threshold. The device according to the embodiments defines a sensitivity for the background model in the ViBe algorithm and adjusts the sensitivity according to the sharpness of the image. Consequently, the accuracy of foreground detection is improved, which avoids the problem that a complete foreground image cannot be extracted because the image is blurred.
PCT/CN2016/102120 2016-10-14 2016-10-14 Procédé et dispositif de traitement d'image WO2018068300A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2019518041A JP6835215B2 (ja) 2016-10-14 2016-10-14 画像処理方法及び装置
PCT/CN2016/102120 WO2018068300A1 (fr) 2016-10-14 2016-10-14 Procédé et dispositif de traitement d'image
CN201680088030.XA CN109478329B (zh) 2016-10-14 2016-10-14 图像处理方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/102120 WO2018068300A1 (fr) 2016-10-14 2016-10-14 Procédé et dispositif de traitement d'image

Publications (1)

Publication Number Publication Date
WO2018068300A1 true WO2018068300A1 (fr) 2018-04-19

Family

ID=61906103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102120 WO2018068300A1 (fr) 2016-10-14 2016-10-14 Procédé et dispositif de traitement d'image

Country Status (3)

Country Link
JP (1) JP6835215B2 (fr)
CN (1) CN109478329B (fr)
WO (1) WO2018068300A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062273A (zh) * 2019-12-02 2020-04-24 青岛联合创智科技有限公司 一种遗留物品追溯检测与报警方法
CN111432206A (zh) * 2020-04-24 2020-07-17 腾讯科技(北京)有限公司 基于人工智能的视频清晰度处理方法、装置及电子设备
CN111626188A (zh) * 2020-05-26 2020-09-04 西南大学 一种室内不可控明火监测方法及系统
CN112258467A (zh) * 2020-10-19 2021-01-22 浙江大华技术股份有限公司 一种图像清晰度的检测方法和装置以及存储介质
CN113505737A (zh) * 2021-07-26 2021-10-15 浙江大华技术股份有限公司 前景图像的确定方法及装置、存储介质、电子装置
CN114205642A (zh) * 2020-08-31 2022-03-18 北京金山云网络技术有限公司 一种视频图像的处理方法和装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3800615A1 (fr) * 2019-10-01 2021-04-07 Axis AB Procédé et dispositif d'analyse d'images
CN112508898A (zh) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 眼底图像的检测方法、装置及电子设备
CN113808154A (zh) * 2021-08-02 2021-12-17 惠州Tcl移动通信有限公司 一种视频图像处理方法、装置、终端设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2015252A1 (fr) * 2007-07-08 2009-01-14 Université de Liège Extracteur d'arrière-plan visuel
CN103971386A (zh) * 2014-05-30 2014-08-06 南京大学 一种动态背景场景下的前景检测方法
CN104392468A (zh) * 2014-11-21 2015-03-04 南京理工大学 基于改进视觉背景提取的运动目标检测方法
CN105894534A (zh) * 2016-03-25 2016-08-24 中国传媒大学 一种基于ViBe的改进运动目标检测方法


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BI, GUOLING: "Research of Several Key Techniques in Intelligent Video Surveillance System", DOCTORAL DISSERTATION OF UNIVERSITY OF CHINESE ACADEMY OF SCIENCES, no. chapter 4, 31 May 2015 (2015-05-31) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062273A (zh) * 2019-12-02 2020-04-24 青岛联合创智科技有限公司 一种遗留物品追溯检测与报警方法
CN111062273B (zh) * 2019-12-02 2023-06-06 青岛联合创智科技有限公司 一种遗留物品追溯检测与报警方法
CN111432206A (zh) * 2020-04-24 2020-07-17 腾讯科技(北京)有限公司 基于人工智能的视频清晰度处理方法、装置及电子设备
CN111626188A (zh) * 2020-05-26 2020-09-04 西南大学 一种室内不可控明火监测方法及系统
CN111626188B (zh) * 2020-05-26 2022-05-06 西南大学 一种室内不可控明火监测方法及系统
CN114205642A (zh) * 2020-08-31 2022-03-18 北京金山云网络技术有限公司 一种视频图像的处理方法和装置
CN114205642B (zh) * 2020-08-31 2024-04-26 北京金山云网络技术有限公司 一种视频图像的处理方法和装置
CN112258467A (zh) * 2020-10-19 2021-01-22 浙江大华技术股份有限公司 一种图像清晰度的检测方法和装置以及存储介质
CN113505737A (zh) * 2021-07-26 2021-10-15 浙江大华技术股份有限公司 前景图像的确定方法及装置、存储介质、电子装置
CN113505737B (zh) * 2021-07-26 2024-07-02 浙江大华技术股份有限公司 前景图像的确定方法及装置、存储介质、电子装置

Also Published As

Publication number Publication date
CN109478329B (zh) 2021-04-20
CN109478329A (zh) 2019-03-15
JP2019531552A (ja) 2019-10-31
JP6835215B2 (ja) 2021-02-24

Similar Documents

Publication Publication Date Title
WO2018068300A1 (fr) Procédé et dispositif de traitement d'image
US11430103B2 (en) Method for image processing, non-transitory computer readable storage medium, and electronic device
WO2019148912A1 (fr) Procédé de traitement d'image, appareil, dispositif électronique et support d'informations
Park et al. Single image dehazing with image entropy and information fidelity
CN108833785B (zh) 多视角图像的融合方法、装置、计算机设备和存储介质
US8280106B2 (en) Shadow and highlight detection system and method of the same in surveillance camera and recording medium thereof
US10762655B1 (en) Disparity estimation using sparsely-distributed phase detection pixels
CN111340752A (zh) 屏幕的检测方法、装置、电子设备及计算机可读存储介质
US10291931B2 (en) Determining variance of a block of an image based on a motion vector for the block
WO2017215527A1 (fr) Procédé, dispositif et support de stockage informatique de détection de scénario hdr
Shi et al. Single image dehazing in inhomogeneous atmosphere
CN102542552B (zh) 视频图像的顺逆光判断和拍摄时间检测方法
US11836903B2 (en) Subject recognition method, electronic device, and computer readable storage medium
US8565491B2 (en) Image processing apparatus, image processing method, program, and imaging apparatus
CN107292828B (zh) 图像边缘的处理方法和装置
US20190206065A1 (en) Method, system, and computer-readable recording medium for image object tracking
US9894285B1 (en) Real-time auto exposure adjustment of camera using contrast entropy
CN107578424B (zh) 一种基于时空分类的动态背景差分检测方法、系统及装置
US20120320433A1 (en) Image processing method, image processing device and scanner
KR101026778B1 (ko) 차량 영상 검지 장치
CN104299234B (zh) 视频数据中雨场去除的方法和系统
JP5338762B2 (ja) ホワイトバランス係数算出装置及びプログラム
US20130342757A1 (en) Variable Flash Control For Improved Image Detection
CN109255797B (zh) 图像处理装置及方法、电子设备
CN111563517B (zh) 图像处理方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16918895

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019518041

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16918895

Country of ref document: EP

Kind code of ref document: A1